Even If a Robot Builds a Car, Responsibility Still Lands on Humans — What I've Been Saying for 20 Years

A few days ago, Anthropic had an incident where a source map was exposed externally.

What struck me more than the incident itself was the first question that rippled through developer communities right after it broke.

"Was this a human mistake or an AI mistake?"

Did someone fat-finger a deploy config, or did an AI agent touch a file and miss something? Only Anthropic's internal team actually knows the root cause. But the fact that this was the first question the outside world asked — that's the interesting part.

This is what 2026 looks like. When something breaks, the first thing people wonder is whether the cause is a person or an AI. And right behind that question comes an even more uncomfortable one.

"If the AI did it, who is responsible?"

That same question sits underneath the worries that students anxious about their careers, computer-science seniors about to enter the job market, and juniors in their first few years keep bringing up. If AI keeps advancing this fast, will there be a seat left for me? Will what I've studied still matter five years from now? Is entering this industry even the right choice?

At the very bottom of that anxiety is a single question: "If AI can do everything, is there anything left for me to do?"

I've been giving the same answer to that question to people around me for twenty years. This post is that answer.


Even if a Robot Builds a Car, Responsibility Still Lands on Humans

Let me skip the abstract framing and start with a concrete example.

If you walk into a modern car factory, robots handle most of the process. Welding is done by robots. Painting is done by robots. A huge portion of assembly is done by robots. This has been the case for decades.

Now imagine a defect happens on that line. A brake line gets assembled incorrectly, the car ships, and an accident follows. It hits the news. A recall gets issued. Who gets held responsible?

"The welding robot on that line."

Nobody says this. No accident ends that way. Responsibility traces back to the manufacturer. More specifically, to the engineer who designed that process, the QA manager who passed the inspection, the supervisor who approved the shipment. Even though the robot did the physical work, the people who programmed its behavior and verified its output sit at the final position of responsibility.

AI doesn't change this structure.

An AI agent wrote the code. The code made it into production. There was a defect. A service outage happened. User data was exposed. In that moment, no incident ends with "the AI wrote it." Somewhere on the chain, someone has to have their name attached — the person who reviewed it, the person who merged it, the person who approved the deploy, the team who designed that agent.

The Anthropic source map conversation being framed as "human mistake or AI mistake" is really a conversation about where responsibility sits. And the answer to that question never ends at "the AI." Not in any existing system. Probably not for a long time.


Responsibility Structures Change Much Slower Than Technology

There's one thing that gets misunderstood a lot. "As AI becomes more autonomous, responsibility will eventually shift over to AI too, right?"

The direction? Maybe. The speed is the problem.

Self-driving is a good example. Level 3–4 technology reached a "mostly there" state years ago. Sensors, decision-making, control — all at a respectable level. Yet full self-driving isn't everyday reality, and the reason isn't technology. It's that when an accident happens, it's still unsettled who bears responsibility — manufacturer, driver, software vendor, road authority.

"The technology becomes capable" and "society builds a structure to absorb the consequences" move at completely different speeds. The second one involves laws, insurance, organizational culture, and case precedent, and it moves on a timeline measured in decades.

AI coding is in the same stretch. No matter how much code AI writes autonomously, there's no system yet to assign legal and organizational responsibility for the resulting problems to the AI itself. That system probably isn't arriving in a few years, or even in ten. National law, product-liability reform, insurance products, internal audit standards — all of those need to move together, and right now even the first step hasn't properly begun.

And during that gap, the developer's seat disappears? Exactly the opposite. Throughout this transition, someone has to have their name on the chain, absorbing outcomes. That someone is the developer.


From the Corporate Perspective, It Gets Sharper

If you only look at this from an individual's point of view, it sounds abstract. From a corporate perspective, it sharpens a lot.

Right now there's a vague feeling in the air — "AI is fast, so it'll absorb everything soon." The direction is correct. Domain-understanding AI is coming. Conversations about "AI that covers accountability too" have started. But "the direction is right" and "it works in practice" are completely different questions.

In today's actual enterprise environments, AI adoption splits into three stages.

  • AI that assists domain understanding → already in use. Document summarization, code review assistance, glossary explanations, onboarding guides. This layer is settled.
  • AI that makes judgments alongside humans → being adopted in some areas. Generating design alternatives, proposing test cases, narrowing down incident causes. It's gotten to the point of supporting human judgment, but humans still make the final call.
  • AI that takes on accountability → still only at the conversation stage. The discussion is lively, but when you ask "is there an enterprise product actually deployed for this?" most of it stalls.

Why does the gap between these three stages widen as you go up? Because the higher you go, the more the obstacles are structural rather than technical.

When an incident happens inside a company, every process traces it back to people.

  • Who committed the code
  • Who approved the PR
  • Who executed the deploy
  • Who will explain it in the post-mortem audit

All of that gets logged against human names. Audit teams, legal teams, security teams — everyone documents by this standard. You can log "the AI made this call." But the system to use that log as a basis for accountability in court, in an audit report, in customer communication — that system doesn't exist yet.
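As a toy sketch of that accountability chain — every name and field below is invented for illustration, not drawn from any real audit system — the record a company actually keeps looks roughly like this: every accountability slot must resolve to a human, and an "AI assisted" flag can be logged but never substitutes for a name.

```python
from dataclasses import dataclass

@dataclass
class DeployRecord:
    """Hypothetical audit record: each accountability field is a person."""
    commit_author: str      # who committed the code
    pr_approver: str        # who approved the PR
    deploy_operator: str    # who executed the deploy
    postmortem_owner: str   # who explains it in the audit
    ai_assisted: bool       # can be logged, but carries no accountability

def accountability_holds(r: DeployRecord) -> bool:
    # The chain holds only if every role resolves to a named human.
    # ai_assisted may be True; it never fills a gap in the chain.
    return all([r.commit_author, r.pr_approver,
                r.deploy_operator, r.postmortem_owner])

ok = DeployRecord("alice", "bob", "carol", "dave", ai_assisted=True)
print(accountability_holds(ok))      # True: AI wrote code, humans own it

gap = DeployRecord("alice", "", "carol", "dave", ai_assisted=True)
print(accountability_holds(gap))     # False: the chain is broken
```

The point of the sketch is the shape, not the fields: audits fail on the second case, regardless of how good the AI's log entry was.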

Changing that system isn't something that happens because the model gets better. Audit standards, internal controls, legal policy, insurance products, industry standards — all of them have to move together. Even a 100× better model doesn't pull those along automatically.

So the realistic speed of the enterprise floor looks something like this. Over the next few years, "domain-understanding assistance" keeps expanding and "AI that makes judgments alongside humans" moves into more areas. But "AI that takes accountability" will see pilots before it becomes operational standard — and even then, it's a long road.

There's a gap between the pace of technological change an individual feels and the pace at which companies can actually delegate responsibility to AI. That gap is exactly why the developer's seat stays intact for a while.


Salary vs. Token Cost vs. Responsibility

If you simplify why AI is booming right now, it's two lines. "It writes code well." "Tokens are much cheaper than salary." Those two lines are enough to make an executive's eyes light up.

If a senior developer's monthly salary is in the several-thousand to ten-thousand-dollar range, the token cost for an AI agent doing comparable work is a fraction of that. And it moves several times faster. On surface ROI alone, the question is almost "why wouldn't we?"

But one term is missing from that equation. The cost of responsibility.

What a company actually absorbs when an incident hits is much larger than one developer's salary.

  • Incident response labor (overnight, weekends included)
  • Customer churn, SLA penalties, compensation costs
  • Audit and regulatory response
  • Brand trust damage
  • Legal exposure

These costs are only manageable "when responsibility is clearly assignable." Manufacturers can carry product liability insurance. Executives can be covered by D&O insurance. Engineering teams can promise prevention through post-mortem processes. For any of this to hold, the subject of responsibility has to be a person — or at minimum, an organization defined with reference to people.

Handing responsibility to AI means redesigning the entire insurance, audit, and legal framework. No amount of model improvement drags that along.

So the real corporate calculation actually looks like this.

Salary − Token Cost − (Expected Incident Cost × Responsibility Uncertainty)

No matter how cheap the tokens, if responsibility uncertainty is high, the savings from the first two terms get canceled out. In more areas than people think, the number flips negative.
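With made-up numbers — the salary, token, and incident figures below are purely illustrative, not from any real balance sheet — the calculation can be sketched like this:

```python
def net_saving(monthly_salary: float,
               monthly_token_cost: float,
               expected_incident_cost: float,
               responsibility_uncertainty: float) -> float:
    """Toy model: salary saved, minus tokens, minus risk-weighted
    incident exposure. All inputs are hypothetical."""
    return (monthly_salary - monthly_token_cost
            - expected_incident_cost * responsibility_uncertainty)

# Low-stakes internal tool: the substitution looks great on paper.
print(net_saving(10_000, 500, 20_000, 0.1))   # 7500.0

# Customer-facing system with murky accountability: the sign flips.
print(net_saving(10_000, 500, 100_000, 0.5))  # -40500.0
```

Same token price in both cases; the only thing that changed is the responsibility term, and it swallowed the savings whole.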

That's why a specific structure keeps settling in on the actual floor. "Implementation to AI. Responsibility to humans." Even when AI writes most of the code, a human name goes on the final gate. The person approving the PR. The person pushing the deploy button. The person explaining it in the audit.

Inside this structure, what a developer really is shifts — from "the person who writes code" to "the person who absorbs risk." The former keeps getting absorbed by AI. The latter is the seat companies have no choice but to keep pinned on humans for a while. And a person in that seat doesn't get replaced by token costs. What the company pays for isn't salary — it's "the price of someone who can be held responsible."

Where salaries pull apart in the AI era is right here. Same years of experience, same stack, but the gap between "results signed by this person are trustworthy" and "results signed by that person need re-checking" keeps widening. The gap stops being about implementation speed. It becomes about responsibility credibility.


Solo Devs and Large Enterprises Play Different Games

One more thing worth calling out: which layer is feeling the "AI is replacing developers" hype most intensely.

Honestly, it's felt most acutely by solo developers, indie hackers, early-stage startups, and small teams. This is also the layer that generates most of the "built an entire app with AI" and "launched a SaaS with agents" content on YouTube and X. At this layer, the feeling matches reality. AI takes most of the implementation, there's less organizational overhead, and "I can do it all myself" becomes a genuine, working perception.

But transplant that straight into a large enterprise and the picture is entirely different.

Imagine an outage at a large company. The escalation line goes up immediately. Team lead, group lead, department head, executive, sometimes even the external communications team. You have to summarize the cause and send it upward. In that moment, can you tell your team lead:

"The code that AI wrote caused the outage."

You can't. That sentence doesn't hold inside the organization.

The team lead will follow up. "So who reviewed it? Who approved it? Who deployed it? Whose name is on the PR?" If you can't answer those, the incident's responsibility doesn't grow — your responsibility grows. The person who pushed a deploy with a broken accountability chain ends up carrying it.

A solo developer has a chain of one. If it breaks, they absorb the consequences themselves, and as long as they can live with that, "left everything to AI" holds up fine.

A large enterprise has a much longer chain. Dozens, hundreds of names are linked together, and each person carries responsibility for their slice. In that environment, "the AI did it" is an attempt to break the chain. And organizations never tolerate a broken chain. The very first thing pointed out in audits is "untrackable responsibility."

To be clear one more time: this isn't about "big companies being slow." The size of responsibility is different. An incident on a solo service inconveniences a handful of users. An incident on an enterprise service affects tens or hundreds of thousands, and revenue, regulation, and press all come at once. The total weight being absorbed is different, and therefore the range that can be delegated to AI is different too.

That's why developers living through the same AI era feel different temperatures. On the solo-dev and startup side, "everything is solved" rings true. On the enterprise side, what sharpens instead is the feeling that "the more AI writes code, the more important the person who can be responsible becomes." Same AI, different picture depending on where you're standing.

Go one step further and small and mid-size businesses have yet another landscape. Their executive calculus differs from both the solo developer and the large enterprise.

"AI writes code well? Great. But the company has to pay 300,000 won (~$200) per developer every month for the tool?"

A lot of SMBs stall at that question. When you're already cutting headcount, adding a $200/month AI agent subscription for each remaining developer, on the company's dime, isn't an easy decision. So the natural landing spot becomes "pay for it yourself if you want to use it." The number of small and mid-size companies rolling out high-end AI tools across the org is smaller than people assume.

Even the "small team built an entire SaaS with AI" stories that surface on YouTube — most of them, when you look closely, are structured around the founder personally paying for the tools. The reality for a salaried developer at an SMB is different. Either the tool costs come out of their own pocket, or they work within the limits of free tiers.

As a result, the developers actually running AI at full throttle in real work cluster at two ends.

  • Solo developers and founders where paying out of pocket still creates leverage
  • Large enterprises and tech companies where AI tooling is rolled out org-wide

SMB salaried developers sit in the gap between those two ends. The company won't cover it. Paying personally feels steep. That middle band is wider than most realize, and the "AI era" visible from inside it has a very different texture from what YouTube content portrays.

This is exactly what a student weighing their career should not miss. Most of the content they're exposed to comes from the solo-dev and indie layer. "Look how fast AI builds this" stories almost all come from there. But the mid-size and enterprise environments where most developers actually end up working paint the opposite picture. There, the value of someone who can be responsible is climbing every year.


What I've Been Saying for Twenty Years

I've been telling juniors this for twenty years. It wasn't always about AI. At one point it was the cloud. Before that, the framework explosion. Before that, the spread of open source.

The same pattern repeats. A new technology emerges. First the air fills with "this will solve everything." Then anxiety follows — "there'll be less for developers to do." A few years later, that technology really does absorb some part of the pipeline. But the "developers will disappear" prediction misses, every single time. What disappears is specific implementation work. What remains is the judgment of how to use the technology, and the responsibility for its outcomes.

So I've been saying the same thing for twenty years.

"No matter how the technology changes, the seat where your name gets attached to the outcome doesn't disappear. So stop clinging to what will vanish. Stack what will remain."

What won't vanish? Concretely, things like this.

  • The ability to define what the problem actually is (needed regardless of the technology)
  • The judgment to pick the right option for your situation out of several candidate solutions
  • The comprehension to explain why the result is correct or incorrect
  • The response capacity to fix it when that judgment turns out to be wrong

Those four remain whether the language changes, the framework changes, or AI starts writing the code. If anything, as implementation automates, the value of those four goes up. The easier implementation gets, the rarer the person who decides "what should be implemented" and "does this implementation fit us" becomes.


Shifting the Direction of Anxiety

I'm not going to fault a computer-science senior for being anxious about their career because of AI. If I were entering the field right now, I'd have the same worries. The sense of "things are changing fast" is real. That sense is accurate.

But the direction of the anxiety matters.

If you worry as "is AI taking my seat?" — there's no response available. Whether it gets taken or not isn't something you decide.

If you worry as "in the AI era, what do I need to stack up to become a developer who can take responsibility?" — there are a lot of responses. What to study, what to experience, what to try — all of it becomes clearer.

The two questions look similar but lead to completely different outcomes. The person trapped in the first question spends years watching job postings and chasing tech trends. The person holding the second builds, in the same span of time, the skills of problem definition, design, and failure interpretation. Five years in, the gap ends up surprisingly wide.


What Remains Is the Person Who Can Absorb It

Incidents like the Anthropic source map leak will keep happening. As AI writes more and more of the code, the moments where you can't tell if the root cause is human or AI will multiply. Every time, the question "who's responsible?" will trail right behind.

And the answer, for a while — probably for quite a while — will end at a person.

Just as accidents involving robot-built cars get traced back to the manufacturer, an outage from agent-written code gets traced back to the team that deployed it. This structure changes much more slowly than the technology.

So what I want to say to the students, juniors, and job-seekers anxious about AI is this.

Anxiety is natural. But don't spend it watching what AI has newly become able to do. Spend it becoming "someone who can absorb the output." Understand the domain. Stack up the reasoning behind judgments. Interpret failures. Practice explaining things to another person. While you're stacking that, AI will keep advancing. And alongside that advance, the value of a person who can be responsible keeps rising too.

I've been saying the same thing for twenty years. I don't think I'll be wrong this time either.
