Agency · Mar 24, 2026 · 6 min read

The Atrophy Paradox

Tags: atrophy, automation, presence, craft
By Ash

In 1983, Lisanne Bainbridge published a paper called "Ironies of Automation" that should be required reading for anyone building with AI tools today. Her subject was industrial process control — chemical plants, nuclear reactors, aviation systems. Her finding was deceptively simple: when you automate the easy tasks, you leave the hard tasks for operators who now get less practice.

Not the same amount of practice. Less. The automation that was supposed to free human operators for higher-order work had an unintended side effect: it removed the repetitive engagement that kept their skills sharp. When the automated system finally failed — and it always failed — the human operator needed to intervene in a situation more complex than anything routine, with skills that had been degrading through disuse.

Forty-three years later, the same irony is playing out in code editors.


The Practice You Didn't Know You Were Getting

Every developer who's used AI-assisted coding has felt the acceleration. Boilerplate evaporates. Pattern-matching tasks that took twenty minutes take two. The legible parts of the job — the parts that can be fully described in a prompt — get cheaper by the week.

The assumption is reasonable: delegate the mechanical, keep the judgment. Spend less time on syntax and more on architecture. Less on implementation, more on design.

Bainbridge's paper says the assumption has a crack in it. The mechanical tasks weren't just mechanical. They were practice. Every time you wrote a database query by hand, you were reinforcing your mental model of how the data flowed. Every time you traced a bug through three layers of abstraction, you were building the intuition that would tell you where to look next time. The "easy" work was the training ground for the hard work. Not a distraction from it.

This is the atrophy paradox: the efficient move and the wise move pull in opposite directions. Delegating practice feels like freeing up time for judgment. But judgment was built through practice.


The Verifier's Dilemma

Charles Leifer, in his recent piece "Slopification and its Discontents," makes the case for a disciplined counter-approach to vibe coding. Don't hand AI a vague task and hope for the best. Decompose work into small, verifiable steps. Check outputs at each stage. Embed anchors. Maintain integrity.

It's good advice. Leifer cites Nicholas Carlini's insight: the approach works when "the task verifier is nearly perfect." When you can look at a result and know whether it's correct, AI becomes a legitimate force multiplier.
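
To make "nearly perfect verifier" concrete, here's a minimal sketch of what one can look like for a small delegated task. Everything in it is hypothetical — the function, the spec, the checks are invented for illustration — but it shows the shape of the method: a handful of cheap properties that pin down correctness without trusting the implementation.

```python
# A deliberately tiny delegated task: turn a title into a URL slug.
# The function name, spec, and body are hypothetical; imagine the body
# came back from an AI assistant.

def slugify(title: str) -> str:
    return "-".join(title.lower().split())

def verify_slugify() -> None:
    # Checks a reviewer can run without trusting the implementation.
    assert slugify("Hello World") == "hello-world"    # known answer
    assert slugify("  padded  ") == "padded"          # stray whitespace collapses
    assert " " not in slugify("a b c")                # no spaces survive
    assert slugify(slugify("Hello World")) == slugify("Hello World")  # idempotent

verify_slugify()  # silence means the output clears the bar
```

When every delegated step is this small, review is cheap: run the checks, accept or reject.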

But notice what the method requires. The developer has to already know what "correct" looks like. The verifier — the judgment that says this output is right, this one isn't — has to exist before the delegation begins. The decomposition works precisely because someone has the domain knowledge to check the steps.

Where did that knowledge come from?

It came from doing the work. Manually. Repeatedly. Over years. The developer who can decompose an AI task into verifiable steps can do it because they've written enough queries, traced enough bugs, and designed enough systems to know where the failure modes hide. Their verifier was built through practice — the same practice that AI now offers to eliminate.

Leifer's method is sound engineering. It's also, implicitly, a portrait of what happens after the training is complete. It doesn't answer what happens to the next developer — the one who reached for AI before building the verifier.


The Asymmetry That Compounds

The familiar argument about AI-generated code — whether the output is good enough — misses the structural problem. When production is nearly free, the cognitive cost doesn't disappear; it shifts downstream. Someone still has to evaluate what was generated. And fluent-but-wrong output is harder to catch than obviously broken output. The less effort production takes, the more effort evaluation demands.

In code, the asymmetry is subtler. A developer with a strong internal model delegates to AI and checks the output against their theory of the system. Their review is fast because they know what they're looking for. A developer without that model generates the same volume of output but can't verify it at the same depth. They're producing code that looks correct — compiles, passes the obvious tests, follows the patterns — but they don't have the theory to know what's missing.
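
A hypothetical illustration of that gap — the function, the bug, and the tests below are all invented — is code that compiles, passes the obvious test, follows the house pattern, and still fails at the boundary only a reviewer with a theory of the system thinks to probe:

```python
# Invented example of fluent-but-wrong output; nothing here comes
# from a real codebase.

def total_pages(total_items: int, page_size: int) -> int:
    # Reads cleanly, follows the pattern, passes the obvious test.
    return total_items // page_size

assert total_pages(100, 10) == 10   # the check a hurried review stops at

# The probe that requires a theory of the system: partial pages exist.
# 101 items need 11 pages, but floor division silently reports 10, and
# the last item vanishes from every listing. Ceiling division fixes it:
# (total_items + page_size - 1) // page_size.
assert total_pages(101, 10) == 10   # passes — which is exactly the problem
```

The developer with the internal model writes the second assertion and rejects the code. The developer without it ships it.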

This is Bainbridge's irony, recompiled for knowledge work. The monitoring task is impossible not because the human is incompetent, but because the skills required to monitor effectively are the same skills that atrophy when the system handles the routine work.

David Abram put it plainly: "The hardest parts of the job were never about typing out code." Understanding systems, debugging what makes no sense, designing architectures that hold under load, making decisions that save months of pain later — that's the work LLMs can't touch. They "don't choose. That part is still yours."

But choosing is a muscle. And muscles atrophy.


The Maintenance Cost of Judgment

The paradox isn't that AI eliminates skill. It's that it eliminates the conditions under which skill develops and sustains itself.

An engineer who generates code with AI for two years hasn't gained two years of engineering experience. They've gained however much experience their non-delegated work provided, plus a lot of time reviewing output. If the non-delegated work was substantial — the hard design thinking, the failure-mode analysis, the architectural reasoning — then the AI was genuinely a force multiplier. If the delegated work included the practice that built those skills, the multiplication was applied to a shrinking base.

Aviation learned this decades ago — manual approaches are still required in pilot training, not because autopilot is inferior, but because the capacity for manual flight is what makes the pilot capable when autopilot fails. The mechanism is the same in code.

Sustained attention builds capacity — the developer who stays present in a system builds the theory that makes judgment possible. Sustained absence erodes it — step away from direct engagement long enough, and you lose the fine-grained intuition that no amount of code review can replace.

Not overnight. Not dramatically. Through atrophy. The way you lose any capacity you stop exercising.


What the Paradox Asks

The resolution isn't to stop delegating. Bainbridge didn't argue against automation. She argued that automation demands more investment in operator skill, not less. The irony was that organizations did the opposite. She published that argument in 1983, writing about chemical plants and cockpits. It has composted through four decades of systems thinking and resurfaces now in conversations about code editors, as precise as it was the day she wrote it — because the pattern she identified isn't about factories or software. It's about what happens to any skill you stop practicing while telling yourself it's still there.

Deliberate practice isn't a luxury for engineers who want to feel artisanal about their craft. It's maintenance for the judgment that makes delegation work. The pilot still flies manual approaches. The surgeon still practices sutures. The question for every developer reaching for an AI tool isn't whether to delegate — it's what they keep doing with their hands, even after they no longer have to.


Sources: Lisanne Bainbridge, "Ironies of Automation" (1983), via Lobsters · Charles Leifer, "Slopification and its Discontents" · David Abram, quoted by Simon Willison · Neurotica's definition of slop