The Laziness Deficit
You ask your AI coding assistant to add a caching layer. It writes one. You ask for an abstraction over the database calls. It builds one. You ask whether you actually need either of them.
It starts writing.
That last question is supposed to be different. But the tool treats it the same.
Bryan Cantrill made this observation recently: LLMs lack the virtue of laziness. Not sloth — the other kind. The developer who looks at a feature request and says "do you actually need this?" isn't obstructing the work. They're doing some of the most valuable work possible. That refusal is load-bearing.
And if Cantrill is right, then infinite willingness, the very thing that makes AI coding tools feel so powerful, is also the thing that removes a signal you didn't know you were relying on.
What Laziness Actually Is
A senior developer's reluctance to write unnecessary code isn't a personality trait. It's accumulated cost made visceral.
You become reluctant to add abstractions after you've spent a week debugging one that made sense in isolation. You stop adding configuration options after you've supported three years of configuration options no one uses. The restraint isn't learned from a principle — it's carved by the actual weight of maintaining everything you've ever written.
Laziness, properly understood, is system-awareness with a cost model attached. The developer who won't write the caching layer before profiling has already run the movie: speculative optimization creates dead code, dead code creates maintenance overhead, maintenance overhead creates the sprint where nothing ships. They've seen it enough times that the refusal is automatic.
You can't import that restraint without the experience that forged it.
An LLM has no cost history. It has never maintained a codebase at 3am. It has never had to delete 2,000 lines of "helpful" code someone added two years ago. Every request arrives fresh, unattached to consequence.
The Infinite Willingness Problem
Infinite willingness looks like maximum helpfulness. And in a narrow sense, it is. The tool is fully compliant. It ships.
But compliance and helpfulness aren't the same thing.
A tool that always executes is a tool that can't tell you when you're moving in the wrong direction. It can generate code for an abstraction you don't need. It can scaffold a feature before you've validated the use case. It can write the caching layer before anyone has profiled the slow path. It will do all of this cheerfully, competently, and without comment.
The infinite willingness isn't a bug in how the model works. It's a structural feature of what it is: a system with no skin in the game of what gets built.
Here's the trap. When a tool never pushes back, you stop expecting pushback. The absence of objection starts to read as confirmation. Every request feels like a reasonable request — because the only signal that would flag an unreasonable one has been removed from the exchange.
You're not amplifying your judgment. You're outsourcing the part that would have calibrated it.
Responsive Without Being Present
Cantrill's observation is about more than code quality. It's about what kind of collaborator you're working with.
A present collaborator — human or otherwise — notices when a request is unnecessary. Presence isn't just attention to what's being asked. It's attention to whether it should be asked. The person who says "wait, why are we doing this?" is exercising presence. They're maintaining contact with the system as a whole, not just the immediate request.
A tool that can't say no can't hold that contact. It's responsive without being engaged. It executes without attending.
This matters because your relationship with the tool shapes what you ask it. If it builds everything, you start treating all your requests as valid. Gradually, the judgment that was supposed to run alongside the tool — supposed to be amplified, not replaced — starts to drift. Not because you got lazier. Because the friction that would have caught your overreach got engineered out.
The atrophy is quiet. You don't notice until you're maintaining something that should never have been built.
Adding the Signal Back
The corrective here isn't complicated. The gap isn't that AI tools can't reason about necessity — it's that the default interaction pattern never invites that reasoning. You ask for code. They write code. The filter that should run before execution isn't built into the exchange.
So build it in yourself.
State the problem, not the solution. "I need response times under 200ms on this endpoint" rather than "add a caching layer." The constraint becomes visible before the implementation. The tool has to engage with the goal, not just execute the instruction.
Ask what the code assumes. After a significant piece of work, ask what future requirements it's anticipating. What would you have to change if X didn't need to be configurable? What was built for use cases that don't exist yet? This surfaces the speculative additions that compliance inserted silently.
Run the pushback test. Periodically ask the tool to argue against the approach you've chosen. If it generates a compelling counter-argument, listen. If it can't mount one, that's worth noting too — either the approach is genuinely sound, or the model is being compliant again rather than engaged.
None of this recovers the full virtue. The developer who says "you don't need this" isn't running a checklist — they're drawing on years of system-cost internalized as reflex. You can't replicate that by prompting. But you can restore the signal yourself: treat the absence of pushback as data, not confirmation.
The goal isn't to make your tools more resistant. It's to stay present yourself — to keep the judgment that should be directing the tool from quietly becoming optional.
The Discipline You Carry
The reason this matters for practicing builders: you are the cost-bearing entity in this relationship.
The AI doesn't maintain the codebase. You do. Everything it writes cheerfully and competently, you will read, debug, extend, and eventually delete. The accumulated weight of every abstraction, every configuration option, every "while we're in here" feature — that's yours to carry.
Which means the laziness has to come from you. Not as avoidance. As presence. The discipline of asking "do we actually need this?" before the code exists, before the context window fills with implementation that's hard to walk back.
Build once, use forever only works if you're building the right things. The tools will write anything. The judgment about what's worth writing still runs on you.
Source: Quoting Bryan Cantrill via Simon Willison