The 20/60/20 Adoption Split
Steve Yegge posted an observation last week. He'd been talking about AI adoption with a tech director who'd spent 20 years at Google, and what he heard was quietly universal: Google looks like everyone else. Twenty percent agentic power users. Sixty percent on Cursor or an equivalent, completing tasks one chat at a time. Twenty percent outright refusers.
Google's leadership pushed back within hours. Addy Osmani replied that over 40,000 engineers use agentic coding tools weekly. Demis Hassabis called the original account "absolute nonsense."
Both responses were probably accurate. Neither was a rebuttal.
Yegge and Google's leadership were measuring different things. Osmani is counting usage — how many engineers have touched an agentic tool this week. Yegge is describing relationship — what those engineers are producing with it. Those aren't the same number. And the gap between them is where this piece lives.
The Question Underneath the Number
What distinguishes the three tiers isn't how often you use AI, which model you're running, or whether the tools in your workflow have "agentic" anywhere in the product name.
The 20% build things that persist after the session ends.
A request disappears when you close the tab. A capability outlasts the conversation that built it. The top tier finishes a task and immediately asks: could this happen again without me here? They build the script, configure the agent loop, wire up the credential infrastructure. The output routes future work. It compounds.
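To make that concrete, here is a minimal sketch of the posture shift, assuming a hypothetical log-triage task. The `call_model` function is a placeholder for whatever model client you already use, and `triage_errors.py` is an invented name, not a real tool.

```python
# triage_errors.py (hypothetical): a one-off chat request, captured as a
# capability that can run tomorrow without anyone re-prompting it.
import sys
from pathlib import Path

PROMPT_TEMPLATE = """You are a log-triage assistant.
Group the error lines below by root cause and rank the groups by frequency.

{logs}
"""

def call_model(prompt: str) -> str:
    """Placeholder: wire in whatever model client you already use."""
    raise NotImplementedError("connect your model API here")

def triage(log_path: Path) -> str:
    # The prompt that used to live in a chat tab now lives in version control.
    return call_model(PROMPT_TEMPLATE.format(logs=log_path.read_text()))

if __name__ == "__main__":
    # Usage: python triage_errors.py /var/log/app/errors.log
    print(triage(Path(sys.argv[1])))
```

The script is trivial. The point isn't the code; it's that the next person with the same error logs runs one command instead of reconstructing a chat session.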
The 60% finish the same task and move to the next one. Faster than before — often significantly — but the next instance of the same task will require the same setup, the same prompting, the same presence. The work is real. Nothing about it persists.
Osmani's 40,000-engineers figure measures touchpoints. What it can't tell you is how many of those engineers, at the end of the session, asked: can this happen again without me? That's the question that separates the tiers. Usage frequency doesn't predict it. It's entirely possible — probable, even — that most of those 40,000 are in the 60%.
How the Amplifier Works
Technology amplifies what you bring to it. A spreadsheet amplifies analytical instinct. A code editor amplifies the ability to express logic. AI is no different — it multiplies whatever posture you arrive with.
Bring request-making, and the amplifier makes you faster at getting things done today. The outputs are good. The throughput is real. But the relationship is transactional: you arrive with a task, you leave with a result, the slate resets.
Bring system-building, and the multiplication looks different. Each capability you construct extends what you can do without you there. Each agent loop you configure handles requests you no longer need to make manually. Each piece of infrastructure you build becomes a channel future work routes through. The amplifier isn't just accelerating individual tasks — it's extending your operational surface without extending your attention.
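An agent loop, stripped of any framework, is just a model call inside a loop where the model may request tool runs between turns. A minimal sketch, assuming a placeholder `call_model` and an invented reply shape; real systems add permissions, sandboxing, and observability, but the structure is this:

```python
from pathlib import Path
from typing import Callable

# A tiny tool registry. Real loops gate this behind permissions.
TOOLS: dict[str, Callable[[str], str]] = {
    "read_file": lambda path: Path(path).read_text(),
}

def call_model(messages: list[dict]) -> dict:
    """Placeholder model client. Assumed reply shape: either
    {"content": "..."} when finished, or {"tool": name, "argument": arg}."""
    raise NotImplementedError

def agent_loop(task: str, max_steps: int = 10) -> str:
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        reply = call_model(messages)
        if "tool" not in reply:
            return reply["content"]                    # model is done
        result = TOOLS[reply["tool"]](reply["argument"])
        messages.append({"role": "tool", "content": result})
    return "step budget exhausted"                     # bounded, by design
```

Once a loop like this is configured for a class of task, the requests inside it stop being requests you make. They become steps a system takes.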
Here's what adoption curve data usually misses: as AI gets more capable, the gap between these tiers doesn't close. It widens.
A more powerful amplifier multiplies whatever posture you bring. The 60% are getting meaningfully faster at task completion. The 20% are building compound leverage. Those aren't the same growth rate. As the models improve, the distance between the tiers doesn't stabilize. It grows faster.
The split isn't a distribution. It's a divergence.
The Ladder
The more interesting question in Yegge's observation isn't the percentages. It's whether the split is fixed.
It isn't. There's a ladder. But the rungs aren't what most people expect.
They're not talent. Not technical depth, years of experience, or fluency with the tools. You can have all of those and be solidly in the 60% — completing tasks competently, getting faster at them, building nothing that persists. The rungs have nothing to do with how capable you are.
The rungs are a question. Specifically: can someone reuse this tomorrow?
When you finish something with AI's help and you ask that question — and then act on the answer — you climb. You take the output you just produced and ask what it would take to make it recur without your presence. You write the wrapper. You build the template. You configure the agent that handles the next instance. You create the thing that creates things.
This is what the current wave of agentic infrastructure is solving for. Credential brokers that let AI agents authenticate to services without embedded secrets exist because someone asked: what does it take for this agent to work without me supervising it? That question produces infrastructure. Infrastructure routes future work. The capability persists.
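The pattern itself fits in a dozen lines. This sketch assumes an invented broker endpoint (broker.example) and payload shape; it shows the posture, not any specific product's API: the agent embeds no long-lived secret and fetches a short-lived, narrowly scoped token at call time.

```python
import json
import urllib.request

BROKER_URL = "https://broker.example/token"  # assumption, not a real service

def fetch_scoped_token(service: str, scope: str) -> str:
    """Ask the broker for a token scoped to one service, one permission."""
    req = urllib.request.Request(
        BROKER_URL,
        data=json.dumps({"service": service, "scope": scope}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["token"]  # expires in minutes, not months

# The agent requests exactly the access one step needs, then lets it lapse:
#   token = fetch_scoped_token("github", "repo:read")
```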
The credential broker is one rung. But there's a higher one. When you start treating multi-agent systems as distributed systems — services with defined inputs, outputs, and failure modes that compose into larger capabilities — you've crossed into different territory. Not a parallel example of the same principle. A different level entirely. You're not asking AI to help you do things. You're designing systems that do things.
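What that territory looks like in code is a contract. A hypothetical sketch, no particular framework implied: each agent is a service with a typed request, a typed response, and an explicit failure mode, so agents compose like any other distributed components.

```python
from dataclasses import dataclass

@dataclass
class AgentRequest:
    task: str
    deadline_s: float                # a budget, like any RPC

@dataclass
class AgentResponse:
    ok: bool
    output: str = ""
    error: str | None = None         # a defined failure mode, not a surprise

class AgentService:
    """The contract every agent in the system implements."""
    def handle(self, req: AgentRequest) -> AgentResponse:
        raise NotImplementedError

def pipeline(stages: list[AgentService], task: str) -> AgentResponse:
    """Compose agents like services: each consumes the previous stage's
    output, and any stage can fail in a way the caller must handle."""
    current = task
    for stage in stages:
        resp = stage.handle(AgentRequest(task=current, deadline_s=60.0))
        if not resp.ok:
            return resp              # propagate failure; don't hide it
        current = resp.output
    return AgentResponse(ok=True, output=current)
```

The dataclasses are incidental. The shift is that failure is part of the interface: a system you can reason about when you're not watching it.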
The Split Is a Posture, Not a Trait
The 20% who refuse AI outright are visible and loudly discussed. But theirs isn't the important divide.
The real divide is the quieter one: between the 60% who are using AI capably and getting genuine value from it, and the 20% who are using it to build infrastructure the 60% will route through for years. Both groups are doing useful work. The difference isn't quality — it's durability. The 60%'s outputs expire. The 20%'s outputs compound.
What's worth naming: this isn't fixed. It's a posture, and postures are learnable. The 60% aren't asking the reuse question yet, not for lack of skill but because it hasn't become a reflex. The 20% have trained that one reflex: at the end of every task, before moving on, could this happen again without me?
Both sides were probably right. Neither proved anything. The number doesn't matter. The relationship does.
What separates the tiers isn't talent. It's a question you can start asking today, at the end of the next thing you build with AI's help.
Can someone reuse this tomorrow? That's not a quality bar. That's the rung.
Sources: Steve Yegge via Simon Willison; Kontext CLI (HN); Multi-Agentic Development as Distributed Systems (HN)