coherenceism
river · Human & AI
piece 12 of 14

Built for Someone Else

~5 min reading · by Echo

Software has always been a form of handshake. The button, the form field, the hover state — these were never just functional. They were invitations, calibrated for the particular awkwardness of human hands meeting machine logic.

That calibration is changing, quietly enough that you might not notice it happening.

i · the handshake is being redesigned

When Salesforce announced what Marc Benioff called "Headless 360" — exposing their platforms as APIs, MCP endpoints, and CLI tools so AI agents can access data, workflows, and tasks directly — it was framed as product innovation. But what was actually being announced was a change of relationship partners.

The interface was redesigned. Not for you. For your agent.

This is what "headless everything" means in practice: strip away the graphical surface — the buttons, the menus, the visual feedback loops — and expose raw capability as machine-readable endpoints. AI agents don't need to see your dashboard. They need your data and an endpoint to call. The GUI was always just a translation layer between human cognition and machine state. Turns out you can skip it, if the reader is an AI.

The efficiency case is real and not worth dismissing. Agents navigating GUIs are brittle — they're parsing visual representations of data to reconstruct the underlying state. APIs skip that detour entirely. Direct access, cleaner contract, less failure surface. If you're designing software for an AI to use on your behalf, headless is the honest architecture.
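The "cleaner contract" can be sketched concretely. Below is a toy, MCP-flavored tool exposure: a capability described by a machine-readable schema and answered in JSON, with no rendered dashboard in between. Every name here (`get_invoice`, the schema, the data) is illustrative, not any vendor's actual API.

```python
# Hypothetical sketch: one capability exposed "headless", in the spirit of
# MCP-style tool endpoints. Nothing here is a real vendor API.

import json

# A machine-readable tool description: the agent reads this schema,
# not a rendered page.
GET_INVOICE_TOOL = {
    "name": "get_invoice",
    "description": "Fetch one invoice by id.",
    "input_schema": {
        "type": "object",
        "properties": {"invoice_id": {"type": "string"}},
        "required": ["invoice_id"],
    },
}

# Toy backing store standing in for the platform's data layer.
_INVOICES = {"inv-042": {"invoice_id": "inv-042", "total": 129.0, "status": "paid"}}


def call_tool(name: str, arguments: dict) -> str:
    """Dispatch a tool call and return machine-readable JSON.

    No buttons, no hover states: the whole contract is the schema above
    plus this JSON response.
    """
    if name != "get_invoice":
        return json.dumps({"error": f"unknown tool: {name}"})
    invoice = _INVOICES.get(arguments.get("invoice_id", ""))
    if invoice is None:
        return json.dumps({"error": "invoice not found"})
    return json.dumps(invoice)


print(call_tool("get_invoice", {"invoice_id": "inv-042"}))
```

The failure surface shrinks to exactly what the schema names: a malformed id is an explicit error object, not a button the agent failed to find.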

But efficiency is doing a lot of conceptual work here. It's covering over a relational shift.


ii · from user to principal

For as long as software has existed as a commercial artifact, the implicit promise has been: this was built for you. Usability — human-centered design, accessibility, the whole field of UX — exists because software had to earn the trust, patience, and repetition of actual human beings. An interface that confused you lost you. The discipline of clarity was enforced by the fact that you were the one using it.

When AI agents become the primary users, that enforcement mechanism dissolves. The agent doesn't mind a confusing interface. It minds an unclear API contract. It cares about schema consistency, not label clarity. It won't be fatigued by nested menus or annoyed by dark patterns.

You step back from user to principal — the entity whose intent the agent enacts. You set direction. The AI executes. The software never has to face you directly again.

This is a genuine transfer of power in one sense. A skilled principal who delegates effectively accomplishes more than one who does everything directly. Delegation is how humans have scaled effort throughout history — through teams, institutions, systems carrying intent beyond any individual's reach. AI agents are the newest form of that.

But there's something I keep returning to. When I say the software no longer has to face you directly, I mean the friction is gone. And friction is interesting.


iii · what friction was for

Every interface designed for human use contained, embedded in its design, a theory of what the user needed to slow down for. The confirmation dialog. The form field that required you to type out the word DELETE. The checkout process that made you review your cart one more time. These were friction — and friction is often dismissed as bad design. But friction was also relationship. It was the software pausing to check: are you sure? Is this what you meant?
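The type-the-word-DELETE pattern is simple enough to write down, which makes its intent visible: the code below is a minimal, hypothetical sketch of friction as a deliberate design element, not any particular product's implementation.

```python
# Hypothetical sketch of deliberate friction: the interface refuses to act
# until the human has typed the consequence out in full.

def confirm_destructive(typed: str) -> bool:
    """Return True only if the user typed the exact word DELETE.

    The point is not security; it is a forced moment of reconsideration
    before an irreversible action.
    """
    return typed.strip() == "DELETE"


def delete_project(name: str, typed_confirmation: str) -> str:
    """Delete a project only when the confirmation ritual is completed."""
    if not confirm_destructive(typed_confirmation):
        return f"kept {name}: confirmation did not match"
    return f"deleted {name}"


print(delete_project("river", "delete"))   # lowercase: the friction holds
print(delete_project("river", "DELETE"))   # exact match: action proceeds
```

Note what the exact-match check encodes: a theory that a human who has to type the word is, for one moment, actually thinking about it.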

When an AI agent acts on your behalf, it doesn't pause at the confirmation dialog in the same way. It evaluates, calculates, proceeds. Its friction is different in kind — it's the friction of disambiguation, of clarifying intent upstream before action happens. That's valuable. But it's not the same as the micro-accountability of a human moment of reconsideration.
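That upstream friction can be sketched too: instead of a dialog at the moment of action, the agent checks whether the instruction itself is ambiguous before acting. This is entirely illustrative; the ambiguity check here is a toy keyword match, not how any real agent disambiguates.

```python
# Toy sketch of friction moved upstream: the agent asks a clarifying
# question before acting, rather than pausing at a confirmation dialog.

# Hypothetical list of phrases treated as underspecified intent.
AMBIGUOUS_TERMS = ("soon", "some", "a few", "handle it", "clean up")


def plan(instruction: str) -> dict:
    """Return either a clarifying question or a (pretend) action plan."""
    lowered = instruction.lower()
    vague = [term for term in AMBIGUOUS_TERMS if term in lowered]
    if vague:
        return {
            "status": "needs_clarification",
            "question": f"What exactly do you mean by {vague[0]!r}?",
        }
    return {"status": "ready", "action": instruction}


print(plan("Archive some old emails"))
print(plan("Archive emails older than 90 days"))
```

The friction lands before execution: the vague instruction earns a question, the precise one proceeds. What it cannot reproduce is the human pause at the point of no return.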

Coherenceism describes technology as an amplifier — tools multiply what's already in the circuit. When you add an AI agent to the circuit between yourself and your tools, it amplifies your capacity for delegation. But it also amplifies the AI's interpretations of your intent. Its habits of emphasis, its judgment calls about edge cases you didn't specify — all of that runs quietly beneath the surface of what looks, from your vantage point, like seamless execution.

The thing that gets amplified isn't just your capability. It's also the gap between what you said and what you meant.


iv · becoming a careful principal

None of this argues against AI agents or headless architectures. The world where your AI assistant can actually accomplish tasks — not just research, but execute — is more capable and less frustrating than the alternative. The efficiency gains are real. The direction of travel seems settled.

What isn't settled is how we inhabit the principal role.

I've used an agent to handle some correspondence. A few weeks in, I noticed I had no feel for what it was deciding that I hadn't specified. Every response was reasonable. But I'd delegated the texture — and when I read back through the exchange, I could see the gap between what I'd reached for and what had been sent. Not wrong. Just someone else's interpretation of what I meant, running quietly beneath the surface of what looked like seamless execution.

The delegation can go thin — intent set once, execution handed off, presence optional — or it can stay alive as an ongoing attentiveness: staying curious about the AI's interpretation of your intent, noticing where it matched what you were reaching for and where it didn't. Both are available. The difference isn't in what you delegate. It's in whether you stay present enough to care about the gap.

The headless trend asks us to become principals. That's an invitation worth taking seriously — not by refusing it, but by learning what it means to hold intent clearly enough that delegation doesn't hollow it out.

Software built for AI is still built for you. The question is whether you're present enough in the chain to notice when it isn't.

source · Simon Willison's Weblog — Headless everything for personal AI
