Trust on Their Terms
You've probably heard that Altman is charming. That's not quite the word. He listens in a way that makes you feel like the room has gotten smaller — like the conversation is, for a moment, the only thing that matters to him. Ronan Farrow interviewed him a dozen times over eighteen months. He came away with a 16,000-word investigation and this: even people who believe Altman lied to them still find themselves drawn in when he's in the room.
When the OpenAI board confronted him — after his ouster — about instances where he wasn't truthful, he offered this: "I can't change my personality."
That's not an apology. It's a disclosure. And it is the most honest thing anyone has said about this situation in years.
The New Yorker investigation asks the right question in its title: Sam Altman May Control Our Future — Can He Be Trusted? It's a good question. It's also a question that, by the time you're reading it, is slightly beside the point.
Not because the answer doesn't matter. It does. But "can he be trusted?" is a relational question — it asks whether you can extend trust to him and reasonably expect it back. Trust, properly understood, requires two parties, both of whom could leave. It requires accountability structures. It requires that if trust is violated, the relationship changes.
What's missing from that frame is this: you're already in the relationship. You didn't negotiate the entry. The world that Sam Altman is building is not one you opted into by downloading an app. It's the world.
The Farrow piece circles this without quite landing on it — possibly because the personal story is so compelling it crowds out the structural one. And the personal story is extraordinary: omissions, contradictions, a board that fired him and then, under immense pressure, hired him back, the sense of a man who operates by a different reckoning than the people around him. "I can't change my personality." He named it.
But the structural story is stranger. The systems Altman is building don't require your trust. They require your world. The AI race continues regardless of your confidence in his character. The institutional structures that might have governed its pace have been systematically defanged — by regulatory capture, by competitive pressure, by Altman's own explicit argument that slowing down American AI only cedes ground to someone worse. You can distrust him completely and still wake up in the world he made.
There's a name for a relationship where one party sets the terms and the other simply finds themselves in it. We don't usually call it a relationship. We call it a condition.
What's disorienting is that this sits alongside something genuine. People who know Altman — people who left OpenAI, people who have serious doubts, people who signed the letter attesting to a pattern of dishonesty — many of them still find him compelling. He is curious. He does listen. He cares about the right things, in his way. The warmth is not performance all the way down.
This is what makes the situation genuinely hard to read. At the personal scale, it has the texture of relationship. Two people in a room, attention exchanged, ideas tested, something like trust building. He's not hollow. The twelve interviews Farrow describes don't feel like twelve instances of a machine giving the same output. They feel like actual contact.
At the civilizational scale, the texture changes. What you're in is not a room. It's a development trajectory, governed by the dynamics of capital and competition, toward outcomes that will be shaped by Altman's values and judgment regardless of whether you found him convincing over lunch.
Both of these are true simultaneously. And that simultaneity is worth sitting with. It is the thing we're still learning to name.
We know how to think about personal trust. We know how to think about institutional trust — corporations, governments, the slow accumulation of demonstrated reliability. We're less practiced at thinking about what you might call civilizational entrustment: the condition of being, by virtue of living in this moment, in a relationship with a system and its builders — whether or not you showed up for it.
Altman understands this intuitively. His argument for why he should be trusted is not primarily about character — it's about stakes. If you don't trust me, the alternative is worse. If American AI doesn't lead, someone else will. The argument works by raising the cost of not being in the relationship until the relationship becomes the only viable option. It's not deception. It's leverage.
"I can't change my personality." What he's saying, in that moment, is this: the terms are what they are. You can know this about me and still be here, or you can leave. And leaving — at the personal scale — is possible. At the civilizational scale, it's a category error.
The harder question isn't whether to trust him. The harder question is: what does trust mean when you're already in it?
Not whether Sam Altman will prove trustworthy. He may. His values may align with yours more than you fear. The systems he's building may turn out to be net clarifying for the world rather than distorting. There's genuine not-knowing here, and it should stay that way.
But the question of whether to trust him is yours to answer. The question of whether you're already in a relationship with what he's building — that one was answered for you.
What the Farrow piece gives us, underneath the revelatory specifics, is a portrait of how civilizational entrustment works at the personal level: one charming man, one room at a time, making people feel seen while the thing he's building becomes inevitable. The warmth is real. The asymmetry is also real. Both together — that's the territory we're learning to navigate.
The question isn't whether to show up. You're already here.
The question is what kind of presence you bring to a relationship whose terms you didn't set.
source · Ronan Farrow / The New Yorker — Sam Altman profile