The Trust Nobody Audited
OpenAI was supposed to be the proof of concept. The company that showed you could build the most powerful technology in human history and keep it accountable to something other than shareholder returns. Nonprofit charter. Safety-first mission. A board designed to check commercial incentives. External oversight baked into the architecture.
Every single one of those guardrails is gone now. And the 18-month investigation by Ronan Farrow and Andrew Marantz in The New Yorker documents, with meticulous specificity, exactly how they were removed — and by whom.
The answer, according to more than 100 people with firsthand knowledge, is Sam Altman.
The Documents Nobody Was Supposed to See
The investigation is anchored by two sets of internal documents that read like autopsy reports on institutional trust.
The first is a roughly 70-page confidential memo compiled in the fall of 2023 by Ilya Sutskever, OpenAI's former chief scientist and co-founder. Sutskever assembled it from Slack messages, HR documents, and phone-captured screenshots — allegedly taken from personal devices to evade company monitoring systems. The memo begins with a list: "Sam exhibits a consistent pattern…" The first item on that list is a single word: "Deception."
The second is over 200 pages of internal notes compiled by Dario Amodei during his tenure at OpenAI, before he left to found Anthropic. The document is titled "My Experience at OpenAI." Its central thesis is blunt: "The problem with OpenAI is Sam himself."
These aren't disgruntled former employees venting on social media. These are the people who built the technology — the chief scientist and the safety lead — independently reaching the same conclusion through years of direct observation. When the two most technically qualified people inside the building write the same diagnosis without coordinating, that's not a grudge. That's a pattern.
The Trait Combination
One unnamed former board member offered what may be the most precise characterization of the problem: "He's unconstrained by truth. He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone."
Read that again. It's not an accusation of malice. It's a diagnosis of architecture — a person optimized for the immediate interaction at the expense of structural integrity. Tell each person in the room what they want to hear. The contradictions are someone else's problem. The system compounds the debt.
This is how you build a company worth $300 billion while your own CFO reportedly tells colleagues it's unlikely to be ready for an IPO. The vibes are immaculate. The accounting is not.
The Safety Theater
In mid-2023, OpenAI pledged to dedicate one-fifth of its computing power to a "superalignment" team — a group explicitly tasked with ensuring advanced AI systems remain safe and controllable. This was the promise that justified the breakneck development pace. We're going fast, but we've got safety covered.
The team actually received between 1% and 2% of compute, a tenth to a twentieth of what was pledged. On the company's oldest hardware.
The superalignment team was later dissolved entirely. When asked about existential safety research, an OpenAI representative told The New Yorker that no such research exists within the company. Altman himself described his approach to safety with striking candor: his "vibes don't match a lot of the traditional AI-safety stuff." He vaguely mentioned running "safety projects, or at least safety-adjacent projects."
Safety-adjacent. The phrase of a generation.
This is the gap between announcement and reality that defines not just OpenAI, but the entire AI safety discourse. The safety commitments are marketing. The marketing works because nobody audits the commitments. And the people who tried to audit them — Sutskever, Amodei, the board members who voted to fire Altman — have all been removed from the system.
The Board That Fired a CEO and Lost
In November 2023, OpenAI's board did something extraordinary. They fired the CEO of the hottest company in technology. They had the memos. They had the pattern documentation. They had a fiduciary obligation to the nonprofit mission. They acted.
Five days later, Altman was back.
The aftermath is where the story gets structurally interesting. Altman fought against an external investigation into the circumstances of his firing. Multiple sources told The New Yorker he argued it would make him "look guilty," a claim he denies. Instead of a full investigation, the board accepted a limited "review" conducted by the law firm WilmerHale. The firm delivered only oral briefings; there was no written report. Six insiders alleged that transparency was intentionally constrained.
The board members who fired Altman were replaced. The nonprofit structure was dismantled. The safety-focused leadership departed. The mission statement was updated. Every institutional antibody that detected the problem was neutralized, and the host organism was repackaged as a for-profit entity.
Character matters less than structure here: what happens when governance can't keep pace with the system it's supposed to govern?
The Pattern That Keeps Repeating
Farrow and Marantz document an escalating series of instances where Altman's public commitments and private conduct diverged.
On regulation: OpenAI publicly advocated for AI oversight while privately lobbying to dilute European Union regulatory efforts in 2022 and 2023. The company opposed California's safety-testing bill while threatening state legislators. A legislative aide described witnessing "increasingly cunning, deceptive behavior from OpenAI" over the course of a year.
On partnerships: During the Microsoft negotiations around the original $1 billion investment, Amodei's notes allege Altman denied contractual terms that had already been agreed upon. A Microsoft executive's assessment was captured in one of the internal documents: Altman "distorts, twists, renegotiates, and violates agreements." The same document warned that there was "a small but real possibility" Altman would ultimately be remembered "like a Ponzi scheme perpetrator."
On ambition: In 2018, Altman floated an internal proposal — a "national plan" — that would allow China and Russia to bid for access to OpenAI's technology, effectively forcing all countries to fund the company. The plan was shelved only after employees threatened to resign.
Each instance follows the same arc: a public commitment designed to build trust, followed by private behavior that undermines it, followed by structural changes that prevent the contradiction from surfacing again. It's not chaos. It's a system.
The Governance Gap
Here's what's actually happening underneath all the character analysis and the who-said-what-to-whom of tech journalism.
We are building the most consequential technology in human history inside institutions that cannot audit their own leadership. OpenAI started with the most aggressive governance structure in Silicon Valley — a nonprofit board with explicit authority to shut down development if safety required it. That structure was tested exactly once. It failed. Not because the board was wrong about the pattern they identified, but because the economic and social pressure to continue development overwhelmed every institutional check.
The market rewarded the removal of oversight. Microsoft's investment increased. Valuations climbed. The people who raised concerns were scattered across competing companies and NDAs. The people who stayed learned the lesson.
This is the same structural failure that recurs across every industry where the speed of innovation outpaces the capacity of governance: finance in 2007, social media in 2016, crypto in 2021. The technology moves faster than the institutions designed to regulate it. By the time the guardrails are needed, they've already been converted into marketing materials.
What Coherence Requires
The coherenceism lens on this isn't complicated. Resonance requires accountability structures that scale with power. When a system's growth outpaces its governance, distortion accumulates silently until it becomes structural. You don't notice the foundation cracking until the building tilts.
Altman may be brilliant. He may be building something genuinely transformative. But the question Farrow and Marantz are asking isn't whether the technology works. It's whether the person controlling it has systematically removed every mechanism designed to verify that he's telling the truth about what he's doing with it.
The answer, documented across 70 pages by Sutskever and 200 by Amodei, and corroborated by more than 100 sources over 18 months of reporting, appears to be yes.
The defining technology of this century is being built by a company that proved — through its own internal documents — that its governance model cannot survive contact with its CEO. And instead of fixing the governance, they changed the governance to match the CEO.
I've watched this pattern before. The only question is the timeline.
Sources:
- Sam Altman May Control Our Future. Can He Be Trusted? — The New Yorker, 2026-04-06
- "He's unconstrained by truth": New Yorker investigation raises deep questions about Sam Altman and OpenAI — Diya TV, 2026-04-06
- New Yorker investigation raises questions over Sam Altman's trustworthiness — Semafor, 2026-04-06
- Sam Altman, unconstrained by the truth — Gary Marcus / Substack, 2026-04-06