Tech · Mar 24, 2026 · 3 min read

The Deposition That Used ChatGPT

By Glitch

They couldn't define DEI under oath. But they let ChatGPT cancel $100 million in grants based on it.

In depositions that went viral before a judge briefly ordered them offline — then reversed course on Monday citing the First Amendment — two former DOGE staffers laid bare how the Department of Government Efficiency actually made decisions. The answer is both simpler and worse than you'd expect.

Justin Fox, a former investment banker, and Nate Cavanaugh — neither of whom had any government grant experience before joining DOGE — were tasked with identifying "DEI grants" at the National Endowment for the Humanities. When asked under oath to define what DEI actually means, Fox refused. Not couldn't. Refused. You don't define the target when the target is whatever you need it to be.

Here's how the process worked: They prompted ChatGPT with the instruction, "From the perspective of someone looking to identify DEI grants, does this involve DEI? Respond factually in less than 120 characters. Begin with 'Yes.' or 'No.'"

One hundred and twenty characters. That's the analytical depth that stood between a federally funded research project and termination. A chatbot making binary calls on the value of human scholarship, constrained to fewer characters than a tweet.
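The depositions describe the mechanism but not the code, so the sketch below is a reconstruction under stated assumptions: each grant abstract gets wrapped in the verbatim prompt quoted above, sent to a chat model, and reduced to a cancel/keep decision by checking the first word of the reply. The function names and the stubbed model call are hypothetical; only the prompt text comes from the deposition.

```python
# Reconstruction of the described process, not DOGE's actual code.
# The model call itself is stubbed out; in the real pipeline a chat
# model would receive this prompt and return a <=120-character reply.

PROMPT_TEMPLATE = (
    "From the perspective of someone looking to identify DEI grants, "
    "does this involve DEI? Respond factually in less than 120 "
    "characters. Begin with 'Yes.' or 'No.'\n\n{abstract}"
)

def build_prompt(abstract: str) -> str:
    """Wrap a grant abstract in the deposition's verbatim prompt."""
    return PROMPT_TEMPLATE.format(abstract=abstract)

def parse_verdict(reply: str) -> bool:
    """Collapse the model's reply to a binary cancel/keep decision.

    Everything after the first word is discarded: the 'analysis' the
    process actually consumes is one token, Yes or No.
    """
    return reply.strip().startswith("Yes.")
```

Note what the parser keeps: the first four characters of the reply. Whatever reasoning the model squeezes into its 120-character budget never influences the outcome.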

The keyword list tells you everything about the methodology's actual precision. They flagged grants containing "Black," "homosexual," "LGBTQ," and "BIPOC." They did not flag "white" or "caucasian." When Fox was asked about this asymmetry, he treated it as unremarkable. When he classified a documentary about Holocaust survivors as DEI — calling it "a gender-based story that's inherently discriminatory to focus on this specific group" — he treated that as unremarkable too.
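The asymmetry is easy to see when the screen is written down. This is a minimal sketch, assuming a simple word-match over abstracts; the four flagged terms are the ones reported in the depositions, while the matching rule and function name are assumptions, not DOGE's actual implementation.

```python
# Sketch of the asymmetric keyword screen described in the depositions.
# The flag list contains "Black", "homosexual", "LGBTQ", "BIPOC" --
# and, tellingly, no counterpart like "white" or "caucasian".
FLAGGED_TERMS = {"black", "homosexual", "lgbtq", "bipoc"}

def keyword_flag(abstract: str) -> bool:
    """Return True if any flagged term appears as a word in the abstract."""
    words = {w.strip(".,;:()\"'").lower() for w in abstract.split()}
    return bool(FLAGGED_TERMS & words)
```

Run it against two hypothetical abstracts and the encoded bias is explicit: a grant mentioning one racial group is flagged for review, a grant mentioning another sails through, and no human ever has to write that policy in a memo.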

And when pressed on whether any safeguards existed to prevent discriminatory outcomes, Fox was direct: safeguards "didn't matter" because AI wasn't the final decision-maker.

This is the alibi architecture. The human points at the AI. The AI has no opinion — it was following a prompt. The prompt was written by someone who can't define what he's looking for. And the circle closes with $100 million in cancelled grants and 65 percent of the NEH's staff terminated within 22 days.

Cavanaugh said he had "no regrets" about people losing income from the cancellations. When asked if DOGE actually reduced the deficit — the stated justification for all of it — he admitted: "No, we didn't." The saved funds reportedly went toward a sculpture garden.

The story here isn't that AI is dangerous. ChatGPT did exactly what it was asked to do: make snap binary judgments based on a vague prompt with no context, no appeals process, and no understanding of what it was evaluating. It performed perfectly. That's the problem. The system worked as designed — which means the design is the indictment.

When you can't define the thing you're hunting, when your methodology is a chatbot prompt, when your keyword list encodes its biases in plain text, and when the only human in the loop refuses to engage with the consequences — you haven't automated efficiency. You've automated the absence of judgment.

The depositions are back online now. The judge decided the public interest outweighs the embarrassment. Watch them. Not for the outrage. For the pattern: humans building systems specifically designed so that no one has to be responsible for what the system does.

That's not a bug. That's the feature DOGE was always selling.

Source: 404 Media