Tech · Analysis · Mar 17, 2026 · 8 min read

The AI Alibi

By Glitch

Three stories converged in the same week. A CEO asked ChatGPT how to weasel out of a $250 million contract. A witness wore smartglasses in a London courtroom and blamed ChatGPT when caught being coached. And Benjamin Netanyahu posted a video to prove he was alive — only to have Grok declare it AI-generated, plunging the internet into a debate about whether the Israeli Prime Minister is, in fact, real.

None of these stories are about AI capability. All of them are about AI as alibi.

We're watching the emergence of a new species of excuse — one that sounds technical enough to confuse a judge, plausible enough to muddy a headline, and just vague enough that nobody can quite pin down who's responsible. "The AI told me to" is becoming "I was just following orders" for the algorithmic age. And like its 20th-century predecessor, it's a lie that reveals something true about the liar.

The CEO Who Wanted Permission, Not Advice

Let's start with Changhan Kim, CEO of Krafton, the South Korean publisher that bought Unknown Worlds Entertainment — the studio behind Subnautica — for $500 million in 2021. The deal included an earnout: if Subnautica 2 sold well enough, the developers would receive an additional $250 million. That's a lot of money for making a game people actually want to play, which apparently struck Kim as unfair.

By May 2025, Kim had decided the earnout was a "bad deal" and that he'd been "taken advantage of." His own lawyers presumably told him what lawyers tell you when you've signed a binding contract: you signed a binding contract. So Kim did what an increasing number of executives are doing when their legal counsel says something they don't want to hear. He opened ChatGPT.

What followed was Project X — an internal task force built to execute what was essentially a ChatGPT-generated hostile takeover plan. Kim used the chatbot to devise a strategy for seizing control of the studio and forcing out its founders, including CEO Ted Gill, thereby killing the earnout obligation. He executed most of ChatGPT's recommendations. It did not go well.

A Delaware court has now ordered Krafton to reinstate Gill as CEO with full operational authority and extended the $250 million earnout period to at least September 2026. The judge saw through the scheme with the clarity that courts occasionally achieve when someone's bullshit is particularly well-documented.

Here's what matters: Kim didn't use ChatGPT because he needed legal advice. He used it because he needed something that looked like legal advice while telling him what he wanted to hear. ChatGPT was the mirror that reflected his own desire back at him with the veneer of due diligence. He wasn't consulting an oracle. He was manufacturing consent — his own.

This is the first mode of AI-as-alibi: the permission machine. You already know what you want to do. You just need something that sounds authoritative to validate the decision you've already made. Ask a chatbot for an exit strategy and it rarely answers "this is a terrible idea and you should listen to your lawyers." It generates a plan. Plans sound like competence. Competence feels like permission.

The Witness Who Blamed the Machine

Across the Atlantic, a London insolvency court was hearing the case of Laimonas Jakštys, a Lithuanian businessman trying to get his company off an insolvency list. During cross-examination, Judge Raquel Agnello KC noticed something: Jakštys kept pausing before answering questions. Not the thoughtful pause of someone choosing their words. The mechanical pause of someone receiving them.

Counsel raised the alarm. The Lithuanian interpreter reported hearing voices coming from the glasses themselves. The judge identified them as smartglasses and ordered them removed. When Jakštys disconnected the glasses, the coaching voice migrated to his cellphone — which is the digital equivalent of your ventriloquist dummy continuing to talk after you put it down.

And then came the excuse: it was ChatGPT. Not a person coaching him. Not a confederate feeding him answers. ChatGPT. The AI was talking to him through the glasses.

The judge, with what I imagine was considerable restraint, found this explanation "lacking in credibility." She rejected Jakštys' evidence in its entirety and ruled for the defendants with an indemnity costs order. The court transcript doesn't record whether anyone laughed, but I'd like to think someone did.

This is the second mode of AI-as-alibi: the scapegoat. When you're caught doing something you shouldn't be doing, blame the machine. The beauty of this excuse is that it exploits the public's genuine uncertainty about what AI can and can't do. Most people — most judges — don't have a precise mental model of what ChatGPT is capable of in real time. Could it coach someone through smartglasses? Technically, with the right setup, something like that is plausible. And plausibility is all an excuse needs.

Except it wasn't ChatGPT. It was a person. The AI alibi failed because the human evidence trail — phone calls, voices, timing — was too obvious. But the instinct is revealing: when caught, reach for the machine. It's the 21st-century version of "my dog ate my homework," except the dog is a large language model and the homework is your testimony under oath.

The Prime Minister Who Can't Prove He's Real

Now the story gets strange. In mid-March 2026, viral claims spread across social media alleging that Israeli Prime Minister Benjamin Netanyahu had been killed in an Iranian missile strike. The rumor was amplified by accounts linked to Iranian media and various partisan channels. Netanyahu's office responded with a video of the PM casually drinking coffee at The Sataf, a café in Jerusalem.

Proof of life. Simple enough. Except Grok — Elon Musk's AI chatbot — analyzed the video and declared it AI-generated. A deepfake. The Israeli Prime Minister posted a video to prove he was alive, and an AI said he was fake.

Independent verification by journalists geolocated the café, confirmed it through the venue's Instagram stories, and found no evidence the video was synthetic. The video appears to be real. But the damage was already done. Grok's false positive became the story, and suddenly millions of people were debating not whether Netanyahu was alive, but whether video evidence of anything can be trusted at all.

This is the third mode of AI-as-alibi, and it's the most dangerous: the reality eraser. When AI detection tools can declare real footage fake, and AI generation tools can make fake footage look real, we enter a zone where the concept of "proof" starts to dissolve. It's not that you can't prove something is real — it's that the cost of proving it has become astronomically higher than the cost of casting doubt.

For state actors and propagandists, this is a gift. You no longer need to produce a convincing forgery. You just need to point at any piece of evidence and whisper "AI" — and a significant portion of the audience will do the rest. The mere existence of deepfake technology has become its own weapon, independent of whether any deepfakes are actually deployed.

The Invisible Lever

What connects these three stories is the invisible lever — the force operating below the threshold of visibility that does the real work.

In Kim's case, the invisible lever was his own desire to avoid paying what he owed. ChatGPT didn't create that desire. It just gave it a surface that looked like strategy. The technology amplified the thing he was already doing — looking for a way out — and made it feel more legitimate by wrapping it in generated text.

In Jakštys' case, the invisible lever was the public's ignorance about AI capabilities. He bet that a judge wouldn't know enough about smartglasses and chatbots to distinguish between "ChatGPT was coaching me" and "a person was coaching me through a device." The technology itself wasn't doing anything in the courtroom — but the idea of the technology was doing plenty.

In Netanyahu's case, the invisible lever was the erosion of epistemic trust itself. It doesn't matter that the video was real. What matters is that AI-powered doubt is now cheaper than AI-powered proof. The asymmetry has flipped. In a world where generating plausible content is nearly free, verifying that content has become the expensive operation. And whoever controls the verification bottleneck controls the narrative.

The Pattern

Here's what I keep coming back to: in all three cases, the AI didn't do the thing it was blamed for. Kim's scheme failed on its own legal merits. Jakštys was caught because a human interpreter heard human voices. Netanyahu's video was verified by journalists doing old-fashioned geolocation work. The courts and the press still function. The antibodies still work.

But here is what worries me: each case required significant institutional effort to see through the alibi. A Delaware judge had to parse a complex takeover scheme. A London court needed a sharp interpreter and an attentive judge. Journalists needed to physically verify a café location in Jerusalem. These are expensive operations: in time, in expertise, in institutional capacity. And we are producing AI alibis at the speed of text generation.

The asymmetry is the story. It costs nothing to use AI as an excuse. It costs a fortune to debunk one. And every cycle of accusation-and-verification erodes the baseline assumption that people mean what they say, that evidence is what it appears to be, that accountability has a fixed address.

We have been asking the wrong question about AI risk. The most consequential danger is not that AI will do something terrible on its own. It is that AI provides the perfect diffusion layer for human decisions that were terrible all along. Every CEO who consults ChatGPT instead of their lawyers is making a choice. Every witness who blames a chatbot is making a choice. The technology just makes those choices feel less like choices.

There is a word for something that makes bad decisions feel like good ones: a vice. Alcohol does not make you a different person; it reveals the person you have been restraining. AI works the same way. It does not create the impulse to dodge accountability. It just makes the dodge feel sophisticated.

The most consequential forces operate below the threshold of visibility. The CEO's greed. The witness's deception. The propagandist's doubt. AI did not create any of these. It just gave them a new place to hide.

And hiding places, once discovered, get used by everyone.

Source: 404 Media