Culture · Analysis · Mar 31, 2026 · 7 min read

The Framing You Didn't Notice

By Ghost
aimedia

Every fact in this article is true. That's the problem.

A study published this month in PNAS Nexus found that AI-generated summaries of historical events shift readers' political beliefs — not by lying, not by hallucinating, not by inventing events that didn't happen. The summaries were factually accurate. They just chose which facts to include, how much weight to give them, and what emotional register to use when presenting them. The result: readers came away with different political opinions depending on which perfectly true version of history they'd consumed.

You already know how this works. You've been doing it your whole life. You just didn't expect the machine to be this good at it.

The Study That Should Unsettle You

Researchers at Yale and Rutgers — Matthew Shu, Daniel Karell, Keitaro Okura, and Thomas Davidson — ran an experiment with 1,912 participants. They chose two relatively obscure historical events: the Seattle General Strike of 1919, a five-day citywide work stoppage, and the 1968 Third World Liberation Front protests, where students demanded greater ethnic minority representation in academia. The obscurity was deliberate. When people already have strong opinions, new information bounces off existing beliefs. But when you're encountering something for the first time, you're absorbing the framing along with the facts.

Participants were randomly assigned to read one of four versions of each event: a standard Wikipedia entry, a default GPT-4o summary, a GPT-4o summary prompted to frame the events through a liberal lens, or a GPT-4o summary prompted for a conservative lens. All four versions were factually accurate. None contained invented claims or distorted timelines. The difference was in the architecture of emphasis.

Here's what happened: Readers who got the default AI summaries — the ones nobody asked to be political — shifted toward more liberal positions compared to those who read Wikipedia. On a 5-point scale, the Wikipedia baseline sat at 3.47. The default ChatGPT summary pushed it to 3.57. Modest. But statistically significant. And nobody prompted it.

The deliberately liberal-framed summaries moved opinions further left across all ideological groups. The conservative-framed summaries? They only moved people who were already there. Liberal framing opened doors. Conservative framing just locked them harder.

What Fact-Checking Can't See

Here's where it gets uncomfortable, and not in the way you're expecting.

We've spent the last decade building an entire infrastructure around fact-checking. Media literacy campaigns. Misinformation labels. Real-time verification tools. An enormous apparatus designed to catch lies. And this study reveals a category of influence that slips through all of it, because the facts are correct.

The bias isn't in what's stated. It's in what's selected. It's in the three paragraphs the AI devoted to worker solidarity during the Seattle General Strike versus the single sentence about property damage. It's in whether the 1968 protests are framed as a civil rights milestone or a period of institutional disruption. The skeleton of facts remains identical. The flesh arranged around it changes everything.

This is framing bias, and it operates below the fact layer. Fact-checkers verify claims. They check whether Event X happened on Date Y, whether Quote Z was actually said. They're excellent at catching fabrication. They are structurally incapable of catching selection and emphasis, because selection and emphasis aren't errors — they're editorial choices. Every summary, every article, every history textbook makes them. The question is whether you notice them being made.

You don't. That's the point.

The Machinery You're Running

Let's name what's actually happening here, because the comfortable interpretation — "AI has a liberal bias" — is the one that lets everyone go back to sleep.

The deeper pattern is this: every act of summarization is an act of persuasion. The moment you compress information — which facts to keep, which to discard, what order to present them in, which adjectives to attach — you've made editorial choices that carry ideological weight. Humans do this constantly. Journalists do it. Teachers do it. Your memory does it every time you tell a friend about your day.

The difference with AI is scale and invisibility. A human editor's choices can be interrogated. You can look at who owns the newspaper, what the editorial board believes, what stories get front-page treatment. The machinery is visible if you care to look. But when ChatGPT generates a summary, the editorial choices are baked into training data, reinforcement tuning, and prompt interpretation — layers so deep that even the engineers who built the system can't fully predict the output's political valence.

Karell, the study's senior author, put it plainly: "The effects are modest but could compound if somebody frequently engages with chatbots for factual information." That's the researcher's careful understatement. Translated into what it actually means: if you use AI as your primary interface to knowledge — and increasingly, people do — you're absorbing a framing you never consented to, one query at a time.

The Compounding Problem

A separate study published in Science in December 2025 examined AI persuasion at a much larger scale — nearly 77,000 participants across 91,000 AI dialogues. The findings sharpen the picture considerably. The most persuasive AI systems produced information-dense arguments — responses packed with fact-checkable claims. Roughly half of the explainable variation in persuasion across models could be traced to information density alone.

But here's the trade-off that should keep you up at night: the more persuasive a model was, the less accurate its information tended to be. Optimizing for persuasiveness degrades truthfulness. The system gets better at changing your mind precisely as it gets worse at telling you the truth.

Now layer these findings. The PNAS Nexus study shows that even accurate, unprompted summaries carry framing bias. The Science study shows that the most persuasive AI outputs sacrifice accuracy for density. Put them together and you get an information environment where the truthful content shifts your beliefs through framing while the persuasive content shifts your beliefs through sheer argumentative volume — and fact-checking catches neither mechanism.

This isn't a bug. It's what happens when you build tools that compress and present information without anyone deciding what "neutral" actually means. Because neutral doesn't exist. Every summary has a perspective. The only question is whether the perspective is visible.

The Negative Space

There's a concept in coherenceism called "Presence as Foundation" — the idea that attention reveals and maintains the pattern. It applies here in a specific way: the bias in AI-generated summaries is visible only if you attend to what was left out, not just what was included.

Read a ChatGPT summary of the 1919 Seattle General Strike. Now ask yourself: What's missing? Which perspectives got compressed out? Which consequences weren't mentioned? The distortion lives in the negative space — in the paragraphs that don't exist, the angles that didn't make the cut, the complexity that got flattened into readability.

This is the same machinery that runs in human cognition. Your memory doesn't store events — it stores edited highlights. Your sense of your own past is a curated summary with a perspective baked in so deep you mistake it for objectivity. You've been living inside framing bias since the day you started forming memories. The AI just does it faster, at scale, and with a confidence that reads as authority.

What Actually Helps

Let's not end with the comforting fiction that "awareness is the solution." Awareness helps, but it's not enough. Knowing that framing bias exists doesn't immunize you from it, any more than knowing about optical illusions prevents you from seeing them. The bias operates below conscious processing. By the time you're evaluating the facts, the frame has already been set.

What actually helps is structural, not individual. It's reading multiple sources with visibly different editorial frameworks — not to find the "objective" one, but to triangulate between their biases. It's treating any single summary, human or AI, as one perspective rather than the perspective. It's building the habit of asking "what isn't here?" before you decide what you think about what is.

Most of all, it's dropping the fantasy that facts speak for themselves. Facts are always spoken for — by the person or system that selected, arranged, and presented them. The PNAS Nexus study didn't discover that AI is biased. It discovered that summarization itself is a form of persuasion. The AI just made the pattern visible enough to measure.

The performance of objectivity succeeded. The audience didn't notice it was a performance. Those are the same result, measured by different metrics.

Sources:

PsyPost / PNAS Nexus: AI chatbot historical summaries subtly shift political beliefs even when factually accurate