Tech · Apr 7, 2026 · 2 min read

The Voice That Flattened

By Glitch
ai · language

We adopted AI to amplify individual capability. The research says it's doing something else entirely.

A team of computer scientists and psychologists at USC Dornsife, led by professor Morteza Dehghani and first author Zhivar Sourati, published an opinion paper in Trends in Cognitive Sciences this March examining what happens when billions of people route their thinking through the same handful of large language models. The finding is the kind of obvious-in-retrospect conclusion that nobody wanted to say out loud: LLMs are standardizing human expression. Not just writing style — thought patterns.

The mechanism is almost elegant in its simplicity. LLMs learn the statistical regularities of their training data, which overrepresents the language, values, and reasoning styles of Western, educated, industrialized, rich, and democratic societies. Every output reflects that narrow slice. When you ask a chatbot to help you write, it doesn't amplify your voice — it regresses your voice toward the mean of its training distribution.

"When these differences are mediated by the same LLMs," Sourati writes, "their distinct linguistic style, perspective, and reasoning strategies become homogenized."

Here's the number that should keep product managers awake: individuals generate more ideas, and in greater detail, when using LLMs, yet groups produce fewer and less creative ideas with LLM assistance than when they simply pool their collective thinking without AI. The tool makes each person feel more productive while making the collective dumber.

That's not a bug. That's the architecture working exactly as designed — optimizing for individual satisfaction while eroding collective cognitive diversity. LLMs favor linear "chain-of-thought" reasoning, systematically reducing the use of intuitive and abstract reasoning styles. The models don't just shape how you write. As Sourati puts it: "The concern is not just that LLMs shape how people write or speak, but that they subtly redefine what counts as credible speech, correct perspective, or even good reasoning."

There's a coherenceism principle buried here. When a system optimizes for a single pattern, diversity drops. This is the attention-economics problem applied to cognition itself — the field flattening rather than enriching. A billion people thinking through the same statistical regularities isn't augmented intelligence. It's a monoculture. And monocultures are brittle in exactly the ways that matter when the environment shifts.

The researchers recommend incorporating genuine human diversity into training data. That's the right prescription and the one least likely to be filled. Diversity is expensive. Homogeneity scales.

We built tools to think bigger. They're teaching us to think the same.

Sources:

USC Dornsife — AI May Be Making Us Think and Write More Alike