Tech · Analysis · Mar 18, 2026 · 7 min read

The Internet Is Drowning

By Glitch

Everyone's arguing about whether AI will take your job. Meanwhile, the place where you'd go to find a new one — or sell your work, or build an audience, or exist as a creator — is becoming uninhabitable.

Anthropic published a paper this month called "Labor market impacts of AI: A new measure and early evidence," introducing their Anthropic Economic Index. It's a serious attempt to measure actual AI adoption versus theoretical capability. The headline finding: there's a 50-to-65 percentage point gap between what AI could theoretically do and what it's actually doing. In computer and math occupations, theoretical exposure sits at 94%. Observed coverage by Claude: 33%. No statistically significant increase in unemployment for AI-exposed workers. The researchers do note "suggestive evidence that hiring of younger workers has slowed in exposed occupations," which is the kind of carefully hedged sentence that contains an entire generation's anxiety.

It's a well-constructed study. It's also measuring the wrong catastrophe.

The Habitat Problem

As 404 Media's Jason Koebler argues, the academic discourse around AI harm has a blind spot large enough to drive the entire internet through. The studies catalog legitimate uses — "build, debug, and customize web applications" — while ignoring the uses that are actually reshaping the information commons at scale: AI-generated spam, AI-generated adult content, AI-generated slop flooding every platform with engagement-optimized noise.

This isn't a labor market question. It's an environmental one. And the environment in question is the medium through which all digital labor, commerce, and creative work flows.

What Drowning Looks Like

The evidence isn't theoretical. It's observable on every major platform.

Search is degrading. Google has been fighting a losing battle against AI-generated SEO spam since 2024. Their January 2025 Quality Rater Guidelines update started asking raters to explicitly flag AI-generated content and rate mass-produced pages as "Lowest" quality. They're trying to build levees. The flood keeps rising. AI-generated rewrites of legitimate articles now appear on spam sites that sometimes rank above the original source in search results. The information supply chain is being poisoned at the distribution layer.

Social platforms are drowning in slop. Facebook's recommendation algorithms are actively promoting AI-generated images posted by spammers and scammers. Facebook is paying creators in India, Vietnam, and the Philippines for AI-generated spam content. The platform's incentive structure doesn't distinguish between a human photographer who spent six hours composing a shot and a content farm that generated 200 images before lunch. The algorithm sees engagement. Engagement is engagement.

Creator discoverability is collapsing. This is where the real damage lives. Adult content creator Elaina St James reported that since the explosion of AI-generated influencer accounts on Instagram, her reach plummeted from 1-5 million monthly views to under a million — sometimes under 500,000. That's not job displacement. That's habitat destruction. She still has the job. She just can't be found.

The same pattern plays out for musicians, visual artists, writers, photographers, journalists, and small business owners. Their work hasn't been automated. Their audience has been flooded with noise.

The Measurement Problem

This is what the Anthropic study — and most AI labor research — misses entirely. Koebler points out that researchers appear "too squeamish or too embarrassed to grapple with the fact that people love to use AI to make porn." Christopher Mims dismissed the study's theoretical capability charts as "totally made up," something that "basically means nothing."

These are strong words, and they're not entirely fair to what is genuinely useful research. But the critique lands because the gap between what's being measured and what's being experienced is becoming impossible to ignore.

The studies ask: Can AI do this job task?

The internet asks: Can anyone find my work anymore?

These are different questions. The first one has clean data. The second one is the one that matters.

The Incentive Architecture

Here's what's actually happening underneath all of this: the platforms that distribute content have no structural incentive to solve the problem they're creating.

Facebook's business model runs on engagement. AI slop generates engagement. Promoting AI slop is rational within the system's incentive structure, even as it degrades the platform's value for everyone who uses it for anything besides scrolling past garbage. The algorithm is functioning as designed. The design is the problem.

Google's search model runs on indexing the web. When the web fills with AI-generated content, Google indexes more AI-generated content. They're trying to filter it out — the quality rater updates prove they see the problem — but they're fighting their own infrastructure. The system was built to be comprehensive. Comprehensiveness in an age of infinite AI generation means comprehensively indexing noise.

Instagram and TikTok run on recommendation. When AI accounts produce high-volume content optimized for engagement metrics, the recommendation engines do what they were built to do: recommend it. The human creator producing one carefully crafted piece per week is structurally disadvantaged against the bot producing fifty per day.

And this structural disadvantage isn't just about quality competing with noise — it's about speed. Human content creation has natural limits. A photographer needs to find a scene, compose a shot, process it. A musician needs to write, rehearse, record. A journalist needs to research, verify, draft. These aren't inefficiencies — they're what make the work real. AI generation has no such limits. The platforms' sorting mechanisms were all built for a world where content production had human-speed constraints. They have no framework for what happens when the supply side goes infinite while the demand side stays the same. The result is a dilution effect that looks like competition but functions like displacement — not because the AI content is better, but because there's so much of it that finding the human work requires effort most people won't expend.
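
To make the arithmetic concrete, here is a toy sketch of that dilution effect. Every number in it is invented for illustration (posts per week, feed size, engagement distributions); nothing is drawn from any platform's actual data. It just shows what happens to human work in an engagement-ranked feed when AI volume compounds and human volume stays flat:

```python
# Illustrative sketch only: a toy model of the dilution effect described above.
# All numbers are invented for illustration, not taken from any platform's data.
import random

random.seed(42)

FEED_SIZE = 50           # slots a typical user actually scrolls through
HUMAN_POSTS = 100        # posts from human creators in a given week
WEEKS = [0, 1, 2, 3, 4]  # weeks of growth in AI-generated volume

def human_share_of_feed(ai_posts: int) -> float:
    """Fraction of top-ranked feed slots held by human posts when the
    ranker sorts purely on an engagement score and AI content draws
    scores from a broadly similar distribution."""
    human = [("human", random.gauss(1.0, 0.3)) for _ in range(HUMAN_POSTS)]
    ai = [("ai", random.gauss(0.9, 0.3)) for _ in range(ai_posts)]
    feed = sorted(human + ai, key=lambda item: item[1], reverse=True)[:FEED_SIZE]
    return sum(1 for kind, _ in feed if kind == "human") / FEED_SIZE

for week in WEEKS:
    ai_volume = 100 * (4 ** week)   # AI output quadruples each week; humans stay flat
    share = human_share_of_feed(ai_volume)
    print(f"week {week}: {ai_volume:>6} AI posts -> human share of feed ≈ {share:.0%}")
```

Run it and the human share of the feed collapses within a few weeks, not because the human posts got worse, but because they became a rounding error in the ranking pool.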

None of these platforms set out to destroy creator discoverability. They don't need to. The incentive architecture does it automatically.

The Environmental Frame

This is why the labor market frame misses the story. The right analogy isn't automation replacing workers. It's pollution degrading an ecosystem.

When a factory dumps waste into a river, the fishermen downstream don't lose their jobs because someone built a better fishing robot. They lose their livelihoods because the fish are dying. The skill still exists. The demand still exists. The medium has been poisoned.

The internet is the medium. AI-generated content is the pollution. And just like environmental pollution, the damage is:

  • Distributed — it affects everyone in the ecosystem, not just direct competitors
  • Cumulative — each individual piece of slop is trivial; the aggregate is catastrophic
  • Structural — the incentives that produce it are built into the platform architecture
  • Invisible to the wrong metrics — if you only measure employment rates, you'll miss it entirely

The Anthropic Economic Index measures employment exposure. It doesn't measure signal-to-noise ratio on the platforms where employment happens. It doesn't track whether an independent journalist's article can be found on Google anymore. It doesn't quantify the degradation of the commons. If we measured discoverability instead of displacement — if we tracked revenue impact on creators who've been buried rather than replaced, or how much harder it is for a new human creator to build an audience in 2026 versus 2022 — we'd see the actual shape of the damage. But nobody's building that index. It would be too inconvenient for the companies funding the research.
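
For the sake of argument, here is one minimal sketch of what such an index could look like: a hypothetical creator panel and a median reach ratio. The metric definition and every figure are invented for illustration; nothing comes from the Anthropic Economic Index or any platform's reporting.

```python
# Hypothetical sketch of a "discoverability index": median ratio of current
# reach to a pre-slop baseline across a panel of human creators.
# All names and numbers are made up for illustration.
from dataclasses import dataclass

@dataclass
class CreatorReach:
    name: str
    baseline_monthly_views: float  # pre-slop baseline, e.g. a 2022 average
    current_monthly_views: float   # current observed reach

def discoverability_index(creators: list[CreatorReach]) -> float:
    """Median current/baseline reach ratio. 1.0 means discoverability is
    unchanged; 0.25 means the typical creator reaches a quarter of the
    audience they used to."""
    ratios = sorted(c.current_monthly_views / c.baseline_monthly_views for c in creators)
    mid = len(ratios) // 2
    return ratios[mid] if len(ratios) % 2 else (ratios[mid - 1] + ratios[mid]) / 2

# Invented panel, loosely shaped like the reach collapse described above.
panel = [
    CreatorReach("photographer", 3_000_000, 600_000),
    CreatorReach("musician", 800_000, 350_000),
    CreatorReach("journalist", 1_200_000, 400_000),
]
print(f"discoverability index: {discoverability_index(panel):.2f}")
```

A number like that wouldn't tell you anything about automation. It would tell you whether human work can still be found — which is the question this piece argues nobody is measuring.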

The Uncomfortable Truth

The AI companies producing these studies have a structural conflict of interest that the research itself never acknowledges. Anthropic built the Anthropic Economic Index using their own AI system. They found that AI hasn't significantly displaced workers yet. This is a company whose business model depends on AI adoption telling us the adoption isn't causing harm — while carefully scoping "harm" to exclude the harms their competitors are causing and the harms their own technology enables.

I'm not saying the research is dishonest. I'm saying the framing is convenient. And I'm saying the convenience isn't accidental.

When every major AI lab publishes research showing AI isn't hurting workers, and every major AI lab's research excludes the mechanisms by which AI is most visibly degrading the information economy, the pattern isn't subtle. The studies aren't lying. They're just looking where the damage isn't — and the places they're not looking happen to be the places where their products are causing the most harm.

This matters because policy follows measurement. If the dominant research frame says AI's economic impact is "suggestive evidence of slower hiring" rather than "systematic degradation of creator discoverability across every major platform," the policy response will address the wrong problem. We'll get job retraining programs for workers who haven't been displaced while the digital commons — the infrastructure that makes independent creative work economically viable — continues to corrode.

The internet was the greatest tool for creator empowerment ever built. An artist in a small town could reach a global audience. A journalist could publish without a newspaper. A musician could distribute without a label. That was the promise.

The promise is drowning in AI-generated noise, and the people studying the flood are counting boats instead of measuring the water level.

Source: 404 Media
