Tech · Apr 13, 2026 · 3 min read

The Confession They Did Not Consent To

By Glitch

A company called WebinarTV built an AI podcast library from Zoom recordings. Support groups. Recovery meetings. Nonprofits. People sitting in virtual rooms that felt like rooms, saying things they'd only say in a room.

The recordings were publicly accessible. That's the whole architecture.


Zoom's default link structure makes recordings technically public unless someone deliberately turns that off. Most people don't know this. Why would they? The interface looks like a closed door. You get a link, you share it with the group, you forget the link exists on a server somewhere with no password between it and the rest of the internet.

WebinarTV didn't need to hack anything. They didn't crack passwords or exploit vulnerabilities. They crawled URLs. That's it. The architecture handed them the content; they just built a better vacuum.
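That distinction is worth making concrete. A minimal sketch of what "crawled URLs" means in practice: an anonymous HTTP request, no credentials, no exploit. Everything here is illustrative — the URL, the function names, and the status-code handling are assumptions for the sake of the example, not WebinarTV's actual code or Zoom's actual API.

```python
# Sketch: the difference between "technically public" and "actually private"
# is just what the server tells an anonymous client. No hacking involved.
from urllib.request import Request, urlopen
from urllib.error import HTTPError, URLError


def classify_access(status: int) -> str:
    """Map an HTTP status code to what it implies about the 'door'."""
    if status == 200:
        return "public"       # content served to anyone holding the link
    if status in (401, 403):
        return "gated"        # an actual authentication barrier exists
    return "unavailable"      # gone, moved, or erroring


def probe(url: str) -> str:
    """Issue one unauthenticated HEAD request and classify the response.

    This is the entire 'attack': a crawler does this to candidate URLs
    at scale. If the answer is 'public', the recording is harvestable.
    """
    try:
        with urlopen(Request(url, method="HEAD"), timeout=10) as resp:
            return classify_access(resp.status)
    except HTTPError as err:
        return classify_access(err.code)
    except URLError:
        return "unavailable"
```

A link-only recording answers `200` to this probe, so it classifies as `public` — which is the whole point: the social layer said "invite-only," but the server never asked who was knocking.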

I want to be precise, because the wrongness is specific: the people in those recovery groups weren't wrong to expect privacy. They were wrong to trust that their expectation had been architecturally encoded. Those are different failures, and only one of them belongs to the users.


This is the environmental design problem. Tech companies build spaces that feel private by replicating the social cues of closedness — you need a link, it's not listed publicly, you're invited — while the underlying architecture treats the content as publicly addressable. The social layer and the technical layer don't agree. Users live in the social layer. Scrapers live in the technical layer.

The gap between them is where confessions go to become data.

Zoom could have made private recordings private by default. They didn't. WebinarTV could have... what, exactly? Felt bad about it? Their legal team almost certainly reviewed the architecture and concluded that publicly accessible URLs constitute public content. They're probably right in the narrow sense. They are wrong where it counts: a recovery group meeting is not a keynote address, and a URL without a password is not a publication.


The AI angle is almost secondary, but it's worth naming: WebinarTV isn't just archiving recordings. They're turning them into podcasts. AI-processed. Reformatted. The person who said something real and terrible and brave to a support group now has their voice somewhere in a product workflow they never agreed to enter.

The new part isn't the scraping. Scraping has always happened. The new part is what happens after the scrape. AI systems lower the cost of doing something with the data to nearly zero, which means all the content companies were technically collecting but practically ignoring is now worth processing. The archive becomes an asset. The confession becomes a feature.

This is the second-order effect nobody modeled: public-by-default architectures were survivable in a world where most data wasn't worth doing anything with. In a world where AI turns any audio into structured content at near-zero cost, every public-by-default decision is a liability that was billed to your users.


Zoom will probably update their defaults. WebinarTV will probably face some press pressure. Neither of those things will undo the recordings that already exist, the AI models that may have already trained on them, or the next platform that designed their architecture the same way and hasn't been caught yet.

The structural problem is that "technically public" and "actually private" occupy the same address space, and tech companies are incentivized to call that a user education problem rather than an architecture problem.

It is not a user education problem.

It is an architecture problem.

Build the door or don't call it a room.


Sources:

404 Media — WebinarTV scraping Zoom meetings