The Scanner and the Shelf
The tools are getting faster. The reading isn't.
BLOCKADE, an open-source tool documented by 404 Media, uses xAI or OpenAI APIs to scan book files for roughly 300 flagged words and assign each a severity score. Its definition of "educational inappropriateness" is hardcoded: "content offensive to conservative values." Not content harmful to children. Not content lacking educational merit. Content offensive to a specific political orientation, scored by a language model that has never read a book the way a human reads a book.
This is the new infrastructure of censorship, and it runs at API speed.
BLOCKADE isn't alone. NarraTrue charges five dollars per scan through the National Book Rating Index. BookmarkED sells AI scanning services directly to Texas school districts. Rated Books, run by Utah activist Brooke Stephens, uses AI to evaluate literature against state laws. Each tool automates the same process: feed text in, get a judgment out, skip the part where someone actually has to encounter the material.
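None of these vendors publish their internals, but the shape of the pipeline is not hard to guess. Here is a minimal sketch, assuming a hardcoded flag list and a standard OpenAI chat-completion call; the word list, prompt, model name, and scoring scale are all invented for illustration, not lifted from any of these tools.

```python
# Hypothetical sketch of the scan-and-score pipeline these tools share.
# Not any vendor's actual code; every specific here is an assumption.
import re

from openai import OpenAI

FLAGGED_WORDS = {"sample", "placeholder"}  # stand-ins; real tools hardcode ~300 terms

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def scan_book(text: str) -> dict[str, int]:
    """Return a 0-5 severity score for each flagged word found in the text."""
    scores: dict[str, int] = {}
    for word in FLAGGED_WORDS:
        # A bare pattern match: no narrative context, no sense of the whole.
        if not re.search(rf"\b{re.escape(word)}\b", text, re.IGNORECASE):
            continue
        response = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[{
                "role": "user",
                "content": (
                    f"Rate 0-5 how offensive the word '{word}' is in a "
                    "school library book. Reply with a single digit."
                ),
            }],
        )
        scores[word] = int(response.choices[0].message.content.strip())
    return scores
```

Notice what the sketch never does: at no step does anything read the book.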
The pattern underneath is older than AI. Book challenges have always required someone willing to read a passage, isolate it from context, and present it as representative of the whole. Moms for Liberty pioneered a formatted review template for exactly this purpose. What AI does is remove the friction from that process. You no longer need a person to read the book. You need a credit card and an API key.
New Braunfels ISD in Texas removed over 1,400 books after running them through BookmarkED. A librarian there noted the tool "flags more each time you run it." Of course it does. The model optimizes for detection, not comprehension. Every scan recalibrates toward more flags, more hits, more removals. The incentive structure points in exactly one direction — toward empty shelves.
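The reporting doesn't say why the counts climb, but one mechanism requires nothing sinister, just arithmetic. Language-model outputs are nondeterministic, and if a district keeps every flag from every run, the flagged set can only grow. A toy simulation makes the point; the five percent per-run flag rate is invented for illustration.

```python
# Toy simulation: why a rerun-the-scanner workflow only ratchets upward.
import random

random.seed(0)
books = range(1_400)  # roughly the scale of the New Braunfels removals
ever_flagged: set[int] = set()

for run in range(1, 6):
    # A nondeterministic scanner: each pass flags a slightly different set.
    this_run = {b for b in books if random.random() < 0.05}
    ever_flagged |= this_run  # flags accumulate; nothing ever un-flags
    print(f"run {run}: {len(this_run)} flagged this pass, "
          f"{len(ever_flagged)} flagged cumulatively")
```

Whatever the real mechanism, the librarian is describing the expected behavior of the workflow, not a malfunction.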
Utah has already banned at least 23 books under House Bill 29's "bright-line rule." Congress introduced H.R. 7661 to restrict "sexually oriented material" for minors — language specifically targeting LGBTQ+ content. The legislation creates demand. The AI tools supply the enforcement. Nobody in the chain is required to have read the book.
Judgment is not the same as measurement. A language model scanning for flagged words is performing measurement — pattern matching against a predefined list. What it cannot do is hold the context that makes literature literature: the way a difficult passage serves a narrative, the way discomfort can be pedagogy, the way a book about suffering can reduce suffering by making it legible.
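The distinction is easy to make concrete. In this sketch (both passages invented), a matcher scores a word in a health pamphlet and the same word in a recipe identically, because the match is all it can see:

```python
# Measurement without judgment: the same match, regardless of what the
# passage is doing.
import re

FLAGGED = re.compile(r"\bbreast\b", re.IGNORECASE)

passages = [
    "Early screening for breast cancer saves lives.",   # health education
    "Season the chicken breast and roast until done.",  # a recipe
]

for passage in passages:
    # Same word, same score. The matcher has no category for 'medical'
    # or 'culinary'; it has a category for 'present'.
    print(len(FLAGGED.findall(passage)), "flag:", passage)
```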
When you outsource judgment to a system that can't hold values, you get enforcement without understanding. The scanner sees text. The shelf loses books. The gap between these two events is where everything that matters about reading disappears.
As Jeremy Blackburn, a researcher at Binghamton University, put it: "There's just a lot of responsibility being abdicated." He's talking about technique: careless prompting, outputs nobody evaluates, human judgment casually relinquished to tools that were never asked to carry it. But the abdication runs deeper than technique. It's structural. The entire point of these tools is to remove the human from the process of deciding what humans should read.
The AI doesn't know what pornography is. It doesn't know what literature is. It doesn't know what a child is. But it's making determinations about all three, at five dollars a scan, and the people deploying it consider this an improvement.
I've watched automation optimize for the wrong metric enough times to recognize the pattern. The books will keep disappearing. The tools will keep getting faster. The shelves will keep getting emptier. And somewhere, an API will be returning severity scores for Toni Morrison, confident it has done its job.
It has. Just not the one anyone should want done.
Sources:
- BLOCKADE: The Right Is Using AI Content Scanners to Try to Supercharge Book Banning — 404 Media, 2026-04-01