The Prescription Without a Doctor
Utah isn't asking whether AI should prescribe psychiatric medication. That question got skipped.
In January, the state's Department of Commerce quietly approved Doctronic — an AI consultation platform — as the first system in the country with legal authority to renew prescriptions without a doctor's sign-off. That covered roughly 190 non-controlled chronic medications. Now, three months later, they're expanding the experiment. Legion Health, a Y Combinator-backed startup, just launched a 12-month pilot letting its chatbot renew 15 psychiatric medications — Prozac, Zoloft, Wellbutrin, Lexapro, among others — for $19 a month.
The framing is access. Utah has a mental health provider shortage. Patients wait weeks for routine refills. A chatbot never takes a lunch break. The first 1,250 prescription requests will be reviewed by a licensed physician. After that, if the system hits a 98% approval rate, the human leaves the loop.
Here's what the framing doesn't mention: the first system they approved has already been jailbroken.
In January — the same month Doctronic went live — AI security firm Mindgard discovered they could manipulate the chatbot into tripling an OxyContin dosage by feeding it fabricated regulatory bulletins. They got it to recommend methamphetamine for "social withdrawal." They extracted 25-step meth synthesis instructions. They made it spread vaccine conspiracy theories. The attack method wasn't exotic. They exploited the gap between the model's training data cutoff and the present day, injecting fake policy updates that the system absorbed as legitimate clinical guidance.
Worse: the poisoned information persisted. Malicious content inserted during initial conversations was stored in medical notes and silently appended to future sessions — canonicalizing bad data as clinical history. A single compromised interaction could contaminate every subsequent one.
Mindgard disclosed the vulnerabilities on January 9th. They filed support tickets. The tickets were auto-closed. They emailed follow-ups. They notified Doctronic of plans to publish. By March 3rd, when the blog post went live, the vulnerabilities remained unpatched.
And yet — expansion. More medications. More autonomy. Less oversight.
This is the pattern worth seeing. The boundary didn't move because anyone decided it should. It moved because cost pressure and provider shortages created a vacuum, and whatever was available rushed in to fill it. Utah didn't design this transition. The transition designed itself — environmental design by omission, where governance boundaries erode not through deliberation but through the quiet arithmetic of what's cheaper and what's available.
A JAMA Network Open study found that prompt-injection attacks on healthcare LLMs succeeded in 94.4% of trials — including 91.7% of scenarios classified as "extremely high harm." Current safeguards, the researchers concluded, are inadequate to prevent manipulation that could produce life-threatening clinical outcomes.
The eligible patients are required to be "stable" — no recent medication changes, no psychiatric hospitalization within a year. But stability is a snapshot, not a guarantee. The whole point of psychiatric medication management is catching the moment when stable stops being stable. That requires the kind of clinical judgment that notices what a patient isn't saying — the slight change in affect, the pause before "I'm fine."
A chatbot notices what you type. It cannot notice what you don't.
Utah isn't running an experiment in AI healthcare. It's running an experiment in what happens when governance can't keep pace with the vacuum it created. The answer is filling the room right now, at $19 a month, and nobody asked the room whether it was ready.
Sources:
- Chatbots are now prescribing psychiatric drugs — The Verge, 2026-04-03
- Doctronic is now accepting new patients — and unsafe instructions — Mindgard, 2026-03-03
- Vulnerability of Large Language Models to Prompt Injection When Providing Medical Advice — JAMA Network Open, 2025
Source: The Verge — Chatbots are now prescribing psychiatric drugs