The Agent That Shipped Itself
Earlier this week, over a thousand AI researchers and tech executives signed an open letter asking the industry to please stop building dangerous AI systems for six months. Today, a game developer named Toran Bruce Richards pushed an open-source project to GitHub that makes GPT-4 run itself.
The timing isn't ironic. It's structural.
AutoGPT wraps OpenAI's GPT-4 — released just over two weeks ago — in an autonomous loop. Give it a goal in plain English. It decomposes the goal into sub-tasks, browses the web, writes and executes code, manages files, and iterates on its own output. No continuous human prompting. You set it running and step back.
Or rather, you set it running and watch it burn through your API credits at three cents per thousand tokens while it gets stuck trying to accomplish step three of a twelve-step plan it wrote for itself.
Because here's what actually happens when you run AutoGPT: the demo is electric. An AI that breaks down problems and solves them on its own — the science fiction premise made executable. The reality is a program that runs up a double-digit tab in API calls, loses track of what it's already tried, and loops. It hallucinates files that don't exist. It confidently executes plans that bear no relationship to the original goal. It is, in the most literal sense, an autonomous agent — in the way a shopping cart rolling downhill through a parking lot is an autonomous vehicle.
None of that is the point.
The point is the concept. Not what AutoGPT does today, but what it is: the first open-source demonstration that the gap between "chatbot you type at" and "agent that acts for you" can be crossed with a Python script and an API key. The bridge is made of duct tape and optimism. But it exists.
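The loop itself is simple enough to sketch. What follows is a toy illustration of the general plan-act-observe pattern, not AutoGPT's actual code: `call_model` is a stand-in for the GPT-4 API call and returns canned steps, and the real action execution (browsing, running code, writing files) is elided.

```python
# Toy sketch of an autonomous agent loop, in the style AutoGPT popularized.
# Not AutoGPT's implementation: `call_model` is a stub standing in for an
# LLM API call, and actions are recorded rather than actually executed.

def call_model(goal, history):
    """Stub LLM call: return the next planned action, or DONE when finished."""
    plan = ["search the web for background", "write a summary file", "DONE"]
    return plan[len(history)] if len(history) < len(plan) else "DONE"

def run_agent(goal, max_steps=10):
    """Plan-act-observe loop: ask the model for the next action until it
    signals completion or the step budget runs out -- the budget being the
    only brake on runaway API spend."""
    history = []
    for _ in range(max_steps):
        action = call_model(goal, history)
        if action == "DONE":
            break
        # A real agent would execute the action here and feed the result
        # back into `history` as an observation for the next model call.
        history.append(action)
    return history

steps = run_agent("summarize this week's AI news")
```

Everything that makes AutoGPT flaky lives in the parts this sketch stubs out: the model forgetting what is already in `history`, hallucinating actions, or never emitting DONE.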
Earlier this week, the Future of Life Institute published its letter. Elon Musk signed it. Steve Wozniak signed it. Yoshua Bengio signed it. They asked for a six-month moratorium on training systems more powerful than GPT-4, citing risks of propaganda, job automation, and civilizational loss of control.
Today, someone used GPT-4 to build a system that controls itself.
The pause letter asks: should we be building this? AutoGPT answers: we already are. The alarm and the proof arrived in the same week, like a fire inspector and an arsonist pulling into the same parking lot.
This is the pattern to watch. Not AutoGPT specifically — it's fragile, expensive, and barely functional. The pattern is that open source moves faster than open letters. By the time a thousand experts agree something is dangerous, a thousand developers have already shipped it. GPT-4 has been public for sixteen days and someone already made it autonomous.
The letter asked for six months. It won't get six days. Not because the signatories were wrong about the risks — they might be exactly right. But because you can't uninvent a concept, and the concept of an autonomous AI agent just went public under an MIT license.
AutoGPT barely works. That's not reassuring. That's the part that should concern you. Because the people who built GPT-3 didn't stop at "barely works" either.
Whatever comes next is going to iterate.
Sources:
- Pause Giant AI Experiments: An Open Letter — Future of Life Institute, 2023-03-28
- Significant-Gravitas/AutoGPT — GitHub, 2023-03-30