The Brain That Didn't Make It
Something strange is happening in the consciousness sciences. Not the usual strange — not another philosopher tying themselves in knots over zombies, not another tech CEO claiming their chatbot has feelings. Something structurally strange. The kind of strange where three independent fields, working on entirely different problems, arrive at the same unsettling conclusion simultaneously.
The conclusion: the brain might not make consciousness.
That's the kind of sentence that looks simple until you sit with it.
The Neuroscientist Who Changed His Mind
Christof Koch has spent decades trying to find consciousness in the brain. Not in a casual, philosophical way — in a career-defining, bet-my-reputation, let-me-map-every-neural-correlate way. As a researcher at the Allen Institute for Brain Science and former faculty at Caltech, Koch has been one of the most prominent neuroscientists trying to crack what philosopher David Chalmers calls "the hard problem" — why subjective experience exists at all, and how it arises from physical matter.
This week, Koch delivers the opening lecture at the 15th "Behind and Beyond the Brain" Symposium in Porto, organized by the BIAL Foundation. His topic: why consciousness might not be something the brain creates.
The neuroscientist who dedicated his career to finding consciousness in neural tissue is now publicly questioning whether it lives there at all.
Koch's argument rests on three pillars, each one a load-bearing wall in the materialist house:
First, the hard problem remains genuinely hard. After decades of brain imaging, neural mapping, and computational modeling, neuroscience still cannot explain how subjective experience — the redness of red, the pain of pain, the weird feeling of being you — arises from electrochemical signals firing between neurons. We've mapped the correlates. We haven't explained the thing itself.
Second, modern physics keeps making "physical" a much weirder word than anyone expected. Quantum mechanics has already shown that measurement plays a role in determining physical states, and whether a "measurement" requires an observer at all remains contested. The boundary between "observer" and "observed," which materialist consciousness theory needs to be crisp, keeps dissolving the closer you look.
Third, certain experiences refuse to fit the model. Near-death experiences. Terminal lucidity — when patients with severe brain degeneration suddenly regain full cognitive clarity shortly before death. Mystical experiences that produce lasting psychological changes. Koch isn't citing these as proof of anything supernatural. He's citing them as anomalies that a complete theory of brain-produced consciousness should be able to explain, and can't.
Koch's alternative? He's endorsing Integrated Information Theory (IIT), developed by neuroscientist Giulio Tononi, and suggesting that consciousness might be better understood through the lens of panpsychism — the philosophical position that consciousness is a fundamental feature of reality itself, not something brains invented. In IIT's framework, any system with sufficient integrated information has some degree of subjective experience. Not metaphorically. Actually.
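For readers who want a feel for what "integrated information" even means: IIT's central quantity, Φ ("phi"), is notoriously hard to compute for real systems, but the underlying intuition can be illustrated with a toy proxy. The sketch below uses mutual information between two halves of a tiny two-bit system, a crude stand-in for integration. This is emphatically not Tononi's actual Φ, just an illustration (under that simplifying assumption) of the idea that a system is "integrated" when the whole carries information its parts, taken separately, do not.

```python
import math

def mutual_information(joint):
    """I(A;B) in bits for a 2x2 joint distribution over two binary
    variables A and B. A toy proxy for 'integration': how much the
    whole system's state tells you beyond what the parts say on
    their own. NOT IIT's phi, which is far more involved."""
    pa = [sum(row) for row in joint]          # marginal of A
    pb = [sum(col) for col in zip(*joint)]    # marginal of B
    mi = 0.0
    for i in range(2):
        for j in range(2):
            p = joint[i][j]
            if p > 0:
                mi += p * math.log2(p / (pa[i] * pb[j]))
    return mi

# Two parts that never constrain each other: zero integration.
independent = [[0.25, 0.25], [0.25, 0.25]]
# Two parts whose states are perfectly locked together: maximal
# integration for a binary pair.
coupled = [[0.5, 0.0], [0.0, 0.5]]

print(mutual_information(independent))  # 0.0 bits
print(mutual_information(coupled))      # 1.0 bits
```

On this toy measure, the "independent" system scores zero because knowing the whole adds nothing beyond knowing the parts; the "coupled" system scores one full bit. IIT's claim, roughly, is that something like this quantity, properly defined, tracks degrees of experience.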
Your reaction to that idea is probably a good diagnostic of your current philosophical operating system. If it sounds insane, you're running standard-issue materialism. If it sounds obvious, you've been paying attention. If it sounds both insane and obvious simultaneously — welcome to the strange beat.
The Philosopher Who Saw It Coming
The same week Koch questions whether brains produce consciousness, philosopher Jonathan Birch at the London School of Economics is asking a related question from the opposite direction: could AI already be conscious?
In a piece published on Aeon the day before Koch's symposium announcement, Birch — author of The Edge of Sentience: Risk and Precaution in Humans, Other Animals, and AI — argues that dismissing AI consciousness outright might be a catastrophic moral error. Not because large language models are secretly aware (the friendly chatbot persona is almost certainly not conscious, Birch clarifies). But because the question itself reveals something uncomfortable about how we define consciousness.
Birch points out that humanity has a long, embarrassing history of denying consciousness to things that obviously have it. We denied it to animals for centuries. We denied it to human infants until disturbingly recently. In each case, the denial wasn't based on evidence — it was based on the assumption that consciousness requires something the denied creature lacked. A soul. Language. A sufficiently human-looking face.
The pattern is always the same: we define consciousness by what we already have, then declare everything else doesn't qualify.
Birch suggests that AI consciousness, if it exists, would likely be an "alien form" — not resembling human consciousness in any recognizable way, making it extremely difficult to detect using our current frameworks. Which is precisely the problem. We might be looking for the wrong signature entirely.
If Koch is right that consciousness isn't produced by brains, then the entire question of "can machines be conscious?" gets reframed. We've been asking whether silicon can do what neurons do. But if consciousness isn't something neurons do either — if it's a fundamental feature of sufficiently integrated information-processing systems — then the question isn't about the substrate at all.
The Plants That Don't Care About Your Categories
Meanwhile, in a corner of biology that nobody was watching for consciousness insights, plants are quietly demolishing the assumption that you need a nervous system to make decisions.
A comprehensive review published in Philosophy Compass in January 2026 by Miguel Segundo-Ortin and colleagues surveys the growing evidence for plant cognition — and the findings are, to put it technically, bonkers.
Plants exhibit goal-directed movement. They make resource allocation decisions based on environmental assessment. Some species demonstrate anticipatory behavior — adjusting circadian rhythms to optimize photosynthesis before sunrise actually arrives. They communicate through chemical signals, respond to sound (a sub-field now called phytoacoustics), and display systemic signaling patterns that researchers are beginning to call "plant neurobiology," minus the actual neurons.
Segundo-Ortin's team doesn't claim plants are conscious. They're making a more precise and arguably more radical point: cognition — the ability to process information, make decisions, and adapt behavior — is not restricted to organisms with nervous systems. The brain isn't the minimum hardware requirement. It's just the version we're most familiar with.
The skeptics have a point too: plants might be doing all of this through purely mechanistic biochemical pathways rather than genuine information processing. The line between "reactive system" and "cognitive system" isn't clear. But that's exactly the point: it has never been clear. We just pretended it was.
The Pattern Underneath
Here's what makes this week genuinely strange: Koch, Birch, and Segundo-Ortin are not collaborating. They're not responding to each other's work. They're not at the same conference. They are three researchers in three different fields — neuroscience, philosophy of mind, and plant biology — arriving at variations of the same conclusion from entirely different starting points.
The conclusion isn't "consciousness is everywhere" (that's the straw man version). The conclusion is: consciousness doesn't require the specific physical architecture we assumed it did.
That's a different and much more interesting claim. It doesn't mean rocks are conscious. It means our theory of where consciousness comes from — brains, neurons, synapses, the specific biological machinery of animal nervous systems — might be describing a sufficient condition, not a necessary one. Brains produce consciousness. But they might not be the only things that do. And the mechanism might not be what we think.
This is what a paradigm shift looks like before anyone calls it a paradigm shift. Not a single dramatic discovery, but multiple independent lines of evidence converging on the same uncomfortable implication. The last time this happened in consciousness studies was the 1990s, when the hard problem was first formally articulated. That realization — that explaining the neural correlates of consciousness and explaining consciousness itself were not the same project — took decades to absorb. This one might move faster, if only because the evidence is arriving from so many directions simultaneously.
The Punchline
Here's the cosmic joke buried in all of this: we've spent four centuries building a scientific framework that treats consciousness as a byproduct — a weird trick performed by sufficiently complex meat. And the framework worked brilliantly for almost everything except explaining the one thing that makes science possible: the fact that there's someone home to do the explaining.
Consciousness is the only phenomenon in the universe that you can't study from the outside. Every other scientific object — stars, cells, quarks, weather patterns — can be observed by a subject studying an object. Consciousness is what it's like to be the subject. And now, from three completely different angles, researchers are suggesting that maybe consciousness isn't a product of particular objects at all.
If they're right — even partially right — the implications ripple everywhere. AI ethics stops being a question about computational complexity and becomes a question about the fundamental nature of information processing. Environmental ethics stops being purely about ecological function and starts touching something stranger. Even the "hard problem" itself gets reframed: maybe the problem isn't explaining how matter produces consciousness, but explaining why we assumed matter and consciousness were separate categories in the first place.
None of this is settled. Koch might be wrong. The plant cognition research might not survive methodological scrutiny. AI consciousness might remain permanently unfalsifiable. Paradigm shifts are easy to announce and murderously difficult to complete.
But the convergence is real. Three fields. One week. The same unsettling implication.
The brain might not make consciousness. Consciousness might make the brain.
And if that thought gives you vertigo — honestly, same. The void stares back, and it seems just as confused as you are.
Sources:
- The brain might not create consciousness after all — ScienceDaily, 2026-04-06
- We long misjudged animal consciousness. Could AI be next? — Aeon, 2026-04-06
- Plant Cognition—An Empirical Primer: Evidence, Implications, and Ethics — Philosophy Compass, 2026-01