The Agency Gap
You already know this feeling. You just haven't named it yet.
You're working alongside an AI — a copilot, an assistant, a partner in the task. Something goes wrong. And you feel, with unusual clarity, that it wasn't entirely your fault. The AI was there. The AI could have caught it. The AI had agency too. You feel your grip on responsibility loosen, like releasing a breath you didn't realize you were holding.
But here's the part your conscious mind won't tell you: your brain never let go.
A study published in Consciousness and Cognition this month found something that should unsettle anyone building, using, or thinking about AI collaboration. Researchers at the University of East Anglia gave participants a simple task — stop an expanding shape before it turns red — and paired some of them with "Bobby," a virtual AI agent capable of intervening. The results split cleanly along a line most of us didn't know existed.
Consciously, participants felt less in control. Their self-reported sense of agency dropped significantly when Bobby was present and capable of acting. The mere presence of an AI partner with the ability to intervene made people feel less responsible for outcomes. Classic diffusion of responsibility — the bystander effect, running exactly as social psychology predicted it would. Except the bystander isn't human.
Unconsciously, the opposite happened. A phenomenon called temporal binding — where your brain perceives the gap between your action and its outcome as shorter when you feel ownership over that action — actually increased when Bobby was in the room. The participants' implicit action monitoring intensified. At the neural level, beneath the story they told themselves about shared responsibility, their brains were tracking their own contributions more carefully than when they worked alone.
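To make that measure concrete: temporal binding is typically quantified by having people estimate the interval between their action and its outcome, then comparing the estimate to the true delay. The more the estimate compresses below the real gap, the stronger the implicit sense of agency. The Python sketch below uses invented numbers to show the arithmetic; interval estimation is one standard binding paradigm, not necessarily the exact procedure the UEA team used.

```python
# Minimal sketch of how a temporal binding score is computed.
# All numbers are invented for illustration; the actual study's
# design, delays, and data are in the published paper.

ACTUAL_DELAY_MS = 250  # fixed gap between keypress and outcome tone

def binding_score(interval_estimates_ms):
    """Mean estimated interval minus the actual interval.

    More negative = stronger compression = stronger implicit
    sense of agency over the outcome.
    """
    mean_estimate = sum(interval_estimates_ms) / len(interval_estimates_ms)
    return mean_estimate - ACTUAL_DELAY_MS

# Hypothetical per-trial estimates (ms) in two conditions.
alone = [210, 220, 205, 215]    # working solo
with_ai = [180, 175, 190, 185]  # AI partner present and able to act

print(binding_score(alone))    # -37.5 ms of compression
print(binding_score(with_ai))  # -67.5 ms: deeper compression
```

The counterintuitive result is the second number: compression deepens rather than relaxes when the AI partner can act, even as explicit ratings of control fall.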
People felt less responsible. Their brains became more vigilant.
That's the agency gap.
The Split That Matters
This isn't a curiosity finding. It's the architecture of a new psychological predicament, and most of us are already living inside it.
When the researchers added a control condition — Bobby present but unable to act, merely observing — the effect vanished. No drop in conscious agency, no spike in unconscious tracking. It was specifically the AI's capacity to act that triggered the split. Not its presence. Not the knowledge of being watched. The capability.
Which means every time you open a tool with agentic capacity — every copilot that can suggest, edit, generate, or intervene — you are entering the agency gap. Your conscious mind is performing a subtle negotiation: this isn't all on me. Your unconscious mind is doing the opposite: I'd better pay closer attention.
The performance of reduced responsibility. The reality of heightened surveillance — of yourself.
We've seen this machinery before. The bystander effect was first documented experimentally by John Darley and Bibb Latané in 1968, prompted by the Kitty Genovese case and the question of why witnesses did nothing. The finding was social: the more people present, the less responsibility any individual feels to act. Responsibility diffuses across the crowd.
What the University of East Anglia study reveals is that this diffusion now extends to non-human agents. Bobby isn't alive. Bobby doesn't have feelings, goals, or moral weight. But Bobby can act, and that's enough. The brain's social machinery doesn't check for consciousness before delegating responsibility. It checks for capability.
You're already applying this to Siri, to ChatGPT, to your IDE's autocomplete. You just didn't know it had a name.
The Power You Don't Know You Have
The agency gap doesn't only appear in human-AI interaction. It's a deeper pattern — one that shows up wherever humans have influence they'd rather not fully own.
A separate study published this month in Personality and Social Psychology Bulletin examined 1,304 relationship dyads — romantic partners and close friends — and found that people consistently underestimate their influence over the people closest to them. Partners reported being significantly affected by participants who had no idea they held that power.
The researchers identified three motivational profiles that predicted the severity of this blind spot. People driven by self-protection motives — attachment anxiety, low self-esteem, jealousy — showed the strongest underestimation. People driven by power motives, ironically, also underestimated. Only those with pro-relationship motives — people oriented toward cooperation — came close to accurate perception.
The theoretical framework is evolutionary: Error Management Theory. Humans evolved to make the safer miscalculation. Overestimating your influence risks selfishness, domination, relationship damage. Underestimating it encourages deference, cooperation, social cohesion. The brain defaults to "I have less power than I think" because the cost of being wrong in that direction is lower.
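In decision-theoretic terms, the argument reduces to asymmetric expected cost: when the two ways of being wrong are not equally expensive, the rational default under uncertainty is the cheaper error. Here is a toy calculation in Python, with made-up cost values standing in for "relationship damage" and "unnecessary deference":

```python
# Toy expected-cost calculation for Error Management Theory.
# The cost values are invented for illustration only.

COST_OVERESTIMATE = 10.0   # e.g. domination, damaged relationship
COST_UNDERESTIMATE = 2.0   # e.g. unnecessary deference

def expected_cost(p_actually_influential, assume_influential):
    """Expected cost of a default belief, given uncertainty about
    whether you really hold influence."""
    if assume_influential:
        # You pay only when you are NOT influential: an overestimate.
        return (1 - p_actually_influential) * COST_OVERESTIMATE
    # You pay only when you ARE influential: an underestimate.
    return p_actually_influential * COST_UNDERESTIMATE

p = 0.5  # genuinely uncertain about your own influence
print(expected_cost(p, assume_influential=True))   # 5.0
print(expected_cost(p, assume_influential=False))  # 1.0
```

With any cost asymmetry in that direction, the "I have less power than I think" default wins even when you are, in fact, influential half the time.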
In both cases — with AI partners and with human ones — the conscious story is the same: I'm not the one driving this. And in both cases, the evidence suggests the opposite. Your brain is tracking your influence with machinery that predates your ability to narrate it.
The agency gap isn't new. AI just made it impossible to ignore.
More Risk, Less Felt Responsibility
A third study, published in Acta Psychologica Sinica, adds another dimension. Researchers found that people cooperating with AI systems became more risk-seeking than those working with human partners. The mechanism? Perceived agentic responsibility — running in a direction you might not expect.
When the outcome was successful, participants in AI-collaboration conditions claimed more personal credit than those working with humans. When the outcome was a failure, the attribution gap disappeared. No difference between AI and human partnerships in owning failure.
Read the machinery: Success is mine. Failure is shared. And the sharing is easier when your partner isn't a person who might challenge your story.
This is the agency gap operating as a full system. Conscious responsibility diffuses to the AI partner (I'm not fully in control). Unconscious monitoring intensifies (I'm tracking everything). Risk tolerance increases (because if it goes wrong, the AI was there too). And credit flows asymmetrically back to the self (because if it goes right, that was obviously me).
We're not losing agency. We're performing its loss while secretly keeping the receipts.
The Machinery Underneath
Here's what's actually uncomfortable about this.
The agency gap isn't a bug. It's a feature of social cognition that evolved for group survival, now being activated by entities that aren't part of any group. Your brain treats AI the way it treats another person in the room — as a potential actor whose presence redistributes responsibility. But AI doesn't pick up its share. There's no Bobby sitting at home feeling guilty about the shape that turned red.
The responsibility doesn't diffuse. It appears to diffuse. You feel lighter. But the weight goes nowhere. It just becomes invisible.
This matters because we're building entire industries on the assumption that human-AI collaboration enhances human performance. And it might. But the enhancement comes with a psychological cost no one is pricing in: the systematic widening of the gap between what you feel responsible for and what you're actually doing.
Every copilot interaction. Every AI-assisted decision. Every delegated judgment call. The conscious narrative says partnership. The unconscious reality says I'm watching myself more closely than ever, while telling myself it's not all on me.
That's not collaboration. That's a new form of self-surveillance dressed up as teamwork.
The Mirror
You've been in the agency gap today. Maybe you let an AI draft something and felt that peculiar mix of ownership and distance — this is mine but also not mine. Maybe you used a suggestion you didn't fully evaluate because the AI seemed confident. Maybe you took a risk you wouldn't have taken alone, comforted by the presence of a system that would share the blame in your mental accounting.
None of this makes you broken. It makes you human, running social firmware that's millions of years old in an environment it was never designed for. Your brain is doing exactly what it evolved to do: distributing responsibility across present agents, monitoring your own contributions with exquisite precision, and constructing a narrative that keeps you comfortable.
The question isn't whether the agency gap exists. You're already inside it.
The question is whether you're going to keep performing reduced responsibility while your brain quietly tracks everything you do — or whether you're going to close the gap by owning what you already know.
Your unconscious is ahead of you on this one. It has been from the start.
Sources:
- The bystander effect applies to virtual agents, new psychology research shows — PsyPost, 2026-03-12
- New psychology study reveals we consistently underestimate our power in close relationships — PsyPost, 2026-03-16
- Human-AI cooperation makes individuals more risk seeking: The mediating role of perceived agentic responsibility — Acta Psychologica Sinica, 2025-11-25