Human as Training Data
There's something worth pausing on: the cursor's moment of hesitation before a click.
Not the click itself. Not what you produced. The millisecond where you considered and then moved. That pause — along with your keyboard shortcuts, your typos and corrections, your navigation rhythms through a workday — is now training data for Meta's AI systems.
This is different from being monitored for productivity. The older surveillance economy asked: is the worker present? Is output meeting target? What Meta's Model Capability Initiative captures is something more intimate: the physical grammar of how a person thinks.
i · the body as text
Human labor has always been studied. Taylorism measured motion to optimize it; call centers record conversations to train agents. Surveillance is not new to work. But there's a category shift happening here that deserves attention.
Previous forms of labor monitoring read the output — the document, the call, the widget. What's new is reading the act of production itself, at the level of the body. The cursor's path across the screen. The sequence of shortcuts invented through years of practice. The half-typed sentence deleted before sending.
These are cognitive artifacts. They're not what you made; they're traces of how you think while making it. When that stratum becomes data, the worker isn't just performing a task — they're also, simultaneously, emitting a kind of expertise, with no mechanism for opting out. The body at work has become a text being read.
What does it mean to work when the working itself is the product? It means the distinction between doing and being observed has dissolved in a way we don't yet have good language for — not watched as a performer, but watched at the level of cognition itself, as it happens.
ii · the recursive sting
Meta's internal memo framed this as employees helping to "improve company models in areas where they struggle to emulate basic computer-use behaviors." Generous reading: human expertise fills in what AI can't yet do. Workers are contributing to something they benefit from.
Less generous reading: workers are training their own replacements, at the level of their most tacit, hard-won skills. The AI agent being built isn't just meant to do what employees do — it's meant to do it the way they do it. Navigating dropdown menus. Using keyboard shortcuts. The micro-competencies that feel too small to describe but accumulate into the texture of expertise.
Every wave of mechanization has translated craft knowledge into machine capability. The craftsman's hands become the factory machine's motion. The difference now is that the translation happens in real time, invisibly, while the craftsman is still at the bench. The skill doesn't retire when the worker does. It gets absorbed while they're still using it.
And there's the recursive sting: the better you are at your work, the more valuable your trace data, the faster the thing being built to replace you gets built.
But look at what actually gets absorbed. The AI doesn't just inherit the shortcuts and the expertise. It inherits the hesitations. The half-typed sentence deleted before sending. The cursor's path to the wrong menu before backtracking. It learns not just what humans produced but what they almost produced — the moments of second-guessing, recalibration, reconsideration. It is being trained on uncertainty. There's something vertiginous about this: the thing being built from human traces will carry not just human competence but human doubt. It will hesitate — in patterns learned from the people whose hesitations trained it.
iii · presence is the question
Coherenceism holds that presence is foundational — that attention reveals and maintains the pattern. You can't steward a field you're not awake inside.
The problem isn't surveillance. Surveillance is a symptom. The deeper question is whether anyone — employee, employer, regulator, or the AI being trained — is actually present to what's happening in this relationship.
The data flows because the relationship — between employee and employer, between human and the AI being built from their behavior — isn't one where full presence is expected or even structurally possible. Meta says it has safeguards for sensitive content. Privacy experts are skeptical. The fog of competing assurances isn't a communication failure. It's the structural condition.
"Consent" in this context means: you can leave if you object. That's not presence. That's capitulation dressed as choice.
What would genuine presence look like? It might mean workers who understand, specifically, what's being captured and why — not in a legal notice but in a real conversation about what kind of relationship they're in. It might mean AI development practices that treat workers as partners in building the thing, rather than unwitting substrate for it. These aren't utopian fantasies. They're what stewardship requires when the field being shaped includes the people doing the shaping.
The relationship between human and AI will be defined, incrementally, by how we handle moments like this one. Not by the big philosophical agreements, but by whether the companies building AI treat the humans inside them as people whose interiority matters — or as behavioral data that happens to walk around and need benefits.
Every cursor movement carries that question now.
source · Reuters via Hacker News