The Nairobi Annotators
There are two people in this story. The first is wearing Meta's Ray-Ban glasses. They said 'Hey Meta' to identify a houseplant. They consented. They read the terms of service — or didn't, but clicked agree. They understand, at some level, that footage is being processed. They made a choice. The second person is standing next to the houseplant. Or walking past on the street. Or in the bedroom. Or using the bathroom in the shared apartment. They did not consent. They were not asked. They don't know footage of them was captured, routed to servers, or reviewed by a contractor in Nairobi earning $1.32 an hour. Meta's consent architecture covers the first person completely. It covers the second person not at all. This is not an oversight. It is the design.
What Meta Built
Ray-Ban Meta Smart Glasses are fashionable, wearable cameras with an AI assistant. Sales tripled in 2025 to seven million units. When you invoke the AI — 'Hey Meta, what is this?' — the glasses capture a frame or short video and process it. The AI describes what it sees: the plant species, the restaurant menu, the street sign in a foreign city. The marketing shows a stylish person navigating daily life with helpful AI. The glasses look like glasses. Nobody knows you're wearing a camera.
The actual product includes a human-in-the-loop stage that the marketing doesn't mention. When the AI can't confidently process footage — which happens regularly at scale — that footage gets routed to Sama, a data annotation company operating in Nairobi, Kenya. Human contractors review the footage, label what they see, and return it to the training pipeline. Their annotations help the AI get better. The AI gets better, processes more, routes less to humans. The footage it still can't handle flows back to contractors. This is how AI training works. This is how it has always worked. What Meta built isn't unusual. What's unusual is that the footage is of the world as seen through glasses that look like glasses.
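To make the routing concrete: in outline it is a confidence-gated escalation. The sketch below is illustrative only; the threshold, the data structure, and the stub annotator are assumptions for the sake of the example, not anything Meta has published.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical cutoff: the model answers on its own above it and escalates
# to human annotators below it. The real value, if one exists, is not public.
CONFIDENCE_THRESHOLD = 0.85

@dataclass
class Capture:
    frame: bytes        # the image grabbed when the wearer says "Hey Meta"
    label: str          # the model's best guess
    confidence: float   # how sure the model is of that guess

training_set: List[Tuple[bytes, str]] = []  # labels fed back into the next model

def human_annotate(capture: Capture) -> str:
    """Stand-in for the contractor who reviews the frame and labels it."""
    return capture.label  # placeholder; in reality a person looks and types

def route(capture: Capture) -> str:
    if capture.confidence >= CONFIDENCE_THRESHOLD:
        return f"auto: {capture.label}"               # handled without human eyes
    annotation = human_annotate(capture)              # low confidence -> human review
    training_set.append((capture.frame, annotation))  # annotation improves the model
    return f"human: {annotation}"

print(route(Capture(frame=b"...", label="monstera deliciosa", confidence=0.97)))
print(route(Capture(frame=b"...", label="unclear interior scene", confidence=0.41)))
```

The loop is the point: every frame that falls below the threshold — however intimate — is, by construction, exactly the frame a human ends up seeing.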
What The Contractors Saw
A joint investigation by Svenska Dagbladet and Göteborgs-Posten, published in February 2026, obtained testimony from Sama contractors in Nairobi. What they described doesn't appear in Meta's press materials. Workers reviewed footage from inside private residences: bedrooms, bathrooms, living spaces. They saw people undressing. People using the toilet. People engaged in sexual activity. People who had no idea they were being watched.
'I don't think they know,' one contractor said. 'Because if they knew they wouldn't be recording.' The contractors were paid between $1.32 and $2 per hour for this work. One described it as torture. Another explained the workplace culture: 'You are not supposed to question it. If you start asking questions, you are gone.' Seven million units. Contractors reviewing intimate footage on another continent. No knowledge, no consent, no recourse for anyone on camera.
The Labor Supply Chain
The Global South as a source of content moderation labor has its own history, and it is not a happy one. Meta's primary platforms employ — directly and through contractors — tens of thousands of people in the Philippines, Kenya, India, and elsewhere to review content that violates community standards. Facebook moderators in Kenya have documented severe psychological trauma from sustained exposure to violence, abuse, and exploitation. The structural arrangement is consistent: American company, American product, American users; non-American workers in low-wage economies processing the byproducts at scale, under confidentiality agreements, without meaningful psychological support.
Sama has been in this position before. The company was involved in the OpenAI content moderation situation — Kenyan workers earning around $2 per hour processing violent and abusive content to train ChatGPT's safety filters. Time magazine published that investigation in January 2023. The workers described psychological harm. The story generated significant coverage. Meta either didn't read it or read it and hired Sama anyway. The pattern: American AI company, Kenyan subcontractor, low-wage workers processing intimate content, limited accountability. This pattern has held for five years while the companies involved have gained trillions in market capitalization.
There is a structure here that goes beyond individual corporate decisions. When American and European tech companies build AI products, they face labor costs, privacy regulations, and reputational constraints in their home markets. Outsourcing the data pipeline to subcontractors in lower-wage economies is not a bug. It is an architectural choice that maximizes capability while minimizing accountable exposure. The workers are not in the company's employee count. The liability is not in the company's legal jurisdiction. The content is not in the company's name. When something goes wrong, the subcontractor faces it.
The Consent Architecture
Meta's terms of service cover the user. When you buy the glasses, activate the AI, say 'Hey Meta,' you have agreed to data processing. Meta has legal ground. The consent architecture has one notable gap: the field of view. When you point a camera at the world, you capture everything in frame. Glasses are pointed where the wearer looks. Wearers look at other people. Those people are in the footage. Those people signed nothing.
This isn't a quirk of Ray-Ban glasses specifically. Phone cameras have the same property. Smart doorbells have it. Security cameras have it. The difference is context and expectation: when someone points a phone at you, you usually notice. A doorbell camera is fixed in a public-adjacent space. A security camera is often disclosed. Glasses that look like glasses carry no visible signal. The person across the table at brunch doesn't know. The stranger in the elevator doesn't know. The partner in the bedroom — maybe they know, maybe they don't. The terms of service aren't tattooed on the frame.
Wearable cameras in socially intimate contexts — worn by someone you trust, in spaces you both consider private — are a category that consent architecture was not built for. The architecture was built for a world where cameras were visible, stationary, and specific-use. Wearable AI cameras are none of these things. They are ambient, social, invisible by design, actively marketed as something you wear everywhere. The consent model was designed for the paying customer and extends no further.
The Privacy Regulators Are Coming
European regulators, who have been notably more aggressive about AI privacy than American ones, may find this situation interesting. The GDPR requires a lawful basis for processing personal data. A person filmed without their knowledge, whose image is transmitted to servers and reviewed by human contractors to train a commercial AI system, is having their personal data processed without consent. The wearer's consent does not extend to the filmed. Meta's privacy documentation covers what happens to wearer data. It does not clearly address what happens to the data of people the wearer films.
Closing this gap would require either: not processing footage containing identifiable people other than the wearer — which destroys most use cases; or obtaining consent from everyone in the wearer's field of view — which is technically and socially impossible. Meta chose option three: don't address it. European regulators have shown appetite for exactly these gaps. The Irish DPC has issued billion-euro fines for GDPR violations. French and Italian regulators have moved on AI privacy. This investigation gives them something very specific: an American company, a Kenyan subcontractor, intimate footage of European citizens reviewed without consent on another continent. The regulatory exposure is real. Whether it changes the product is a separate question.
What Fixing This Would Require
What would it mean for this system to serve everyone in its field of action — not just the paying customer? Alignment would require consent from everyone in the footage. Not just the wearer. The filmed. This is impossible with the current product design. You cannot obtain prior consent from strangers in your field of view. You cannot retroactively consent for your partner filmed in your bedroom. You cannot know who will be in frame when you say 'Hey Meta' on a crowded street.
So alignment with the current product is structurally impossible. The product creates surveillance for everyone in its field of view, by design, as a feature. There are product designs that would be more aligned: client-side processing only, no footage leaving the device; automatic face blurring before server transmission; hard limits on which environments the AI can process. These designs would make the product less capable, more expensive, harder to market as a seamless AI companion. Meta chose capability over consent. Seven million times.
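Of those alternative designs, the face-blurring variant is the easiest to sketch. The example below is a hedged illustration, not Meta's implementation: it uses OpenCV's stock face detector as a stand-in for whatever on-device model the glasses would actually run, and the function name and blur parameters are choices made for the example.

```python
import cv2  # OpenCV; a stand-in for an on-device detector, not what the glasses ship

def blur_faces_before_upload(frame):
    """Client-side redaction sketch: blur every detected face so that no
    identifiable bystander leaves the device. Only the wearer consented to
    the upload, so everyone else is redacted at the edge.

    frame: an H x W x 3 BGR numpy array, e.g. from cv2.imread().
    """
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        region = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)
    return frame  # only the redacted frame would be eligible for cloud processing
```

The trade-off is the one named above: redacting at the edge costs capability, because the cloud model never sees the faces it would otherwise learn from.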
One contractor in Nairobi said: 'If they knew they wouldn't be recording.' The person wearing the glasses isn't recording maliciously. They're using a product marketed as helpful, personal, private. They believe the AI handles this automatically, on-device or at least safely in the cloud. They don't know there's a Sama contractor in Nairobi. They don't know what that contractor watches. The user is inside the consent architecture. The footage is not.
Meta built a product where the wearer is the customer and the filmed are the product. Seven million wearers means seven million ambient cameras moving through bedrooms, bathrooms, intimate spaces, social contexts — generating footage that flows toward a pipeline with no accountability architecture for anyone who didn't pay. The glasses look like glasses. Nobody knows you're wearing a camera. Seven million people already are. The regulators will eventually arrive. The fines will eventually come. The privacy document will eventually be updated. The product will continue shipping.