Workers in Kenya Are Watching Your Private Ray-Ban Meta Glasses Footage

A Swedish investigation has revealed what many privacy advocates feared: workers in Nairobi are watching your private Ray-Ban Meta glasses footage. Bathroom visits, intimate moments, credit card details — all being reviewed by data annotators at Sama, Meta's subcontractor in Kenya, to train the company's AI.
What the Investigation Found
Reporters from Svenska Dagbladet traveled to Nairobi and interviewed Sama data annotators who described viewing deeply private video clips from Western homes. Their job is to manually identify objects in footage captured by Meta's smart glasses to train AI models. The content they described includes:
- Bathroom visits — people recorded without their knowledge
- Sexual acts — intimate footage captured by glasses-wearing users
- Credit card details — financial information clearly visible
- Children and family members — people who never consented to being filmed
Workers described feeling uncomfortable going to work, knowing what they'd be asked to review that day.
Meta's Defense — and Its Holes
Meta claims faces in annotation data are automatically blurred to protect privacy. But annotators in Kenya reported that this anonymization frequently fails — faces are sometimes clearly visible, especially in difficult lighting conditions. Ex-Meta staff confirmed that faces are "supposed to be blurred" but acknowledged the system isn't perfect.
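For a concrete sense of why automatic blurring can fail, consider how detection-based anonymization pipelines typically work: a face detector proposes regions, and only those regions get blurred. The Python/OpenCV sketch below is our own simplified illustration, not Meta's actual system; it shows that any face the detector misses, for example in poor lighting or at an odd angle, passes through completely unblurred.

```python
# Simplified illustration of detection-based face blurring (not Meta's pipeline).
# The privacy protection depends entirely on the detector: faces it fails to
# find are returned to the annotator untouched.
import cv2

def blur_faces(frame):
    """Blur every face the detector finds; undetected faces stay visible."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Detection is the weak link: dim rooms, partial faces, and motion blur
    # all reduce recall, and a missed detection means no blurring at all.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    for (x, y, w, h) in faces:
        frame[y:y+h, x:x+w] = cv2.GaussianBlur(
            frame[y:y+h, x:x+w], (51, 51), 0
        )
    return frame  # anything the detector missed is still clearly visible

```

In other words, the guarantee is only as strong as the detector's recall, and recall tends to be worst on exactly the kind of footage (dim bathrooms, odd angles, fast movement) the annotators described.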
There's a fundamental problem with Meta's approach: even with blurred faces, the context of the footage — someone's bathroom, bedroom, or private moments — is itself deeply personal information. Blurring a face doesn't make a bathroom video not a bathroom video.
The Consent Problem
The people being filmed by Meta glasses wearers never consented to having their footage sent to Kenya for human review. The glasses wearer might have agreed to Meta's terms of service, but the person standing across from them at dinner, or the family member walking past in their home, certainly didn't.
This is the fundamental tension with always-on camera devices: the person wearing them makes a choice. The people around them don't.
The Bottom Line
Meta is building AI that understands the visual world — and the training data comes from real people's real lives, captured without their knowledge, reviewed by workers in another country who are paid a fraction of what the footage's subjects earn. The privacy implications are staggering, and "we blur the faces" is not an adequate response when the content itself is the violation.