“Workers See Everything”: Meta Faces Backlash Over Smart Glasses Footage
In March 2026, the allure of the Ray-Ban Meta smart glasses was punctured by a disturbing revelation about the human cost of AI training.
While the device is marketed as a seamless blend of fashion and technology, investigations revealed that personal videos captured by users are being transmitted to human reviewers in Nairobi, Kenya.
These contractors are tasked with watching and labeling footage to refine Meta’s AI algorithms, effectively turning private living spaces into data sets for strangers thousands of kilometers away.
The nature of the content being reviewed is deeply invasive, with workers reporting exposure to highly intimate moments.
Reviewers have described seeing users in bathrooms, during moments of medical vulnerability, or in states of undress. Sensitive details such as bank card numbers and private digital conversations are also frequently captured in high definition.
Despite Meta's assurances that safety measures like face-blurring are in place, whistleblowers claim these protections frequently fail, leaving the identities of both the wearers and unsuspecting bystanders exposed.
The controversy highlights a significant "transparency gap" between tech giants and their consumers. Most users remain unaware that engaging with "Hey Meta" AI features grants the company permission to subject their footage to human oversight. This lack of clarity culminated in a major U.S. lawsuit in early March 2026, accusing Meta of deceptive data practices.
