Chris Vallance, Senior technology reporter

Meta founder Mark Zuckerberg demonstrated the Ray-Ban Meta glasses in September 2025
The UK data watchdog is writing to Meta following a "concerning" report claiming outsourced workers were able to view sensitive content filmed by the company's AI smart glasses.
Meta said subcontracted workers might sometimes review content, including films and images, captured by its AI smart glasses for the purpose of improving the "experience".
Videos, including of glasses-wearers using the toilet or having sex, are sometimes reviewed by a Kenya-based Meta subcontractor, according to an investigation by Swedish newspapers Svenska Dagbladet (SvD) and Göteborgs-Posten (GP).
"We see everything - from living rooms to naked bodies," one worker reportedly said.
Meta said it took the protection of people's data very seriously, and was constantly refining its efforts and tools in that area.
"Ray-Ban Meta glasses help you use AI, hands free, to answer questions about the world around you," the tech giant told BBC News.
"When people share content with Meta AI, like other companies we sometimes use contractors to review this data to improve people's experience with the glasses, as stated in our Privacy Policy," it added.
"This data is first filtered to protect people's privacy."
According to Meta, filtering could include blurring faces in images - but sources who spoke to SvD and GP said this sometimes failed and people's faces could be seen.
Users have to activate recording manually or through a voice command, but may not realise their videos and images are sometimes reviewed by humans - as described within Meta's extensive privacy policies and terms of service.
In Meta's UK AI terms of service the company says "In some cases Meta will review your interactions with AIs... and this review may be automated or manual (human)."
But the UK's data watchdog, the Information Commissioner's Office (ICO), told BBC News "devices processing personal data, including smart glasses, should put users in control and provide for appropriate transparency".
"Service providers must clearly explain what data is collected and how it is used," it said in a statement.
"The claims in this article are concerning. We will be writing to Meta to request information on how it is meeting its obligations under UK data protection law."

Sama's office in Nairobi, pictured in 2023
The workers the Swedish papers spoke to were data annotators, teaching Meta's AI to interpret images by manually labelling content.
The BBC has approached Sama for comment on the report.
The workers said they also reviewed transcripts of interactions with the AI to check it had answered questions adequately.
They described privacy protections in their workplace, with cameras everywhere, and no mobile phones permitted.
But the content they saw was often extremely sensitive, they said, including glasses-wearers watching pornography.
In one instance, a worker told the newspapers, a man's glasses were left recording in a bedroom where they later filmed a woman, apparently the man's wife, undressing.
Meta's glasses have a light in the corner of the frames that is turned on when the built-in camera is recording images or videos.
The firm warns against misuse of the tech, advising users to show others when the recording light is on and avoid recording in private spaces.
BBC News has approached the glasses-makers' parent company, EssilorLuxottica, for comment.
Rapid advancements in AI have resulted in a proliferation of wearable gadgets that use AI to interpret images and sounds captured by the device.
Features can include translating text, or responding to questions about what the user is looking at - a particularly useful feature for those who are blind or partially sighted.
However, as the devices have grown in popularity, so too have concerns about their misuse.
Women have previously told the BBC they were filmed without their consent by users of smart glasses.
Data annotation firm Sama began as a non-profit organisation, with the aim of increasing employment through the provision of tech jobs.
It is designated as an "ethical" B-corp but a previous contract providing content moderation services to tech companies attracted criticism, alongside legal action by former employees.
It has since stopped providing content moderation services and later said it regretted taking on this kind of work.