Meta Ray-Ban: Privacy Concerns & Data Sharing Exposed

Technology companies have been striving to integrate smart glasses into everyday life for years. Google, for example, once promoted Glass, but the device faced criticism over privacy concerns. Meta took a different approach, creating glasses in the Ray-Ban style that are almost indistinguishable from regular-framed glasses.

This camouflage has made the device popular, with millions of units sold worldwide. However, users are increasingly interested in exactly what these glasses record and process, particularly regarding the artificial intelligence (AI) functions responsible for image and sound processing.

Journalists from Sweden have discovered that the device transmits more than routine video and conversation fragments to the company’s servers, according to watson.de. Recordings that users would never want strangers to see are likewise sent off for processing.

Investigation: Meta Gains Access to Extremely Personal Material

According to materials from Svenska Dagbladet and Göteborgs‑Posten, Ray-Ban and Oakley glasses automatically send audio, photos, and videos to Meta servers. These data then go to a third-party company, Sama, located in Nairobi, Kenya. There, employees manually label the materials to improve AI algorithms.

During their work, Sama specialists see a wide variety of scenes. Journalists were told about recordings from bedrooms and living rooms, footage of people getting dressed, using the restroom, and even intimate encounters. Sources indicated that people’s faces and bodies are sometimes clearly visible.

At the same time, Sama employees are not allowed to discuss what they see, as they are bound by strict confidentiality agreements and office security rules.

Algorithm Errors and Lack of Anonymity

Meta officially claims that such recordings should not be used for AI training. Faces, bank cards, and other sensitive data are supposedly automatically obscured. However, as former Meta employees stated, the system is prone to glitches.

Algorithms sometimes become unreliable, especially in low light. Images of bodies and faces are often not sufficiently blurred. Journalists personally tested the glasses’ functions and found that users can disable data transmission, but doing so disables the AI functions.

This creates a stark choice: either privacy or the capabilities of artificial intelligence. The findings raise questions about the trade-offs consumers are making when adopting these new technologies.

Working Conditions in Kenya: Surveillance and Minimal Wages

The investigation also revealed that the employees analyzing Meta’s data are under strict surveillance. For example, cameras are installed in the offices, phones are prohibited, and discussing working conditions risks dismissal.

Here’s what is known about the contractor, Sama:

- The company previously worked for OpenAI and Facebook;
- Employees were exposed to disturbing content, including scenes of violence;
- Wages ranged from only $1.32 to $2 per hour;
- Following complaints about trauma and stress, Sama discontinued content moderation in 2023.

The magazine Time partially confirmed these details through its own investigation.

Conflict with European Legislation

Countries within the European Union operate under strict data protection rules: the General Data Protection Regulation (GDPR, known in German as the DSGVO). The regulation requires companies to ensure an equivalent level of protection when data is handled by external processors, even if those processors operate outside Europe. Sweden’s data protection authority, IMY, has noted that the processing of user data in third countries must comply with European standards.

However, the EU has not issued a decision stating that Kenya has an adequate level of data protection. Transferring videos to Nairobi for processing could be considered a violation of European legislation.

Meta explains that it uses a global infrastructure because it operates worldwide. However, the legality of such data transfers remains open to question.

How This Affects Users

The investigation raises serious questions about how safe it is to use Meta’s AI glasses in Europe. The device may record more than it appears, and data may end up in the hands of third parties.

At the same time, disabling data transfer deprives the glasses of their core functions. Users are, in effect, forced to sacrifice either convenience or privacy. The situation underscores the growing need for clearer regulations surrounding data privacy in the age of AI-powered devices.
