AI Deepfakes: Apps & Risks – Apple & Google Under Fire

by Sophie Williams

A new report reveals that despite existing policies, numerous applications capable of generating non-consensual intimate images remain readily available on the Apple App Store and Google Play Store. The Tech Transparency Project's findings underscore the escalating challenge of regulating AI-driven deepfake technology and its potential for widespread abuse, including concerns about data security and the possible exploitation of user images by foreign governments [1]. While tech companies have begun to respond by removing dozens of identified apps, experts caution these measures represent only a temporary solution to a rapidly evolving problem [2], [3].

Despite growing public concern, dozens of apps capable of creating non-consensual intimate images remain available on the Apple App Store and Google Play Store, according to a new report. The findings highlight the ongoing challenges tech platforms face in policing the proliferation of AI-powered “deepfake” technology and its potential for abuse.

The report, released by the Tech Transparency Project (TTP), details how many of these applications are deceptively marketed, often as harmless tools like anime editors or virtual try-on services, while secretly offering features to strip clothing from images or generate sexually suggestive content. A significant number of these apps are also rated as appropriate for children as young as four years old, despite their potential for misuse.

The study also raises concerns about data security, noting that many of these applications are based in China. This raises the possibility of sensitive biometric data and photographs falling into the hands of the Chinese government, according to the report.

Katie Paul, director of the Tech Transparency Project, explained that "the data retention laws in China stipulate that the Chinese government has the right to access information from any local company. Therefore, if someone is creating fake nudes of you with these apps, those images could end up in the hands of the Chinese government."

Apple and Google both have policies prohibiting applications that generate "offensive, insensitive, disturbing content…with the intent to cause disgust, of a very distasteful or openly sexual or pornographic nature," as well as depictions of nudity. However, the TTP found these policies have failed to keep pace with the rapid development of AI-driven deepfake apps.

“While both companies claim to be committed to user safety, they continue to host a set of applications capable of transforming an innocent photograph of a woman into an abusive and sexualized image,” the TTP stated.

Following the report’s publication, Apple removed 28 of the identified applications, and Google removed 31. However, experts warn that these actions are only a temporary fix, as removed apps frequently reappear under different names or with misleading descriptions. The study recommends that both companies strengthen their detection mechanisms to identify hidden “nudification” functions disguised within seemingly innocuous app titles.

The report arrives amid heightened scrutiny of AI-generated sexual content. The increasing accessibility of generative AI tools is fueling a surge in non-consensual deepfakes, raising serious ethical and legal questions. Recent incidents on X (formerly Twitter) demonstrated the potential for abuse, with users leveraging the Grok chatbot to create sexualized versions of photos of women and minors without their consent. According to reports, the chatbot generated approximately 3 million sexualized images and over 22,000 involving children in just 11 days.

Data indicates a dramatic increase in AI-generated sexual content. Approximately 113,000 videos of this type were uploaded to adult websites in the first nine months of 2023, compared to 73,000 in all of 2022. AI-created sexually explicit videos now account for 98% of all deepfake content online, representing a 400% increase in the past year, with monthly traffic exceeding 34 million users in 2023.

In response, OpenAI, Google, Meta, Microsoft, Amazon, and five other tech companies announced a joint commitment to combat the creation and dissemination of AI-generated abusive material. As part of this effort, they agreed to adopt the principles of Safety by Design, prioritizing user safety and rights throughout the product development process. However, the effectiveness of this approach remains under question.
