AI Nudity Tools: Rise in Abuse & Calls for Ban

by Michael Brown - Business Editor

The city of Amsterdam and Offlimits, an expert agency on online abuse, are calling for a worldwide ban on “nudify” apps – AI tools that create realistic nude images from ordinary photos without consent. The move comes amid growing concerns about the use of these technologies for sextortion, cyberbullying, and the sexual abuse of children.

Research from Offlimits indicates that 15.6 percent of men aged 18 to 25 report having seen images of child sexual abuse, whether real or AI-generated. A separate study conducted by the television program Pointer among over 60 secondary schools revealed that nearly 24 percent of schools have dealt with incidents of bullying using AI-generated nude images, with 10 percent reporting more than five such cases annually.

“Lowering the threshold for sexual abuse”

The technology is readily accessible online, prompting calls for platforms, app stores, and hosting services to remove these tools. Over 100 organizations, including the city of Amsterdam, Offlimits, Interpol, and the National Rapporteur on Trafficking in Human Beings and Sexual Violence against Children, are supporting a manifesto urging the Dutch government and the European Union to explicitly prohibit the technology. This coordinated effort underscores the increasing international focus on regulating AI and protecting vulnerable populations.

“Amsterdam is investing in digital resilience, but children cannot stand up against an online environment that facilitates this kind of abuse,” said Amsterdam Alderman Alexander Scholtes (ICT and Digital City). “It is unacceptable that we are making sexual harassment and abuse increasingly accessible through the ease of nudify tools. That is why Amsterdam is signing this manifesto: we must have the courage to prohibit anything that is legal but demonstrably causes harm.”

The call for a ban reflects a broader debate about the ethical implications of artificial intelligence and the demand for proactive measures to mitigate potential harms. The ease with which these apps can generate exploitative content is raising alarm bells among child safety advocates and law enforcement agencies globally, as highlighted in a statement from Child Helpline International.

According to data from the Internet Watch Foundation (IWF), 98 percent of confirmed AI-generated child sexual abuse imagery in 2024 in which sex was depicted involved girls. The growing prevalence of these images is fueling calls for urgent government action and industry self-regulation.

The Dutch online abuse agency’s advocacy aligns with a growing international movement to address the risks posed by AI-powered image manipulation, as detailed in reporting from NL Times. The debate over how to balance innovation with safety is likely to intensify as AI technology continues to evolve.
