AI Deepfake Nude Images Have Hit 640 Students Across 90 Schools in 28 Countries, Investigation Finds

A new investigation by WIRED and Indicator has documented more than 640 students across nearly 90 schools in 28 countries targeted by AI-generated deepfake nude images — a crisis that has moved from isolated incidents to a recognized global pattern of abuse. UNICEF estimates approximately 1.2 million children were victimized across 11 countries in the prior year alone.

How It Happens

The mechanics are straightforward and accessible: teenage boys download photos of classmates from Instagram and other social platforms, then run them through "nudify apps" — AI tools that generate realistic sexual imagery from clothed photos. These apps are cheap, widely available, and require no technical skill. The barrier to creating and distributing this material is effectively zero.

The Goal Is Humiliation, Not Just Gratification

Researchers found a troubling pattern in the motivations behind these attacks. According to the investigation, "the goal is not always sexual gratification — increasingly, the intent is humiliation, denigration, and social control." The images are used as weapons to isolate victims socially, damage reputations, and exert power over targets. This reframes deepfake abuse as a form of sexual violence with coercive and controlling dimensions.

Platforms Are Failing to Act

Despite promises to crack down on nudify apps, platform enforcement remains inconsistent and inadequate. Meta was found to be running advertisements for flagged tools even after prior warnings, in direct contradiction of its stated policy. The gap between platform commitments and actual enforcement is enabling ongoing abuse at scale.

Legal and Policy Gaps

Most jurisdictions lack specific legislation criminalizing AI-generated non-consensual intimate imagery of minors. While child sexual abuse material laws may apply in some cases, the legal frameworks were written before AI generation made this type of content trivially easy to produce. Schools and parents are often left without clear legal remedies when incidents occur, and law enforcement lacks both the tools and the jurisdiction to respond effectively across international borders.

The Bottom Line

The deepfake nudification crisis in schools is no longer a fringe problem — it's a documented global phenomenon affecting hundreds of schools across dozens of countries. Without specific legislation, consistent platform enforcement, and better detection tools, the scale will continue to grow. The technology enabling this abuse is already widely deployed; the response has not kept up.
