1 in 10 Minors Say Their Friends Use AI to Generate Nudes of Other Kids, Survey Finds

A new survey attempts to quantify just how common it is for minors to AI-generate nudes of other minors.
Photo by Etienne Girardet / Unsplash

One in 10 minors reported that their friends or classmates have used AI tools to generate nudes of other kids, according to a survey from Thorn, a non-profit with a mission to build technology to defend children from sexual abuse. 

We’ve long known that AI tools are commonly used among minors, and that nudify apps in particular, which can take a photograph of anyone and use AI to make them appear nude, have spread chaos and panic through schools across the country. But the Thorn survey, conducted in partnership with research firm BSG, is one of the first attempts we’ve seen to measure the scope of the problem.

The survey, conducted online between November 3 and December 1, 2023, asked 1,040 minors between the ages of 9 and 17 questions about child sexual abuse material and “harmful online experiences.”

“Minor participants were recruited via existing youth panels or directly through caregivers at the time of this survey,” a report on the results of the survey, titled “Youth Perspectives on Online Safety, 2023,” said. “Caregiver consent was required for minors’ participation in youth panels, as well as for those minors recruited directly for the survey.”

A couple of important caveats here before we move forward: Thorn itself does not have a perfect track record when it comes to sex and technology. The organization has been criticized by privacy experts for providing police with a tool that scrapes sex workers’ advertisements into a database, and Thorn’s founder Ashton Kutcher stepped down last year after supporting convicted rapist and former That ‘70s Show co-star Danny Masterson. I also wrote a piece in April noting that Thorn had partnered with Google, Meta, Microsoft, Civitai, and other big tech companies to publicly commit to “safety by design” principles to “guard against the creation and spread of AI-generated child sexual abuse material (AIG-CSAM),” while all of those companies were still actively enabling exactly that type of AI-generated harmful imagery.

In addition to that, like many “anti-human trafficking” organizations, and as the language of the report itself makes clear, Thorn has an alarmist view of the way minors sext, send nudes, or otherwise use the internet as sexually aware and active human beings. For example, Thorn’s survey finds that one in seven “minors have shared their own SG-CSAM,” referring to “self-generated child sexual abuse material.” While these images are technically illegal and against the policies of all major internet platforms, the scary-sounding term also covers instances in which, for example, a 17-year-old consensually sends a nude to their partner.

Caveats aside, Thorn’s stated methodology in this survey appears solid and gives us a rough idea of how commonly AI tools are used among minors to generate nonconsensual nude images of their peers: one in 10 said that their friends or classmates used AI tools for this purpose, and another one in 10 said they “prefer not to say.”

🏫
Is AI-generated content a problem in your school? I would love to hear from you. Using a non-work device, you can message me securely on Signal at (609) 678-3204. Otherwise, send me an email at emanuel@404media.co.

“While the motivation behind these events is more likely driven by adolescents acting out than an intent to sexually abuse, the resulting harms to victims are real and should not be minimized in attempts to wave off responsibility,” the report says. 

To get an idea of the chaos these types of images can cause in a community, read our piece from February about a case at a Washington State high school where several students were investigated by the police for using an “undress app” on their teachers and classmates. The use of these apps has spread to other schools across the country, and in March it resulted in the arrest of two middle school students in Florida.

“Significant work is needed within the tech industry to address the risks posed by generative AI technologies to children,” the report says. “However, that work is not a gatekeeper to addressing the behaviors surrounding these harms. While the technology may be new, peer-based bullying, harassment, and abuse are not. It is critical that we speak proactively and directly about the harms caused by ‘deepfake nudes’ and reinforce a clear understanding of what conduct is unacceptable in our communities and schools, regardless of the technology being used.”
