Schools Are Failing to Protect Students From Non-Consensual Deepfakes, Report Shows

A survey of teachers, students, and parents showed that schools are unprepared for non-consensual imagery, AI-generated or otherwise, spreading through communities of young people.
Photo by Kenny Eliason / Unsplash

A new report surveying thousands of teachers, parents, and students reveals that schools are dropping the ball when it comes to preventing non-consensual sexually explicit material, including AI-generated images, from spreading through their communities.

The report, published today by the Center for Democracy and Technology, is based on nationally representative surveys, conducted from July through August, of 6th-12th grade public school teachers and parents, and of 9th-12th grade students.

According to the report, 40 percent of students and 29 percent of teachers said they know of an explicit deepfake depicting people associated with their school being shared in the past school year. The repercussions for being caught making and spreading these images tended to be severe: Seventy-one percent of teachers reported that students who were caught sharing sexually explicit, non-consensual deepfakes were referred to law enforcement, expelled from school, or suspended for more than three days.  

In 2023, in the first arrest of its kind, police arrested two middle schoolers for allegedly creating and sharing AI-generated nude images of their 12- and 13-year-old classmates. Police reports obtained by WIRED showed that the children used an application to make the images; they were arrested under a Florida state law that makes it a felony to share ‘any altered sexual depiction’ of a person without their consent.

But schools aren’t doing enough to prevent these harms in the first place, the report claims. Real-world incidents back this up: Police reports obtained by 404 Media earlier this year showed how a high school’s administration and police struggled to address a wave of AI-powered “nudify” apps creating chaos among the student body. The school’s administrators failed to report that students had taken photos from other students’ Instagram accounts and used AI to “undress” around seven of their underage classmates, and they put a staffer, who was also a victim of the nonconsensual AI-generated nudes, in charge of the school’s internal investigation. Police found out about what was happening from three separate parents, not school officials or mandatory reporters, and law enforcement characterized the incidents as a possible sex crime against children.

“Schools’ hyperfocus on imposing serious consequences on perpetrators, including handing students over to law enforcement, does not alleviate a school of its responsibility to address sexual harassment under Title IX,” Elizabeth Laird, Director of the Equity in Civic Technology Project at CDT, said in a press release. 

“Nudify” and “undress” apps are easy to find and use online and are contributing to the epidemic of explicit deepfakes among teenagers. Last month, Emanuel reported that Google was promoting these apps in search results: “Google Search didn’t only lead users to these harmful apps, but was also profiting from the apps which pay to place links against specific search terms,” he wrote.

Last month, a survey by non-profit Thorn found that one in 10 minors reported that their friends or classmates have used AI tools to generate nudes of other kids. But it’s not just students who are being targeted by deepfakes and non-consensual imagery in schools. “Teachers report that they and their peers are nearly as likely to be depicted in deepfake NCII [non-consensual intimate images] as students,” the report says. “Among teachers who have heard about deepfake NCII being shared at their school, 43 percent say that a teacher, administrator, or other school staff member was depicted compared to 58 percent who say that a student was depicted.” 

In April, a Maryland high school teacher was arrested for allegedly creating AI-generated audio recordings that made it sound as if his school’s principal had made racist comments, and a student in Texas was accused of making sexually explicit deepfakes of a teacher and sharing them online. But AI isn’t necessary to make non-consensual fake images: In 2023, a student allegedly used Photoshop to put a teacher’s face onto a “sexually suggestive” image of someone else’s body and posted it to Instagram, tagging other kids in the post.

Not surprisingly, when CDT asked respondents for their opinions on consequences for making and sharing deepfake NCII, adults had more nuanced opinions than students. “When asked about the appropriate punishment, students report that they are significantly more likely to support severe consequences against their peers for this conduct than adults,” the report says. “Forty-two percent of students indicate that they do not think sending a student to jail is too harsh a punishment for sharing deepfake NCII – as opposed to parents, of whom only 17 percent and 27 percent approve of law enforcement referral for a first and second offense, respectively.”

Teachers’ insights into how schools handle deepfakes show that schools are taking a reactive, rather than proactive, approach to addressing the problem. “Teachers report that schools are doing little to enact policies and practices that proactively prevent students from sharing NCII, as previewed in the previous section,” the report says. “Few teachers report that their schools have shared policies or practices that proactively address authentic or deepfake NCII. Sixty percent of all teachers surveyed report that they have not heard of their school or school district sharing policies and procedures with teachers about how to address authentic or deepfake NCII.” 

Deepfakes are an issue in schools worldwide. In Australia earlier this year, a teenager was arrested after about 50 female students reported that fake nudes of them, made from images taken from their social media accounts and seemingly manipulated with AI, had started circulating online. Similar incidents have happened in Spain.

CDT’s report recommends schools improve support for victims, educate students and teachers on the harms and repercussions of deepfakes, and provide training on trauma-informed responses when non-consensual explicit imagery does spread. “Our research shows that schools that have experienced these issues are more likely to respond by taking action and that teachers, students, and parents are more knowledgeable and prepared as a result,” the report says. “Taking urgent and decisive action to curb these harms is crucial, and one that carries significant legal consequences for perpetrators and schools themselves. Recognizing the harm of image-based sexual abuse, forty-nine states have passed laws that enact civil and criminal penalties for engaging in this type of conduct.”

“Without meaningful efforts toward prevention, everyone involved is worse off,” Laird said. “Schools should act now by addressing the deficiencies in prevention measures, improving support for victims, and involving parents as they develop policies on deepfakes and NCII.”
