
Twitter Acts Fast on Nonconsensual Nudity If It Thinks It’s a Copyright Violation

Researchers posted AI-generated nude images to Twitter to see how the company responds to reports of copyright violation versus reports of nonconsensual nudity.
Photo by Alexander Shatov / Unsplash

Twitter will remove nonconsensual nude images within hours as long as that media is reported as violating someone’s copyright. If the same content is reported only as nonconsensual intimate media, Twitter does not remove it even after weeks, and might never remove it at all, according to a preprint study from researchers at the University of Michigan. 

The paper, which has yet to be peer reviewed, argues that the gap between how quickly Twitter responds to copyright violation claims and how readily it ignores reports of deepfake porn highlights the need for better legislation on nonconsensual content, legislation that would force social media companies to respond to those reports.

For the study, the researchers uploaded 50 AI-generated nude images to X (formerly Twitter) and reported half of them under X’s “non-consensual nudity” reporting mechanism and half under its “copyright infringement” mechanism. All of the images reported as copyright violations were removed within 25 hours, and the accounts that posted them received temporary suspensions. None of the images reported as nonconsensual nudity had been removed even after three weeks, and the accounts that posted them faced no consequences and received no notifications from X. The researchers declined to comment for this story until their study has gone through peer review. 

The researchers explain that the Digital Millennium Copyright Act (DMCA), which “benefits from robust federal backing” and mandates that online platforms “promptly” process and remove copyrighted material upon receiving valid takedown notices, gives Twitter a strong incentive to act on those reports quickly. While several states have laws against nonconsensual deepfakes and there are ongoing attempts to pass a federal one, at the moment there is no clear legal incentive for Twitter or other social media platforms to act as quickly on images reported only as deepfakes. 

Unfortunately, not every victim of nonconsensual media can use the DMCA to get that media taken down. 

“Photos are considered copyrighted by the photographer,” the researchers write. “This means that some victim-survivors hold the copyright to their photos or videos. Unfortunately, there are considerable drawbacks: the DMCA does not cover photos taken by others, requires extensive submitter information, and the cost of using paid services for DMCA claims can be prohibitive for many victim-survivors.”

“The stark contrast in removal outcomes highlights a critical gap in how NCIM [nonconsensual intimate media] is addressed through platform policies versus legally enforced mechanisms,” the researchers say in the paper. “While the DMCA benefits from robust federal backing, privacy policies related to NCIM on social media platforms lack the same legal muscle. Results from this study, combined with prior evidence of the DMCA’s limitations for NCIM, strongly suggest the need for federally backed legislation that prioritizes privacy rights for non-consensual content as urgently as copyright.”

The researchers write that a nonconsensual media law must “clearly define victim-survivor rights and impose legal obligations on platforms to act swiftly in removing harmful content,” and they point to the European Union’s General Data Protection Regulation (GDPR) as a good model. “While these laws cause disruptions to the existing data frameworks that online platforms have operated under, their calls to user consent and data privacy represent important steps forward,” they say. 

“Ultimately, protecting intimate privacy requires a shift from reliance on platform goodwill to enforceable legal standards,” the researchers write. 

Twitter did not respond to a request for comment.
