Over the weekend, AI-generated audio of Vice President JD Vance saying Elon Musk is “cosplaying as a great American leader” who is making the administration “look bad” circulated widely on social media. On Sunday, Vance’s communications director William Martin said on X that “This audio is 100% fake and most certainly not the Vice President.” Martin’s post had quoted another X post that shared the audio, but that post has since been deleted.
While we don’t know which specific piece of software was used to create the audio, Reality Defender, a firm that detects deepfakes and AI-generated disinformation, said its software flagged the audio as “likely fake.”
"We ran it through multiple audio detection models and discovered it to be a likely fake,” a Reality Defender spokesperson told me in a statement. “The background noise and reverb were also likely added to deliberately mask the quality of the actual deepfaked audio for further obfuscation."
While Martin was reacting to the audio being shared on X, it appears to have circulated earlier on TikTok. One TikTok video of the audio, posted yesterday and not labeled as AI-generated, now has more than 2 million views and 8,000 comments, the first of which says “With the rise of AI, I don’t know what to believe.”
Technically speaking, the audio sounds entirely believable. The voice sounds exactly like Vance, and the static in the audio sounds much like other secretly recorded audio of politicians that has leaked to news organizations in the past. As Reality Defender notes, the added static also makes it more difficult for automatic deepfake detectors to recognize the audio as fake. The audio has also been reposted to TikTok dozens of times, as well as to YouTube and X.
This type of AI-generated content is rampant on TikTok despite the company’s policies, which prohibit sharing misinformation and require users to label AI-generated content. In February, for example, I wrote about hundreds of videos that used an AI-generated voice of Donald Trump to promote various scams.
TikTok did not immediately respond to a request for comment.
While we don’t know exactly what software was used to create the audio, cloning people’s voices with AI voice generation tools is extremely easy. Last year, I reported that it was still possible to clone the voices of celebrities and politicians with tools from ElevenLabs, the biggest company in this space, even after the company introduced policies and safeguards against that practice. In March, a Consumer Reports assessment of six AI voice cloning products, including ElevenLabs, also found that those products have no meaningful safeguards to prevent people from misusing them.
While the AI-generated Vance audio is just the tip of the iceberg in terms of all the misleading AI-generated media that exists on TikTok and other platforms, it’s not clear that this particular use of AI-generated audio has much of a political impact. As we’ve written for years, people tend to believe in line with their priors whether the media they see online is authentic, a deepfake, or just crudely edited, and Vance has put out a clear denial. It is, however, a sign of how easy it is to produce AI-generated media that does cause real harm in the form of petty scams, an avalanche of AI slop, and nonconsensual content.