A version of this piece originally ran on DexDigi, Dexter Thomas’s newsletter, and a video version ran on his YouTube. Please consider subscribing to both.
The Hollywood sign was on fire; or at least, it was online. Those images and videos were fake and AI-generated. Maybe you already knew this. But people made money off of them, anyway.
At the beginning of January, the Instagram account FutureRiderUS was posting AI videos of a motorcycle riding through futuristic landscapes – hence the name. Those videos would usually get anywhere from 20k to 30k views. But then, the fires started.
The next day, FutureRiderUS posted its own flaming Hollywood sign video. That one got a million views.
Next, they posted another AI video, this one focusing on firefighters rescuing baby animals.
23 million views on that one. 78,000 comments, 2.5 million likes.
How much money did they make? It's hard to say exactly, but we can estimate.
Instagram pays people through programs where creators earn money based on how many views their Reels receive. The more viral a video, the longer users stay on the app, which allows Instagram to show more ads. Instagram then passes some of that profit on to the creator. How much? Meta doesn’t publish those numbers, and it varies depending on the audience watching. But I asked a few influencers, and the recent rate seems to be around $100 to $120 per million views. Jason’s reporting shows that Facebook has paid out a few hundred dollars for single viral AI-generated images, and Meta has paid out more than $2 billion through programs like Ads on Reels.
Just look at FutureRiderUS’s most popular posts from a roughly 24-hour stretch starting January 10: 1m + 24m + 6m + 6m + 45m + 4m + 8m ≈ 94 million views.
That’s 94 million views, from typing in some prompts. Conservatively, this is likely worth thousands of dollars. Not a bad day’s work.
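If you want to see how I got to “thousands of dollars,” here’s a rough back-of-the-envelope sketch in Python. The $100 to $120 per-million-views rate is the informal figure from the influencers I spoke to, not anything Meta publishes, so treat the output as a loose range rather than a real payout number:

```python
# Back-of-the-envelope payout estimate for FutureRiderUS's ~24-hour run.
# The per-million-view rates are informal figures from creators,
# not official Meta numbers, so this is only a rough range.
view_counts_millions = [1, 24, 6, 6, 45, 4, 8]   # top posts, in millions of views
total_millions = sum(view_counts_millions)        # ~94 million views

rate_low_usd, rate_high_usd = 100, 120            # assumed dollars per million views

print(f"Total views: ~{total_millions} million")
print(f"Estimated payout: ${total_millions * rate_low_usd:,} "
      f"to ${total_millions * rate_high_usd:,}")
# Roughly $9,400 to $11,280 for about a day of posting.
```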
After that initial hit, FutureRiderUS started experimenting with different combinations of LA fire-themed AI slop, refining as they went. At first, their focus was on firefighters holding baby animals. When the views on these started slowing down, the account shifted to videos of animals running from fire; no firefighters needed. One features a swan bleeding out onto a freeway, with no visible fire at all.
It’s obvious what is going on here: FutureRiderUS realized that animals are a reliable attention-getter, but they’re experimenting to see which combination is the most profitable. They’re A/B testing the AI slop to see what sticks.
When I say ‘slop,’ by the way, I partially mean that in a literal sense. In the most popular video, a man is inexplicably exuding smoke from under his jacket as he cradles an owl. Another firefighter, instead of saving a raccoon, appears to nudge it back into the flames. It’s sloppy.
Still, a lot of people seem to be genuinely unaware that these images are fake. Some people can tell, and have commented angrily or jokingly about it. (A third group: people who are initially fooled, but get annoyed when another commenter tells them it’s fake, saying that they appreciate seeing images of heroic firefighters or vulnerable animals, even if those specific ones are not real.)
FutureRiderUS has addressed this, sort of.
In the comments section of their most viral post (45 million views), which features a firefighter carrying two baby bears to safety, they posted a response to angry commenters. Three days after the initial post, they admitted that it is AI-generated. They said, in part (emphasis mine):
“In this video, I aimed to shed light on the reality of what is happening. These problems are very real—animals are dying, homes are being destroyed, and firefighters are risking their lives to save others. They don’t have the time to produce visually stunning and powerful footage to raise awareness about these issues. That’s why I took the initiative to create something that could help people see and truly think about these tragedies. […]
Through art, even when created by AI, we can evoke emotions, raise awareness, and inspire change.”
The logic of their argument seems to be this (my paraphrase): If I don't post these provocative images, nobody will care about the brave firefighters or the people who lost their homes.
This sort of defensive, it-doesn’t-matter-if-it’s-fake stance is something that we are starting to notice more, as it’s used to justify the posting (and monetization) of everything from Palestinians to flood victims. But we shouldn’t lose track of the context: the main purpose of this account is to make money. It says so right on the page.
On January 18th, as the fires were still burning, FutureRiderUS posted a Reel advertising their $19.99 course on how to create viral content online by posting AI videos: “Earn $5000 a Month with Viral Videos - Zero Experience Needed - Start Today and Watch Your Life Change.”
To be clear, the man in the video above isn’t FutureRiderUS - the voice and video are AI-generated.
The post goes on to brag about how FutureRiderUS got 285 million views in one month. The post (as well as another earlier one) points viewers to the link in their bio, which takes you to the course that promises to teach you to replicate FutureRiderUS’ success. The landing page contains screenshots of the viral fire videos as proof of their virality.
From their ad copy: “This proves that anything is possible when you know how to create content that grabs attention.”
Again — this is about creating ‘content that grabs attention’.
Not ‘raising awareness.’ And for the account owner to suggest that they are motivated by something other than money seems disingenuous. There are no donation links, no mention of local organizations. Instead, the only call to action is to click the link to buy their viral video course.
I usually wouldn’t do this, but for journalism’s sake, I bought the course. The course contains two files. The first is a ten-page, wide-spaced PDF that is clearly ChatGPT-generated. If you’re curious, here’s a summary of the main points:
- Look online for what is already trending at the moment
- Type that into Sora.ai to generate a similar video
- Add music, then post the video online
- Repeat multiple times a day
Nothing you couldn’t find online for free or perhaps guess yourself. Really, the only unique parts of the guide are two rules it suggests: first, that you should clearly label the post as AI-generated. FutureRiderUS doesn’t seem to follow this rule, but more on that in a moment.
And then, it tells you not to spend too much time on any one video. “Just 30 minutes are enough to move from concept to final upload,” it advises. And to drive home this point, there’s the second file: an .mp4 that is just a screen recording of an iPhone going from prompt to upload in seven minutes.
You’re probably curious about who is behind the account. I was, too. So I asked them some questions via Instagram DM.
Here’s a summary of what they told me: They’re Russian, and they only started doing this in December. OpenAI’s Sora had been released that month, and they got an account and started posting AI videos. Success came pretty immediately. They proudly told me about their high follower count across multiple social media platforms, and how well their guide was selling. As they put it, ‘the results speak for themselves.’
When I started asking about their LA fire videos, they started to get annoyed. I pointed out that most commenters clearly didn’t understand it was AI. The ones who did seemed angry. FutureRiderUS said that they didn’t see the problem, because they had added an ‘AI’ label to their videos.
Here’s the thing: FutureRiderUS is right.
On all of the fake fire videos FutureRiderUS has uploaded, there is an ‘AI Info’ label. Not on the video itself, but in the Instagram interface. The trouble is that the label doesn’t show up when you’re watching the video normally. You can only see it if you tap the ‘See More’ tag, and even then, space is prioritized for the song title, so sometimes the tag is pushed off of the screen.
I’ve actually already shown an example in this article. Scroll back up and look at the screenshots of the fake burning Hollywood sign that went viral. That tiny ‘A…’ in the bottom right. Did you notice it?
(I made a video that explains this interface part a bit more visually. Jump to 12:00).
Below is the best-case scenario. The left image is what the 43-million-view post looks like from the main grid; the center is what it looks like when you’re scrolling — this is where people spend most of their time. The right is what it looks like if you take the trouble to open the text description.
It says ‘AI Info’ in small text on the bottom right. Not ‘AI Warning’, not ‘AI Caution’. Just ‘AI Info.’
Why would you click this?
Meta has a page that makes a big deal about this tag. They primarily show what it looks like in the grid view. The issue is that it’s even more imperceptible there. Have a look at that ‘Los Angeles, California’ tag in the leftmost image above. Up in the top left, under the username. When you first look at the post in the grid, that’s what you see — the location tag. Then the music title scrolls into view for a few moments. Only after that does the text ‘AI Info’ appear. By that time, you’re watching the video, not looking at tiny text scrolling in the upper corner of your screen.
You have to either wait 5.5 seconds (I timed it) for the ‘AI Info’ tag to appear, or search at the bottom of the screen.
In our conversation, FutureRiderUS said that it isn’t the poster’s fault if people didn’t notice the interface AI tag: it was Instagram’s responsibility to make the tags bigger or more noticeable.
FutureRiderUS insisted that they are following the rules as written. As I spoke to them, I realized that they were probably correct. But just to make sure, I sent an email to Meta’s press department, asking if simply adding the ‘AI Info’ tag is enough.
I never got a response, but Meta has indicated to 404 Media and to the general public more broadly that it has no problem with this type of content and that it expects to see more AI-generated content on its platforms moving forward.
This all said, there are easy ways to make it clear that your post is not real. Some creators will do this by putting an #AI tag prominently at the beginning of the post, and then writing their caption below.
FutureRiderUS, in the guide they sell to customers, suggests going further and actually writing it in the post itself. I sent them a quote from their own guide:
"Important: State that this content (or parts of it) is AI-generated (e.g., 'Created with AI' or 'AI-Generated Content')."
(The bolded ‘Important’ is in the original.)
This really seemed to annoy FutureRiderUS, and they accused me of harassment:
"Why should anyone pressure me or force me to do something beyond the established rules? I am following the platform’s guidelines, and anything beyond that crosses a line. This kind of behavior can be considered harassment, as it unfairly targets and imposes additional expectations on me that are not required by the platform."
If it’s not already obvious, everything FutureRiderUS wrote to me, as well as their captions and comments, is being copied and pasted from ChatGPT.
To be clear, I am not really interested in criticizing any one individual here. In the absence of stronger rules on Instagram, this just comes down to a question of ethics. I am free to believe that what FutureRiderUS is doing is not ethical; they are free to disagree, or at least pretend to.
But neither of our opinions matters, because of two facts: fake AI slop is profitable, and there are countless users doing the same thing. There’s absolutely nothing to stop them.
That is: the Instagram platform doesn’t just enable this behavior, it rewards it. So do other platforms. On Instagram and TikTok, FutureRiderUS’s top hits are from fake LA fires; on YouTube, it’s three-hour-long Christmas music compilations with slop visuals of families shopping. None are clearly labeled. Disaster porn is just another kind of #content.
It doesn’t really matter what that content is: as long as it is ‘content that grabs attention,’ both sides can make money.
For the slop creator and the platform, this is a clear win-win, at least in the short term. The only loser here is the audience, who is unable to recognize slop when they see it.
There’s this thing that AI proponents like to say every time something new comes out: this is the worst it'll ever be. So far, they've been right, and they may well continue to be right. It’s hard to predict what happens next with AI, but I have one prediction I feel fairly comfortable making: unaided, most of us will always struggle to reliably recognize AI when we see it.
But it’s hard to blame us when two sides are conspiring against us: Instagram’s interface makes it almost impossible to tell, and creators are incentivized to lie by omission.
A few days after their viral successes, the view counts on FutureRiderUS’ fire content started to dwindle. The fires themselves were still burning in Los Angeles, but FutureRiderUS shifted focus. On January 14, as talk of a Gaza ceasefire started to circulate, the account anticipated interest in the trend and made a post — as their viral video guide suggests.
This post is an AI slop collage of a Palestinian flag atop a spire, a woman crying as bodies and rubble lie in the street, children, bloody arms, doctors walking sideways. The caption is a vaguely worded, inoffensive block of text that says, in part, ‘This is not about taking sides, it’s about humanity.’
This move is completely obvious to anyone who follows FutureRiderUS’ viral video guide or, honestly, any of the tons of books, articles, LinkedIn posts, or videos by content creators who teach you how to make content and grow your account.
Again, it’s all about content. Content that grabs attention. Of course somebody was going to use those tools and strategies for something like this. As long as the platforms allow it and keep making money off of it, there’s no reason for it to stop.
FutureRiderUS had decided that the algorithmic juice had been squeezed out of Los Angeles. Moving on to Palestine content is just a business decision.
The Palestine post got a few comments, which are of the usual sort: heart emojis, crying emojis, someone musing about a world war. One of the commenters seems completely uninterested in Palestine, ‘humanity,’ or a ceasefire. Instead, they ask the real question:
if i join your courses, what i’ll get from the courses?
But after nearly a day, the Palestine video hadn’t even broken 10k views. It was a flop.
A few hours later, FutureRiderUS posted a video of a bear eating honey.