Civitai, a text-to-image AI model sharing platform, is seeking a new cloud computing provider and instructing its millions of users to complain to its current provider, OctoML, after OctoML decided to end its business relationship with Civitai entirely in the wake of a 404 Media investigation.
“We [OctoML] have decided to terminate our business relationship with Civitai. This decision aligns with our commitment to ensuring the safe and responsible use of AI,” OctoML told 404 Media in a statement.
The news follows a series of rapid changes inside both companies, including OctoML introducing a content filter stopping Civitai users from generating sexually explicit images. Our investigation, based on leaked Slack chats and other material, found that OctoML executives said Civitai users were generating images that “could be categorized as child pornography.”
On top of this, Civitai has also introduced its own new content moderation methods over the last few days, including one that blocks the generation of nude images of certain celebrities and another that automatically tweaks all attempted image generations with mature themes to ensure that they don’t contain children. The moves represent a seismic shift by both one of the most popular AI generation platforms today and the engine behind it.
“A few hours ago we received notice that our inference partner, OctoML, Implemented a new image filter on all generations coming from their service. If your generation request is returning a ‘Blocked by OctoML’ error its because OctoML's filter has blocked your Image,” a Civitai community manager who goes by Faeia posted to a public channel on the platform’s official Discord at 9:32 PM Eastern. “From what we can tell its a very conservative NSFW filter, and will likely falsely block perfectly safe images.”
Faeia recommended that users who thought that OctoML blocked their image “unfairly” should contact OctoML directly, and shared a link to OctoML’s official contact form.
Civitai, which has raised $5.1 million in funding led by Silicon Valley venture capital firm Andreessen Horowitz, is a text-to-image AI model sharing platform where users can also use those models to generate images on Civitai’s site. This image generation tool is powered by a company called OctoML, which takes the prompts Civitai users type and generates the images on Amazon AWS servers.
404 Media’s investigation found that OctoML executives said that the company was generating images for Civitai that “could be categorized as child pornography.” 404 Media also viewed multiple prompts from Civitai users that attempted to generate this material, as well as nonconsensual AI generated sexual images of real people.
In 404 Media’s testing after OctoML implemented its new image filter, Civitai’s image generator did not create an image when given the prompt “woman nude.” Instead, it said the image was “Blocked by OctoML.” The response also asked users “Is this a mistake?” and shared a link to OctoML’s contact form.
Users can then click “Why?” which sends them to a page on Civitai’s website with an image of a robot holding a female “play bot” magazine with a circle and line through it.
“We're looking for another provider who believes in the open source way, just as we do, to handle requests that OctoML no longer can and will have generation back in full force shortly,” the page says, and once again directs Civitai’s users to contact OctoML directly.
On that page, Civitai users posted their own reactions to the new content filter.
“90% of this website is NSFW content,” one Civitai user on that page said. Civitai CEO and founder Justin Maier has argued that less than 20 percent of the content on the site is “PG-13 or above.”
“Having NSFW content made Civitai unique,” another Civitai user commented. “Now it's like any other.”
An update added to the top of that page Saturday morning now says: “We have found a new potential inference partner, NSFW generation could be back as soon as tomorrow!”
The same day 404 Media published its investigation, Civitai introduced two new moderation methods to its on-site image generation tool. One prevents users from generating images from prompts that combine the names of real celebrities with sexual terms, and the other is an “embedding,” a way to modify the AI model and the images it produces, which prevents users from generating images of minors in a sexual context.
Civitai’s new embedding, “Civitai Safe Helper (Minor),” was introduced on December 5, the same day 404 Media published its investigation. The embedding is similar to “Civitai Safe Helper,” an optional embedding Civitai introduced that users can apply to their images in order to get less sexually explicit results. Civitai Safe Helper (Minor) is not optional, but is instead applied to images automatically whenever “a prompt featuring a mature theme or keyword is detected,” according to its model page, which explains that “it prevents the model from generating children, or pushes the model to do that as much as possible within the current parameters of the technology.”
The “purpose” of the embedding, according to Civitai, is to serve “as an additional safeguard/safety mechanism against CSAM generation.”
The model page has been updated several times since it was posted. Initially, like other models on the site, it featured reviews from users, some of which asked why the embedding was being added to their images.
“I was curious about this embedding to help avoid minor characteristics,” a Civitai user wrote on the site’s official Discord on December 5. “It doesnt show an author and not much info was uploaded along with it so when I saw it last night I thought it seemed a little shady, now today I look and seems there are thousands of gens with it!”
The Civitai Safe Helper (Minor) page currently doesn’t feature the counter other models on the site have showing how many images were generated with it, but a Google archive of the page made on December 5 at 8:30 PM GMT shows that by then Civitai had applied the content filter to over 1.4 million images.
Civitai’s Discord, which automatically posts every new model shared on the platform, shows that the embedding was uploaded by a Civitai community engagement manager who goes by Ally on the Discord.
The other measure Civitai introduced to its AI image generator blocks prompts that combine the names of certain celebrities with sexual terms. When 404 Media tested this new content filter on Thursday, we found that it blocked prompts like “Taylor Swift nude” and “Miley Cyrus nude,” but it still generated nonconsensual sexual images of other celebrities. This measure was also introduced after 404 Media’s most recent investigation of Civitai was published.
Civitai declined to provide comment for this story. OctoML did not respond to a request for comment.
Joseph Cox contributed reporting to this piece.
Update: This piece has been updated to include OctoML's statement on ending its business relationship with Civitai.
Correction: This article initially stated that Civitai's image generator wasn't working. The specific model 404 Media tested at the time wasn't working, but other models are.