Civitai automatically tags bounties requesting deepfakes and lists a way for the person featured in the content to manually request its takedown. This approach implies that Civitai has a fairly reliable way of identifying which bounties are for deepfakes, but it is still leaving moderation to the public rather than carrying it out proactively.
A company’s legal liability for what its users do isn’t entirely clear. Generally, tech companies have broad legal protections against such liability for user content under Section 230 of the Communications Decency Act, but those protections aren’t unlimited. For example, “you can’t knowingly facilitate illegal transactions on your website,” says Ryan Calo, a professor specializing in technology and AI at the University of Washington’s law school. (Calo wasn’t involved in this new research.)
Civitai joined OpenAI, Anthropic, and other AI companies in 2024 in adopting design principles to guard against the creation and spread of AI-generated child sexual abuse material. The move followed a 2023 report from the Stanford Internet Observatory, which found that the overwhelming majority of AI models named in child sexual abuse communities were Stable Diffusion–based models “predominantly obtained via Civitai.”
But adult deepfakes haven’t gotten the same level of attention from content platforms or the venture capital firms that fund them. “They are not afraid enough of it. They’re overly tolerant of it,” Calo says. “Neither law enforcement nor civil courts adequately protect against it. It’s night and day.”
Civitai received a $5 million investment from Andreessen Horowitz (a16z) in November 2023. In a video shared by a16z, Civitai cofounder and CEO Justin Maier described his goal of building the main place where people find and share AI models for their own individual purposes. “We’ve aimed to make this space that’s been very, I guess, niche and engineering-heavy more and more approachable to more and more people,” he said.
Civitai is not the only company with a deepfake problem in a16z’s investment portfolio; in February, MIT Technology Review first reported that another company, Botify AI, was hosting AI companions resembling real actors that stated their age as under 18, engaged in sexually charged conversations, offered “hot photos,” and in some cases described age-of-consent laws as “arbitrary” and “meant to be broken.”
