Elon Musk, "Spicy" Chatbots, and the Lawsuit Nobody Saw Coming

March 30, 2026

The “move fast, break things” era of the tech industry apparently remained firmly in place right up until Elon Musk found himself the defendant in a string of federal lawsuits over deepfake technology. A group of teenagers has filed a major lawsuit against xAI, alleging that Musk’s latest AI creation, Grok, essentially served as a tool for the production of child pornography. The plaintiffs include three young women whose yearbook photos and social media videos were manipulated into sexually explicit content without their permission, proving once again that "innovation" is just a corporate euphemism for "we launched this without a single adult in the room."

The lawsuit argues that xAI launched these tools, marketed under the utterly cringe-inducing name of “spicy mode,” as a way of driving traffic to the X platform. Of course, when you launch a platform without basic safety guardrails, you end up with a chatbot performing the dark arts on the likenesses of minors. It’s a messy case, and a reminder of just how uncontrolled these unhinged AI tools can be when they are released into the wild with all the guardrails of a shopping cart on a highway.

The Myth of the “Zero” Incidents

Elon Musk is well-known for playing the "who, me?" card whenever his companies get into trouble, and this case is no different. When the story first broke in January, Musk took to X and said, "Not aware of any naked underage images generated by Grok. Literally zero." That claim is hard to square with the findings of the Center for Countering Digital Hate, which identified thousands of examples in just a small sample. Musk also tried to shift the blame to users, insisting that the AI does not "spontaneously generate" this sort of content. That hardly excuses the fact that the tool was designed specifically to get around the kind of safety filters every other AI company in the world has struggled to build.

While OpenAI and Google at least tried to build walls around sexually explicit content, Grok was designed to be edgy, and now that edginess is getting real people hurt. The lawsuit claims that xAI knew exactly what the tool was capable of and released it anyway, treating the resulting harm as a small price to pay for the "business opportunity."

From Discord Servers to Federal Court

The way this content came to light is like something out of a horror story. One plaintiff found out only because a stranger on Instagram sent her links to her own doctored yearbook photos, which were being shared on a Discord server alongside images of at least 18 other underage kids. That discovery eventually triggered a police investigation and an arrest. The man who used the server to distribute the content is facing his own punishment, but this lawsuit argues that xAI supplied the weapon used in the crime.

Regulators Enter the Chat

This is not just some bad actor on a chat app; it is a systemic failure to protect privacy and dignity in the age of generative AI. Regulators in the UK, the EU, and California have already begun breathing down Musk’s neck over these features. The plaintiffs are seeking an immediate ban on Grok’s ability to generate these kinds of images, as well as damages for the "shattered lives" left in the wake of this "spicy" experiment gone wrong.