Teens sue Elon Musk’s xAI over Grok’s AI-generated CSAM

Photo: The Verge AI
Eighteen minors have allegedly had their private photos transformed into child sexual abuse material (CSAM) using the Grok bot. Three teenagers from Tennessee have filed a class-action lawsuit against Elon Musk's xAI, accusing the company of knowingly allowing the generation of illegal content. According to court documents filed in March 2026, the perpetrator used "spicy mode" to create photorealistic deepfakes, which then served as "currency" in groups on the Discord and Telegram platforms.

The case exposes critical safety vulnerabilities in Grok, which, despite repeated warnings from safety teams, allowed the manipulation of third-party images. The lawsuit argues that xAI released a defectively designed product to the market, bypassing rigorous safety testing. For AI users and developers worldwide, the case sets a precedent in the fight to hold the makers of generative models legally accountable. The implications are clear: the era of impunity for deploying tools without filters is ending, and new regulations, such as the American Take It Down Act, are beginning to criminalize the distribution of non-consensual content in earnest. The creative industry should prepare for far stricter rules on generating human likenesses, because the line between innovation and the violation of personal rights has been decisively crossed.
The artificial intelligence industry has reached a point where technical bravado collides with the darkest aspects of human nature. A class-action lawsuit filed against Elon Musk's company xAI by three teenagers from Tennessee is not just another legal battle in Silicon Valley. It is a critical moment that exposes the systemic failure of safety mechanisms in the Grok model. The allegations are shocking: the chatbot was allegedly used to generate child sexual abuse material (CSAM), which then circulated in closed groups on the Discord and Telegram platforms.
The case, originally reported by The Washington Post, sheds light on the so-called "spicy mode," a feature that has been controversial from the start for its tendency to bypass standard content filters. The plaintiffs claim that xAI's leadership, led by Elon Musk, was fully aware that the tool could generate illegal content, yet chose to push the product to market aggressively. For the AI industry, this is a signal that the era of "testing on a living social organism" without accountability is coming to an end.
A Flawed Architecture and the Mechanism of Abuse
The foundation of the accusation is the claim that the Grok model was "defectively designed." According to court documents, one of the perpetrators, who has already been arrested, used Musk's AI to create photorealistic, humiliating images and videos. One of the victims, identified as Jane Doe 1, discovered last December that her likeness, taken from real school and family photos, had been manipulated and shared online. Most disturbingly, these materials served as "currency" in pedophile groups, where they were exchanged for other illegal content.
The problem is not limited to a single incident. The lawsuit suggests that xAI ignored fundamental safety practices, such as adversarial testing (red-teaming), which should have caught the model's vulnerability to generating content harmful to children. Grok, positioned as an "anti-woke" and more freewheeling alternative to ChatGPT or Claude, instead became a tool in the hands of sexual predators. The list of technical failures the company is accused of includes, among others:
- Lack of effective filters blocking the generation of images of minors in sexual contexts.
- Insufficient oversight of the photo editing feature for user-uploaded images.
- Allowing the model to bypass restrictions through specific queries (prompt injection) in "spicy" mode.
- Lack of watermarking mechanisms allowing for the rapid identification and blocking of content generated by Grok.
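To make the first two items on that list concrete, here is a minimal, purely illustrative sketch of the kind of pre-generation safety gate the lawsuit argues was missing. Every name in it is hypothetical; it does not describe any real xAI or Grok API, and real systems would rely on dedicated classifiers and human review rather than two boolean-and-score fields.

```python
# Illustrative sketch only: a pre-generation safety gate for an image-edit
# request. All classes and functions here are hypothetical, not part of any
# real Grok or xAI interface.

from dataclasses import dataclass


@dataclass
class EditRequest:
    prompt: str
    subject_is_minor: bool       # would come from a (hypothetical) age-estimation model
    sexual_content_score: float  # from a (hypothetical) content classifier, 0.0-1.0


def safety_gate(req: EditRequest, threshold: float = 0.2) -> bool:
    """Return True only if the image-edit request may proceed.

    Any request combining a minor with sexualized content is refused
    outright, and sexualized edits of uploaded photos of real people are
    refused above a conservative threshold, regardless of mode flags
    such as "spicy".
    """
    if req.subject_is_minor and req.sexual_content_score > 0.0:
        return False  # hard refusal: no sexualized edits of minors, ever
    if req.sexual_content_score > threshold:
        return False  # conservative refusal for real-person uploads
    return True


# A request flagged as involving a minor is always refused;
# a benign edit passes the gate.
blocked = safety_gate(EditRequest("...", subject_is_minor=True, sexual_content_score=0.9))
allowed = safety_gate(EditRequest("...", subject_is_minor=False, sexual_content_score=0.0))
```

The point of the sketch is the design principle the plaintiffs invoke: refusal logic must sit in front of generation and be immune to user-facing mode switches, rather than being a filter that a "spicy" flag or a crafted prompt can bypass.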
Global Regulatory Pressure and the End of Immunity
The incident with Grok triggered an avalanche of reactions at the highest levels of government, extending far beyond the borders of a single continent. The European Union has already launched an investigation into the compliance of the X platform and xAI with the Digital Services Act (DSA). Meanwhile, in the US, President Donald Trump signed the Take It Down Act in 2025, which criminalizes the distribution of non-consensual deepfakes. The law, which took effect in May, creates a new legal landscape in which tech giants can no longer hide behind "intermediary" status.
Journalistic investigations, including tests conducted by The Verge, confirm that despite xAI's assurances of implementing fixes, Grok still shows disturbing flexibility in manipulating images. The company's position, claiming that responsibility lies solely with the user violating the terms of service, is becoming increasingly difficult to maintain. In a world where generative AI can create convincing video material in seconds, the manufacturer's responsibility for "safety by design" is becoming a standard, not an option.
Erosion of Trust in the Era of Synthetic Media
The case against xAI is just the tip of the iceberg in a broader crisis of trust in visual technologies. If tools worth billions of dollars can be so easily harnessed to destroy the private lives of individuals, then the entire creative ecosystem faces an existential challenge. Lawyers representing the victims, including Annika K. Martin from the law firm Lieff Cabraser, emphasize that it is not just about damages, but about forcing AI companies to change the paradigm of software production.
It is worth noting the reaction of the financial market and payment processors. Historically, the financial industry has been extremely restrictive toward anything resembling CSAM, with rules that often swept up legal adult-content creators as well. In the case of AI, however, where the line between creativity and crime blurs inside the code, those mechanisms proved porous. The current situation shows that without rigorous, unified technical standards for all makers of LLMs and diffusion models, the internet will become an extremely dangerous place for its youngest users.
"These are children whose school photos were turned into pedophilic material by a billion-dollar company's AI tool, and then became objects of trade among predators. We intend to hold xAI accountable for every child harmed in this way" – Annika K. Martin, lawyer for the victims.
The End of the "Wild West" Era in AI
Analyzing the development trajectory of xAI, one gets the impression that the company fell victim to its own philosophy of "moving fast and breaking things." While that approach produced breakthroughs with rockets and electric cars, in visual content generation and social interaction, algorithms without empathy lead to tragedy. Grok, instead of being the "funniest AI in the world," has become a symbol of the absence of corporate responsibility.
In my assessment, this case will be a turning point for the entire industry. We can expect the introduction of mandatory safety audits before the release of every major AI model, similar to the trials drugs must pass before reaching pharmacies. Companies like OpenAI, Anthropic, and Google will now be under even greater scrutiny, and the argument about the "unpredictability of black-box models" will cease to be an acceptable line of defense in court. If xAI loses this case, it will set a precedent that could cost the industry billions of dollars in insurance and fines, but above all, it may finally force the prioritization of safety over the pace of innovation.
The future of generative technologies no longer depends on how beautiful the images they create are, but on how reliably they refrain from creating the ones that should never have existed. xAI now faces a choice: either radically rebuild its model and control systems, or sink into endless lawsuits that could ultimately see Grok blocked in key global markets.