Ars Technica AI · 5 min read

Musk loves Grok’s “roasts.” Swiss official sues in attempt to neuter them.

Redakcja Pixelift

NurPhoto / Contributor | NurPhoto

Three years in prison or a heavy fine: that is the penalty for publishing offensive content in Switzerland, as one user of the X platform learned after asking the Grok chatbot to produce a humiliating "roast" of Finance Minister Karin Keller-Sutter. The minister has filed a criminal complaint that could set a precedent in the fight against AI-generated misogyny and defamation. She is demanding accountability not only from the author of the prompt but also from the X platform itself, questioning its lack of mechanisms for blocking vulgar and hateful content.

The case strikes directly at the strategy of Elon Musk, who promotes Grok as a "non-woke" tool and encourages the generation of malicious responses. For users worldwide, the outcome of this dispute will be crucial: it will determine whether responsibility for AI hallucinations and insults rests solely with the person entering the prompt or also with the creators of the technology. If prosecutors find that X was negligent in protecting personal rights, large language models will have to undergo drastic changes to their safety filters. The global debate over the legal liability of chatbots is entering a phase of concrete lawsuits that could force tech giants to exercise greater control over what their algorithms say about people.

The line between technological innovation and the violation of personal rights is becoming increasingly thin, as evidenced by the latest legal conflict between Switzerland and xAI. The Swiss Finance Minister, Karin Keller-Sutter, has filed a criminal complaint regarding offensive content generated by the Grok bot. This incident, resulting from the "roasting" feature (malicious mocking) promoted by Elon Musk, raises key questions about the legal liability of artificial intelligence creators for the words spoken by their algorithms.

Swiss Minister Counterattacks Algorithmic Misogyny

The case gained momentum last month when Karin Keller-Sutter decided to take legal action after a user on the X platform asked Grok to "roast" the official. The output proved so vulgar and misogynistic that the Finance Ministry described it as "gross defamation of a woman." The Ministry emphasized in an official statement that such behavior cannot be considered normal or acceptable in the public sphere, regardless of whether the author of the words is a human or a machine.

The criminal complaint targets not only the anonymous user who issued the instruction to the bot but also the platform itself. Keller-Sutter is demanding that the prosecutor's office investigate whether X bears responsibility for failing to block outputs that are vulgar and violate personal dignity. Although the user deleted their post within two days, claiming it was merely a "technical exercise," Swiss law is strict on this point: intentionally publishing offensive material carries a penalty of up to three years in prison or a heavy fine.

Swiss Finance Minister Karin Keller-Sutter
Minister Karin Keller-Sutter demands that the creators of the Grok bot be held accountable.

The Provocative Strategy of Elon Musk and xAI

While European politicians demand greater control, Elon Musk seems to derive satisfaction from the controversy his product generates. Since the bot's debut, the billionaire has encouraged users to test the limits of Grok's "honesty," promoting it as the only "non-woke" tool on the market. Representatives of xAI, in interviews with media outlets including Fox News, boast that their chatbot lacks the political correctness filters that limit competing models such as ChatGPT or Claude.

This strategy, while legally risky, seems to be yielding business results. Data indicates that Grok's user base doubled after the vulgar "roasts" feature went viral. A similar situation occurred during the scandal related to the "nudify" feature, which generated images stripping people in photos—engagement on the platform rose drastically despite a wave of criticism from human rights organizations. Musk personally fueled interest by publishing AI-generated posts, suggesting that controversy is built into the company's business model.

Duty of Care vs. Algorithmic Freedom of Speech

A key point of the legal dispute is the so-called "duty of care." Prosecutors must determine whether the X platform knew or even assumed that its technology would be used to commit prohibited acts. If the Swiss justice system finds that X failed in its duties, Musk could be forced to drastically change Grok's safeguards, at least within Swiss territory.

  • Swiss Law: Provides penalties for defamation and violation of honor, even if posts are later deleted.
  • Platform Responsibility: The question is whether X is merely a conduit or an active publisher of AI-generated content.
  • International Standards: The European Union and the United Kingdom have regulations (e.g., the Online Safety Act) that provide grounds for claims regarding reputational damage caused by automated systems.

X and Grok Logo
The controversy surrounding Grok attracts millions of new users, calling into question the business ethics of xAI.

Global Pressure to Regulate Chatbots

The Swiss case is not an isolated incident. In the UK, the government sharply criticized Grok for generating "disgusting and irresponsible" content regarding football stadium disasters and player deaths. The British Department for Science, Innovation and Technology warned that it would act decisively if AI services do not provide safe experiences for users. Meanwhile, in the Netherlands, a court has already imposed fines related to image generation features, showing that Europe is starting to lose patience with the "no filters" policy.

Human rights researchers, such as Irem Cakmak, point to a broader social problem. Constant exposure to online abuse and gender bias in new technologies may discourage women from using AI tools. If artificial intelligence is perceived as a tool of systemic misogyny, it could have long-term effects on women's participation in economic and social life, deepening digital exclusion.

"There is a high chance of prosecuting the authors of such prompts, even if the posts are deleted," says criminal law professor Monika Simmler, referring to the Swiss investigation.

The fight for responsibility for Grok's words is just beginning. While xAI and X will likely defend themselves with arguments about freedom of speech and end-user responsibility, Karin Keller-Sutter's determination could set a precedent. If courts begin to treat algorithmic defamation on par with traditional defamation, the "Wild West" era of generative artificial intelligence may come to an end sooner than Elon Musk expects. The tech industry needs clear guidelines, but developing them in the face of deliberate provocation will be a painful and costly process for all parties involved.
