Meta told to pay $375m for misleading users over child safety

Photo: BBC Tech
As many as 16% of Instagram users reported exposure to unsolicited nudity or sexual content within a single week. That was one of the figures behind a landmark verdict in New Mexico, where a jury ordered Meta to pay $375 million in damages for misleading the public about child safety. The jury found that Mark Zuckerberg's company violated the state's unfair trade practices law, concluding that the Facebook and Instagram platforms knowingly exposed minors to sexual predators and harmful material. Key evidence in the seven-week trial included testimony from whistleblower Arturo Béjar, a former Meta engineer, and internal company documents confirming that management was aware of the scale of the problem. Although Meta plans to appeal and is promoting new features such as Teen Accounts, the verdict marks a turning point for users worldwide: the end of impunity for recommendation algorithms that automatically pushed dangerous content to the youngest users. It also sends a clear signal to Big Tech that protecting minors cannot be merely a marketing slogan, and that moderation failures will carry billion-dollar costs in a coming wave of similar lawsuits. Meta must now prove that child safety is a genuine priority rather than a reaction to an image crisis.
A court ruling in New Mexico strikes at the foundations of the Menlo Park giant's brand. A jury has found that Meta, the owner of Facebook, Instagram, and WhatsApp, is liable for misleading users about child safety on its platforms. The financial penalty amounts to 375 million dollars (approximately 279 million pounds), one of the most severe blows ever dealt over content moderation policy and the protection of minors in the history of social media.
This decision is not merely a matter of a high fine, but primarily a legal precedent that could open the door to further lawsuits worldwide. New Mexico Attorney General Raul Torrez described the verdict as "historic," emphasizing that for the first time, a state has successfully sued Mark Zuckerberg's corporation in an area as sensitive as the safety of the youngest internet users.
Recommendation algorithms under investigators' scrutiny
The key argument of the prosecution that convinced the jury was the way recommendation algorithms function. According to trial documentation, these tools, instead of protecting, actively "steered" young users toward sexual content, materials depicting child exploitation, and even exposed them to contact with sexual predators and human traffickers. The automatic content selection mechanisms, which constitute the business strength of Instagram, became evidence against the company in this case.
During the seven-week trial, internal company documents were revealed that cast a shadow over the corporation's official narrative. Prosecutors demonstrated that Meta violated the state's Unfair Practices Act. The total penalty amount stems from thousands of individual violations of this act, each valued at the maximum rate of 5,000 dollars. This is a clear signal that courts are ceasing to treat moderation errors as incidental stumbles and are beginning to view them as a systemic product flaw.
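The size of the award follows directly from that per-violation structure. As a back-of-the-envelope check (a sketch only, assuming every violation was assessed at the $5,000 statutory maximum, which the trial record may not fully support), the total implies tens of thousands of individual violations:

```python
# Implied violation count behind the $375M award, assuming each
# violation was assessed at the $5,000 statutory maximum.
# (The article says only "thousands" of violations, so this is an
# upper-bound illustration, not a figure from the trial record.)
total_award = 375_000_000
max_per_violation = 5_000
implied_violations = total_award // max_per_violation
print(implied_violations)  # 75000
```

Even if many violations were assessed below the maximum, the count can only be higher, which underscores the court's view of the failures as systemic rather than incidental.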
Whistleblower testimony and the dark side of Instagram
An extremely significant element of the trial was the testimony of Arturo Béjar, a former engineering leader at Meta who left the company in 2021. Béjar, appearing as a whistleblower, presented the results of his own experiments conducted on Instagram. They showed that underage users were regularly bombarded with sexualized content. Furthermore, the engineer shared a personal, dramatic story — his own daughter received sexual solicitations from a stranger via the platform.
- 16% of Instagram users reported receiving unwanted content containing nudity or sexual activity within just one week.
- Internal company research confirmed the scale of the problem; however, employee warnings were ignored by management.
- The prosecution charged Meta leadership with knowing about the dangers and yet lying to the public about the real level of safety.
The company is defending itself against these charges by pointing to recently implemented safeguards. Meta representatives emphasize that Teen Accounts, introduced in 2024, give young people more control over their digital environment, and that a feature was recently launched to notify parents when a child searches for content related to self-harm. Nevertheless, a company spokesperson announced an appeal against the verdict, saying the corporation stands behind its record on protecting teens online.
The beginning of an avalanche of compensation lawsuits
The ruling in New Mexico is just the tip of the iceberg of legal problems for Zuckerberg's empire. Concurrently, a trial is underway in Los Angeles brought by a young woman who claims she was intentionally addicted to platforms such as Instagram and Google-owned YouTube. Her line of attack focuses on the very architecture of these services, designed to maximize screen time at the expense of user mental health.

Thousands of similar cases are currently pending in American courts. Attorney General Torrez's argument that "Meta executives knew their products harmed children, disregarded warnings, and lied" is becoming a common denominator for many accusations. The tech industry faces a critical moment — the era of impunity for algorithms that generate engagement at any cost seems to be coming to an end under the pressure of hard evidence from within the corporations themselves.
From an analytical perspective, this verdict changes the rules of the game for the technology and social media sector. Meta will be forced not only to pay a massive penalty but, above all, to fundamentally revise its recommendation systems if it wants to avoid further multi-million dollar damages. The New Mexico case proved that internal research and employee warnings, once ignored, can become the most powerful weapon in the hands of prosecutors, and that "user safety" must stop being a marketing slogan and become a measurable technical parameter of the product.