BBC Tech · Research · 5 min read

Judge rejects Pentagon's attempt to 'cripple' Anthropic

Pixelift Editorial

Photo: BBC Tech

A U.S. federal court has blocked an attempt to impose drastic restrictions on Anthropic, describing the Pentagon's actions as "classic retaliation for exercising free speech." Judge Rita Lin rejected government directives that ordered all federal agencies to immediately stop using the company's tools, including the popular Claude model.

The dispute erupted when Anthropic refused to sign a $200 million contract without guarantees that its technology would not be used for mass surveillance or the development of fully autonomous weapons. In response, the administration gave the company an unprecedented "supply chain risk" designation, previously reserved for entities from hostile nations. The court ruled that the attempt to "cripple" the company was politically motivated rather than merit-based, citing public statements by politicians calling the AI creators "leftist freaks."

For the global AI and creative technology market, this is a landmark signal: model creators are gaining real tools to protect themselves against political pressure that could force them to abandon ethical safety barriers. The decision means Anthropic can continue working with both the public and commercial sectors without fear of sudden market exclusion. The ruling sets an important precedent in the relationship between the state and Big Tech, protecting developers' right to decide the boundaries of their own technology's applications.

In the world of technology, where the line between national security and freedom of speech is growing ever thinner, an unprecedented clash has occurred between Silicon Valley and the Pentagon. Federal Judge Rita Lin has issued a landmark ruling that temporarily blocks the administration's attempts to cut Anthropic off from state contracts. The decision not only protects the interests of one of the most important players in the AI market but, above all, sheds light on a dangerous precedent of political pressure on technology creators.

The dispute, which came to a head in a California court, directly concerns tools such as Claude, an advanced language model that has become foundational to the operations of many government agencies and entities cooperating with the military. Judge Lin did not mince words in her ruling, stating that the government's actions were aimed at "crippling Anthropic" and "stifling public debate." This was a reaction to attempts to immediately enforce a ban on using the company's tools, which the court found bore the hallmarks of retaliation for exercising the right to free speech enshrined in the First Amendment.

Political labeling instead of substantive analysis

The conflict escalated when President Donald Trump and Secretary of Defense Pete Hegseth publicly attacked Anthropic, using rhetoric far removed from technical specifications. In official communications and public statements, administration representatives referred to the company as "woke" and a group of "left-wing lunatics." These phrases became key evidence in the case, suggesting that the government's motivations were not based on real security vulnerabilities, but on ideological misalignment.

The dispute between Anthropic and the Pentagon has become one of the most important topics in the technology industry.

The Pentagon's most controversial move was designating Anthropic as a "supply chain risk". This is an extremely rare designation, historically reserved for companies originating from countries considered adversaries of the United States. For the first time in history, an American enterprise received such a label, which in practice means recognizing its products as dangerous to state infrastructure. However, the court noted that if security were truly the issue, the Department of Defense would have simply stopped using the Claude model instead of deploying such heavy legal and reputational weapons.

The limits of "any lawful use"

At the root of the conflict lie negotiations over the extension of a $200 million contract. The Pentagon demanded the inclusion of a provision stating it could use Anthropic's tools for "any lawful use." Although this sounds standard, Dario Amodei, CEO of Anthropic, saw a dangerous loophole in this phrasing. The company feared that their technology could be used to build fully autonomous weapons systems or mass surveillance of citizens – which contradicts their mission to create safe and ethical artificial intelligence.

The refusal to accept these terms led to a stalemate that became public in February. Secretary Hegseth set a deadline for the company to sign the new agreement, and after it expired, a campaign to discredit the provider began. Anthropic responded with a lawsuit, arguing that the government's actions violated its right to free expression and harmed its business. Judge Lin's current ruling allows the company to continue its government work until the trial is fully resolved.

Anthropic's language models will remain in government use until the end of the court battle.

Technological sovereignty called into question

Anthropic's situation is a litmus test for the entire AI industry. If the government can arbitrarily label companies as a "supply chain risk" simply because they do not agree to specific ethical conditions for the use of their models, we face a vision of the nationalization of technological thought. Anthropic, despite being a commercial entity, has positioned itself from the start as a company that prioritizes safety (AI Alignment) over quick profit, which in this case led to a direct clash with the war machine.

It is worth noting several key aspects of this dispute:

  • Freedom of speech vs. code: The court recognized that opposition to how technology is used can be a form of constitutionally protected expression.
  • The label precedent: Using the "supply chain risk" mechanism against a domestic company changes the rules of the game in public procurement.
  • Claude as a standard: Despite the conflict, Anthropic's tools are so deeply embedded in government structures that their immediate removal would be a logistical nightmare.

An Anthropic spokeswoman emphasized that the company's goal remains productive cooperation with the government, but on terms that ensure "safe and reliable AI." This diplomatic approach contrasts with the sharp tone of Judge Lin, who directly pointed to the Pentagon's political motives. The battle for Claude is just the beginning of a broader discussion about who actually controls the most powerful AI models: the engineers who created them, or the politicians who want to harness them to implement defense doctrines.

Anthropic's win in this round is a clear signal that attempts to forcibly impose contract terms through the state apparatus and ideological insults can be effectively stopped by independent courts. The tech industry has gained a moment of respite, but the tension between creators' ethics and military needs will only grow as AI takes over more critical functions in state management. By backing Anthropic into a corner, the Trump administration inadvertently triggered a debate that will define the relationship between Big Tech and government for decades to come.

Source: BBC Tech