The Verge AI · 5 min read

Judge sides with Anthropic to temporarily block the Pentagon’s ban


Cath Virginia / The Verge, Getty Images

Government retaliation against a technology company for public criticism is a textbook violation of the First Amendment – that is how Judge Rita F. Lin justified a landmark decision temporarily blocking the Pentagon in its dispute with Anthropic. The March 27, 2026 ruling halts the placement of the maker of the Claude model on a list of entities posing a supply chain risk. The conflict erupted when Anthropic refused to allow its AI to be used for mass surveillance and in autonomous lethal weapons, which drew an immediate, punitive reaction from the Department of Defense. The court found that the Pentagon exceeded its authority by retaliating against the company for its "defiant stance" in the media and its firm adherence to ethical red lines.

For the global technology sector, the message is that AI providers can set boundaries on the use of their tools without being summarily shut out of the market by the state. The practical implications are most immediate for Anthropic's business partners: the temporary legal protection lets dozens of companies continue working with the provider without fear of legal consequences flowing from a government boycott. The case may become a foundation for future rules on the autonomy of AI creators in their dealings with the military, and the freedom to decide how one's own technology may be used has just gained a powerful legal precedent.

In the world of technology, the line between ethics and national security is rarely so clear and yet so contentious. On March 27, 2026, Judge Rita F. Lin of the U.S. District Court for the Northern District of California issued a landmark ruling that strikes at the foundations of the Pentagon's current policy toward the AI sector. The court granted Anthropic a preliminary injunction, halting the government ban imposed on the company. This is the first such major blow to an administration that attempted to cut off the creators of the Claude model from federal contracts under the pretext of a supply chain threat.

The conflict that led to this legal battle is not just about money, although the stakes are massive – Anthropic estimates that threatened revenues could range from hundreds of millions to even several billion dollars. At the heart of the dispute are the so-called "red lines" that the company set for the government: a ban on using its technology for mass domestic surveillance and in lethal autonomous weapons systems capable of killing without human involvement in the decision-making process. The Pentagon, represented by the Department of War, deemed these restrictions unacceptable, triggering an avalanche of restrictions.

Retaliation instead of national security

Judge Lin's reasoning is remarkably harsh toward the defense department. The court found that blacklisting Anthropic as a "supply chain risk" did not stem from real technical premises but was a punishment for public criticism of the government's position. Documentation from the Department of War explicitly indicated that the designation occurred because of a "hostile media posture." The judge described this as "classic, unlawful retaliation in violation of the First Amendment" to the U.S. Constitution.

The court's decision will take effect within seven days, giving the company breathing room in an impasse that has lasted for weeks. Although a final verdict may not come for several months, Anthropic's current victory sends a clear signal to Silicon Valley: the government cannot arbitrarily destroy tech companies just because their ethical principles conflict with the vision of military commanders. During the hearings, Judge Lin noted that while the Pentagon has the right to choose its suppliers, going beyond a simple cessation of purchases and attempting to block the company's ability to cooperate with any government contractor is an action that exceeds the law.

Data center and AI infrastructure
The dispute over control of AI models in defense infrastructure is becoming a key point in the security debate.

The price of ethics in the shadow of arms contracts

It all began on January 9, when Secretary of Defense Pete Hegseth issued a memorandum ordering the inclusion of an "any lawful use" clause in all AI service contracts within 180 days. The new guidelines were to cover not only future agreements but also existing ones with giants such as OpenAI, xAI, and Google. Anthropic was the only one to issue a firm veto, refusing to allow Claude to become a tool in the hands of "killer robots."

The Pentagon's reaction was unprecedented. Hegseth announced on social media that no partner working with the military could conduct any commercial activity with Anthropic. This hit the company with a ricochet effect:

  • Dozens of external partners began inquiring about the right to terminate contracts with Anthropic.
  • Companies providing IT services to the military faced the specter of losing contracts simply for using tools like Claude Code.
  • Concerns arose that even suppliers of non-technical products (like toilet paper mentioned in court) could be caught up in the blacklist.

During the hearing, Judge Lin invoked the term "attempted corporate murder," suggesting that the government's actions were aimed not so much at protecting security as at completely paralyzing Anthropic's business.

The Claude model has become the center of a dispute over the limits of artificial intelligence autonomy in warfare.

Sabotage or technological sovereignty?

In its line of defense, the Department of Defense put forward an argument about "unacceptable risk to national security." The Pentagon claims that Anthropic could theoretically attempt to remotely disable its technology or change the model's behavior during military operations if it deemed the military was crossing the boundaries set by the company. This accusation of potential sabotage became the foundation for designating the company a supply chain threat.

Judge Lin, however, treated these claims with skepticism, demanding evidence that Anthropic actually retains such control over the model after delivering it to closed government systems. Anthropic representatives, including spokeswoman Danielle Cohen, emphasize that the company's goal is to build safe and reliable AI, not to fight the government, but that the lawsuit was necessary to protect its customers and partners. The clash shows that the era in which Silicon Valley supplied tools to the military without asking questions is over for good.

Anthropic's current success in court is merely the first battle in a war over who controls the "brains" of the modern military. If the court ultimately finds that the government cannot punish companies for their ethical principles, we will witness a permanent fracture in the Pentagon's relationship with the AI industry. The administration will have to choose: either accept the "red lines" of the creators of the most advanced models or be forced to invest in less capable but more compliant systems from suppliers who have no qualms. In a world where technological edge determines the outcome of conflicts, neither of these options seems comfortable for the Pentagon.

Source: The Verge AI