TechCrunch AI · 9 min read

New court filing reveals Pentagon told Anthropic the two sides were nearly aligned — a week after Trump declared the relationship kaput


Samyukta Lakshmi/Bloomberg / Getty Images

The Pentagon claimed that Anthropic posed an "unacceptable threat to national security," but new court documents reveal that a week before publicly terminating the contract, both parties were close to reaching an agreement. The AI firm filed two sworn declarations in federal court in California, arguing that the government's position rests on technical misunderstandings and on allegations that never surfaced during months of negotiations. The declarations accompany Anthropic's lawsuit against the Department of Defense, which will be heard by Judge Rita Lin in San Francisco on Tuesday, March 24.

The conflict illustrates growing tensions between the administration and artificial intelligence companies. For the AI industry, it signals uncertainty in government contracting and potential obstacles to public-private cooperation on security-related projects, and it suggests that political decisions may override earlier technical agreements.

A contract worth tens of millions of dollars between the Pentagon and Anthropic turned into a legal battle at a pace that might surprise even seasoned observers of the tech industry. What began as a promising collaboration between a leading artificial intelligence company and the world's largest military procurement office transformed into a conflict filled with accusations of bad faith, technical errors, and — most intriguingly — stark contradictions in government positions. Newly revealed court documents show that just one week before the Trump administration publicly announced that the relationship with Anthropic was "dead," the Pentagon informed the company that both sides were "nearly aligned" in their positions.

This discovery changes the entire narrative surrounding this dispute and addresses one of the key questions of our times: who really controls access to the most advanced AI technologies in the context of national security? Are these technical decisions or political ones? Are these genuine security concerns or a power play between government institutions? Documents filed by Anthropic in federal court in California reveal a far more complicated picture than the one emerging from public statements.

When the Pentagon said "we're almost in agreement" — exactly one week before the breakup

The message contained in the new court documents is chilling in its simplicity: the Pentagon and Anthropic were not as far apart as subsequent public statements suggested. According to testimony filed by Anthropic, Department of Defense employees communicated to the company that both sides' positions were "nearly aligned" — this is not language used when one side considers the other a threat to national security. This is the language of compromise, negotiation, moving toward a solution.

Yet just seven days after this message, the Trump administration publicly declared the relationship with Anthropic "kaput," the word attributed to official sources. That one-week window is crucial to understanding what really happened. This was not an evolution of position driven by new technical information; the Pentagon did not suddenly discover fresh security threats. What changed was politics: the direction of the wind in the administration, the decision-makers involved, and the pressure exerted by influential figures in the new administration.

Anthropic argues in its declarations that the Pentagon never raised many of the accusations it now levels at the company. This is crucial: if the Pentagon had genuine technical security concerns, why wait until litigation to reveal them? Why not address them during the months of negotiations that preceded the conflict? The inconsistency in communication between the parties suggests that something other than security was driving the change of course.

Technical misunderstandings as a pretext for political maneuvering

One of Anthropic's main claims in the court documents is that the Pentagon's security concerns are based on fundamental technical errors in understanding what the company actually offers and how its AI models function. This is not merely a discussion of technical details — it is an accusation that the government position is built on incorrect assumptions.

Specifically, Anthropic claims that the Pentagon misunderstood the nature and capabilities of the Claude models, as well as potential security risks associated with their use in a military context. The company argues that if the Pentagon actually understood the technical details of what Anthropic proposes, there would be no basis for claiming it constitutes an "unacceptable threat to national security."

This accusation is particularly serious because it points to a deep gap in technical competence among government decision-makers dealing with AI. If the Pentagon truly does not understand the technology it is trying to regulate, it raises questions about the government's ability to make wise decisions in this field. How can one assess security threats if one does not understand the technology? This is a question that will resonate beyond the courtroom, throughout the tech industry and in policy circles concerned with AI security.

Silent concerns and unraised claims — the Pentagon's strategy

Anthropic points out something that may be even more alarming than the Pentagon's change of position: many of the accusations the Pentagon now raises were never brought up during actual negotiations between the parties. This is a key difference between a negotiation process and a legal process. In negotiations, if you have concerns, you raise them. If you have technical problems, you discuss them. If you have reservations about contract terms, you articulate them.

The fact that the Pentagon waited until the litigation phase to raise many of these accusations suggests they may be a strategic maneuver rather than genuine security concerns. In court, a party can raise arguments that were never discussed in negotiations, because it has access to the full apparatus of the law and the ability to reinterpret facts. In negotiations, the parties must be more direct: both sides know exactly what has and has not been put on the table.

This distinction is important for understanding the dynamics of this conflict. If the Pentagon had genuine security concerns, it should have raised them earlier. The fact that it did not, and is raising them only now, suggests they may be concerns generated after the fact and tailored to political pressure, rather than genuine threats identified during the normal due diligence process.

Anthropic's position: Security through transparency and collaboration

In its declarations, Anthropic does more than defend itself against the accusations; the company also articulates a positive vision of what collaboration between AI companies and government on security matters should look like. Rather than resisting regulation and oversight, Anthropic says it actively worked to implement safeguards that would satisfy the Pentagon.

The company claims it was open to negotiations regarding contract terms, that it was ready to collaborate on security issues, and that it actually conducted constructive discussions with the Pentagon over months. This position is important because it positions Anthropic as a company that is not trying to avoid government oversight, but rather working within reasonable security limits.

This contrasts with how the conflict might be perceived publicly, as a battle between a tech company seeking independence and a government seeking control. The reality, according to Anthropic's declarations, is more nuanced: the company does not oppose oversight; it opposes accusations it considers baseless and rooted in a misunderstanding of the technology.

Implications for the AI industry and the future of government contracts

This conflict between Anthropic and the Pentagon matters far beyond the two parties involved. It is being watched by the entire AI industry, as well as governments around the world considering how to regulate and collaborate with artificial intelligence companies. If the Pentagon can withdraw from a contract at the last minute, basing this on accusations that were never raised during negotiations, what message does that send to other companies?

The message is clear: uncertainty. If the government can change its position without warning, based on political shifts rather than technical changes, it is difficult for AI companies to plan long-term collaborations with government agencies. This could lead to a situation where companies are less inclined to engage in government projects because the political risk is too high.

On the other hand, this conflict also reveals the need for better communication between government and the tech industry on AI security issues. If the Pentagon does not understand the technology it is trying to regulate, the problem does not lie with Anthropic — the problem lies in government structure and in the lack of sufficient technical knowledge among decision-makers. This is a problem that will need to be solved if government is to effectively collaborate with AI companies.

Timing and politics: When security decisions become political decisions

The timing of this dispute is crucial to understanding what is really happening. The conflict erupted at a moment when the new Trump administration began taking power and making decisions about technology policy. During this same period, various political forces within the administration had different views on how to approach collaboration between government and AI companies.

Some in the administration may see Anthropic as a company that should be supported — because it is a leading U.S. AI company that could be important for geopolitical competition with China. Others may see any engagement with Anthropic as too risky. These internal tensions may explain the Pentagon's change of position.

What is particularly interesting is that the change of position happened so quickly — just one week after the Pentagon told Anthropic that both sides were "nearly aligned." This is not enough time for a genuine change in the assessment of security threats. This is enough time for a change in political priorities, a change in government personnel, or a change in political pressure from influential figures in the administration.

Hearing before Judge Lin: What happens next?

The hearing scheduled for Tuesday, March 24, before Judge Rita Lin in San Francisco will be a crucial moment in this conflict. It will be the judge's first opportunity to hear arguments from both sides and to decide whether the Pentagon was entitled to withdraw from the contract in the way it did.

The judge will need to resolve several key issues: Did the Pentagon actually have grounds to withdraw from the contract? Were the security accusations legitimate, or were they a pretext for political maneuvering? Did Anthropic have a right to legal protection in this case? The answers to these questions will matter not only for Anthropic and the Pentagon, but also for the entire AI industry and for the future of relations between government and tech companies.

Regardless of the outcome, this conflict has already changed the landscape of collaboration between government and the AI industry. It has shown that politics can quickly overshadow rational security considerations, that communication between government and tech companies is insufficient, and that better mechanisms are needed to resolve such disputes. The legal battle between Anthropic and the Pentagon is not just a conflict between two institutions — it is a test of the system to see whether it can rationally assess AI security threats in an era when politics and technology are becoming increasingly intertwined.

Source: TechCrunch AI