The Download: AI health tools and the Pentagon’s Anthropic culture war

Photo: MIT Tech Review
As many as 80% of doctors in the US are already using generative AI tools for clinical documentation, a sign of how fast the healthcare sector is adopting the technology. While medicine chases efficiency, the Pentagon faces an internal dispute over its collaboration with Anthropic. The controversy stems from the fact that the startup, which positions itself as a leader in "safe and ethical artificial intelligence," has made its Claude models available to American defense and intelligence agencies, sparking a debate in Silicon Valley over the limits of compromise between ethical values and government contracts.

For users and patients worldwide, the key development is the emergence of validation standards for AI in healthcare. Organizations such as the Coalition for Health AI (CHAI) are working on a certification system to ensure that algorithms do not replicate racial biases or diagnostic errors. In practice, this means that in the coming years patients can expect consultations where the doctor gives them full attention while a "digital scribe" works in the background. It is a signal that AI is no longer merely an experiment: it is becoming a foundation of critical state and medical infrastructure, one that requires rigorous oversight and transparency.
The Medical Chatbot Arms Race
The scale of involvement by major tech players in the health sector is unprecedented. Microsoft, leveraging its partnership with OpenAI, is integrating medical functions directly into its cloud ecosystems, aiming to streamline the work of medical personnel. Amazon, meanwhile, is focusing on direct consumer contact, combining AI capabilities with its extensive pharmacy infrastructure and telemedicine services. Each of these players wants to become the patient's first point of contact with the healthcare system.

The specifics of these solutions are worth a closer look. Models such as GPT-4 (powering OpenAI's tools) and dedicated systems from Google (like Med-PaLM) show an impressive ability to pass medical exams and analyze complex medical literature. The problems start when moving from theory to clinical practice: despite their vast knowledge, chatbots still struggle with so-called hallucinations, generating information that sounds credible but is entirely wrong, and sometimes dangerous for the patient.

Industry analysis points to a key challenge: personalization. Standard AI models are trained on general datasets, which can make their responses generic. For AI to actually assist in diagnosis, it needs access to secure, anonymized medical data for a specific patient. That raises further questions about privacy and the security of sensitive data, which is becoming a new currency in the hands of technology corporations.
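To make that grounding idea concrete, here is a minimal sketch in Python of how a general-purpose model might be steered with patient-specific context before answering a clinical question. The `RecordStore` class, the prompt wording, and the sample record are hypothetical illustrations for this article, not any vendor's actual API.

```python
# Minimal sketch: grounding a general-purpose model in a specific
# (de-identified) patient record before asking a clinical question.
# `RecordStore` and the record below are illustrative stand-ins.

from dataclasses import dataclass

@dataclass
class RecordStore:
    """Toy stand-in for a secure store of de-identified patient records."""
    records: dict[str, str]

    def retrieve(self, patient_id: str) -> str:
        # A real system would enforce access control, audit logging,
        # and de-identification here before anything reaches a model.
        return self.records.get(patient_id, "")

def build_grounded_prompt(store: RecordStore, patient_id: str, question: str) -> str:
    """Inject patient-specific context so the answer is not generic."""
    context = store.retrieve(patient_id)
    return (
        "You are assisting a clinician. Answer ONLY from the context below; "
        "if the context is insufficient, say so instead of guessing.\n\n"
        f"Patient context:\n{context}\n\nQuestion: {question}"
    )

store = RecordStore(records={
    "px-001": "58yo, type 2 diabetes, eGFR 48, on metformin 1000mg twice daily."
})
prompt = build_grounded_prompt(store, "px-001", "Any concerns with the current metformin dose?")
print(prompt)  # This prompt would then be sent to a chat-completion endpoint.
```

Instructing the model to answer only from supplied context, and to admit when that context is insufficient, is one common mitigation for hallucinations, though it reduces them rather than eliminating them.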
The Pentagon and the Culture War over Anthropic
While the civilian health sector debates the ethics of algorithms, a very different battle is playing out behind the scenes of American national defense. The Pentagon is increasingly interested in technology from Anthropic, a startup known for its "constitutional AI" approach, and the government's interest in this particular company has become the seed of a culture war within the tech sector over how "value-oriented" AI should be, and whose values those should be.

For the Pentagon, predictability and security are key. Anthropic promotes an AI development model with built-in mechanisms that limit harmful behavior, but in a military context, terms like "ethics" and "safety" mean something quite different than they do for medical chatbots. The dispute is over whether AI tools should be worldview-neutral or whether they should reflect the specific strategic and political goals of the client.

This situation sheds light on a broader problem: the dual-use nature of AI. The same technology that helps a doctor interpret X-ray results faster can help the military optimize decision-making on the battlefield. Anthropic, trying to maintain its image as a company focused on human safety, finds itself in a difficult position, balancing lucrative government contracts against its original ideals.
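What those "built-in mechanisms" might look like can be sketched as a critique-and-revise loop, in which a draft answer is checked against written principles before it ships. The `generate` function and the sample principles below are hypothetical stand-ins; this illustrates the general constitutional pattern, not Anthropic's actual implementation.

```python
# Illustrative sketch of a constitutional critique-and-revise loop.
# `generate` is a hypothetical stand-in for any text-generation call;
# this is NOT Anthropic's actual training or inference code.

PRINCIPLES = [
    "Does the response assist with planning violence or weapons? If so, refuse.",
    "Does the response expose private personal data? If so, remove it.",
]

def generate(prompt: str) -> str:
    # Placeholder: a real system would call a language model here.
    return f"[model output for: {prompt[:40]}...]"

def constitutional_revise(user_prompt: str) -> str:
    """Draft an answer, then critique and revise it against each
    written principle in turn before returning it."""
    draft = generate(user_prompt)
    for principle in PRINCIPLES:
        critique = generate(f"Critique this response against the principle "
                            f"'{principle}':\n{draft}")
        draft = generate(f"Revise the response to address the critique.\n"
                         f"Critique: {critique}\nResponse: {draft}")
    return draft

print(constitutional_revise("Summarize today's intelligence briefing."))
```

The culture-war question is visible right in the `PRINCIPLES` list: whoever writes those lines decides what the system will and will not do for its client.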
Technical Challenges and Trust Barriers
The implementation of AI in medicine runs into barriers that computing power alone cannot overcome. Key limitations include:
- Interpretability (the "black box" problem): Doctors need to know why a system suggests a particular diagnosis, yet most LLMs cannot explain the logical path that led to a specific conclusion.
- Training data bias: AI models learn from historical data that often encodes racial or gender biases, which can lead to inequalities in the quality of care (see the sketch after this list).
- Legal liability: When a medical chatbot errs, it remains unclear who bears responsibility: the programmer, the company providing the model, or the doctor who acted on its suggestion.
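To make the bias point concrete, here is a minimal sketch of the kind of subgroup audit a certification effort like CHAI's might require: comparing false-negative rates of a diagnostic classifier across demographic groups. The records and group labels below are fabricated purely for illustration.

```python
# Minimal sketch of a subgroup fairness audit: compare false-negative
# rates of a diagnostic classifier across demographic groups.
# The records below are fabricated for illustration only.

from collections import defaultdict

# (group, true_label, predicted_label); 1 = condition present
results = [
    ("group_a", 1, 1), ("group_a", 1, 1), ("group_a", 1, 0), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 1, 0), ("group_b", 1, 1), ("group_b", 0, 0),
]

def false_negative_rates(rows):
    """FNR per group: missed positive cases / actual positive cases."""
    positives = defaultdict(int)
    missed = defaultdict(int)
    for group, truth, pred in rows:
        if truth == 1:
            positives[group] += 1
            if pred == 0:
                missed[group] += 1
    return {g: missed[g] / positives[g] for g in positives}

rates = false_negative_rates(results)
print(rates)  # e.g. {'group_a': 0.33..., 'group_b': 0.66...}
gap = max(rates.values()) - min(rates.values())
print(f"FNR gap across groups: {gap:.2f}")  # a large gap flags unequal care quality
```

A large gap in missed diagnoses between groups is exactly the kind of replicated historical bias that certification schemes aim to catch before deployment.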