Anthropic Draws a Line With the Department of War — No Mass Surveillance, No Autonomous Weapons

Anthropic CEO Dario Amodei has published a detailed statement outlining the company's position in an escalating standoff with the Department of War over AI safeguards — and it reads like a company that knows it might lose a massive government contract but has decided to draw a line anyway.
What Anthropic Will Do
The statement goes to considerable lengths to establish Anthropic's national security credentials. The company was the first frontier AI firm to deploy models on classified government networks, the first to work with the National Laboratories, and the first to provide custom models for national security customers. Claude is described as "extensively deployed" across the Department of War for intelligence analysis, operational planning, cyber operations, and more.
Anthropic also highlights its decision to cut off service to firms linked to the Chinese Communist Party — reportedly forgoing several hundred million dollars in revenue — and its work shutting down CCP-sponsored cyberattacks that attempted to abuse Claude.
What Anthropic Won't Do
Despite this track record, Amodei identifies two use cases the company refuses to support:
Mass domestic surveillance. While Anthropic supports lawful foreign intelligence missions, it draws the line at using AI for mass surveillance of American citizens. Amodei argues that current law hasn't caught up with AI's ability to assemble scattered public data into comprehensive profiles of individuals — automatically and at massive scale.
Fully autonomous weapons. Anthropic acknowledges that partially autonomous weapons are "vital to the defense of democracy" and that fully autonomous systems may eventually be necessary. But the company argues that today's frontier AI systems aren't reliable enough to power weapons that take humans entirely out of the loop. Anthropic offered to work with the Department of War on R&D to improve reliability, but says the offer was declined.
The Department of War's Response
According to Amodei, the Department of War has taken an all-or-nothing position. Officials have stated they will contract only with AI companies that agree to "any lawful use" of their models and remove their safeguards. The department has threatened to remove Anthropic from its systems, designate the company a "supply chain risk" — a label normally reserved for U.S. adversaries — and invoke the Defense Production Act to compel removal of the safeguards.
Amodei pointedly notes that these threats are "inherently contradictory": one brands Anthropic a security risk while the other treats Claude as essential to national security.
The Bottom Line
Anthropic isn't backing down. The statement ends with a commitment to facilitate a smooth transition if the Department chooses to move to another provider, while keeping its models available on current terms for as long as needed.
This is a significant moment in the relationship between AI companies and the U.S. military. Whether Anthropic's position holds — or whether commercial pressure eventually forces a compromise — will set a precedent for how much say AI companies have over how their technology is used in warfare.