Judge Says Pentagon Is Trying to "Cripple" Anthropic Over AI Weapons Dispute

During a heated court hearing on Tuesday, a federal judge said the Pentagon's treatment of Anthropic looks like "an attempt to cripple" the AI company. Judge Rita Lin questioned whether Anthropic is being punished for refusing to let the military use Claude for autonomous weapons and mass surveillance.
The case centers on the Department of Defense's extraordinary decision in early March to designate Anthropic a "supply chain risk" — a label normally reserved for foreign adversaries, now applied to an American AI company for the first time.
What Happened
The conflict began in September 2025 when Anthropic and the Pentagon were negotiating Claude’s deployment on the military’s GenAI.mil AI platform. Talks stalled because Anthropic demanded restrictions on how the DOD could use its technology — specifically prohibiting fully autonomous weapons and mass surveillance of Americans.
The Pentagon insisted on “unfettered access” for all lawful purposes. When Anthropic wouldn’t budge, things escalated rapidly. In February, President Trump posted on Truth Social ordering federal agencies to “immediately cease” all use of Anthropic’s technology.
Then came the supply chain risk designation, which doesn’t just block the government from using Claude — it forces defense contractors like Amazon, Microsoft, and Palantir to certify they don’t use Claude in any military work.
The Judge’s Sharp Questions
Judge Lin was visibly skeptical of the government’s arguments. When DOD lawyer Eric Hamilton claimed the Pentagon worried Anthropic might “sabotage or subvert IT systems” in the future, Lin pushed back hard.
“What I’m hearing from you is that it’s enough if an IT vendor is stubborn and insists on certain terms and asks annoying questions, then it can be designated as a supply chain risk because they might not be trustworthy,” Lin said. “That seems a pretty low bar.”
She also noted that the government’s concerns could be addressed by simply not using Claude, rather than applying a stigmatizing label that threatens Anthropic’s entire commercial business.
Billions at Stake
Anthropic signed a $200 million contract with the Pentagon in July 2025 and was the first AI lab to deploy its technology across the agency’s classified networks. The company has argued in filings that without an injunction, it could lose billions in business — not just government contracts, but commercial deals where the “supply chain risk” label scares away enterprise customers.
The judge said she expects to issue a ruling on Anthropic’s motion for a preliminary injunction in the coming days.
The Bottom Line
This is a defining moment for AI governance. Anthropic drew a line on ethical AI use — no autonomous weapons, no mass surveillance — and the government responded by trying to destroy its business. Whether you think Anthropic is being principled or naive, the precedent being set here is chilling: disagree with how the military wants to use your technology, and you get labeled a national security threat. Judge Lin appears to see through it. The question is whether that matters.