Pentagon Labels Anthropic a Supply Chain Risk, Says Claude Would "Pollute" Defense Supply Chain
The Pentagon has labeled Anthropic, maker of Claude AI, a supply chain risk — the first time an American company has received this designation, which has historically been reserved for foreign adversaries like Chinese and Russian firms. Defense Department CTO Emil Michael said on CNBC that Claude would "pollute" the defense supply chain because Anthropic has "a different policy preference" baked into its AI models.

What Emil Michael Said

"We can't have a company that has a different policy preference that is baked into the model through its constitution, its soul, its policy preferences, pollute the supply chain so our warfighters are getting ineffective weapons, ineffective body armor, ineffective protection," Michael told CNBC's Squawk Box.

The designation will require defense contractors and vendors to certify that they don't use Claude in their work with the Pentagon. Michael insisted the move is "not meant to be punitive" and dismissed Anthropic's claim that the government had actively told companies not to use its models.

Anthropic Fights Back

Anthropic sued the Trump administration on Monday, calling the government's actions "unprecedented and unlawful." The company said it is being harmed "irreparably" and that hundreds of millions of dollars' worth of contracts are in jeopardy. Anthropic was founded in 2021 by researchers who left OpenAI and has published a "constitution" that shapes Claude's behavior around safety, ethics, and helpfulness.

The Irony

Even after the blacklisting, Claude is still being used to support U.S. military operations in Iran. Palantir CEO Alex Karp confirmed that his company, a major defense contractor, continues to use Claude. Michael acknowledged the DOD cannot "just rip out" Anthropic's technology overnight and said a transition plan is in place.

The Bottom Line

The Pentagon is essentially saying that an AI model trained to be ethical and safe is a national security risk because those ethical guardrails might interfere with military applications. This is the first time the U.S. government has treated an American AI company the way it typically treats Huawei or Kaspersky. Whether this is a legitimate security concern or political retaliation against a company that built safety into its products is the question everyone should be asking.