White House Drafts Workaround for Anthropic AUP Restrictions on National-Security AI Use

The White House is reportedly drafting an executive guidance document that would let federal agencies route around Anthropic's terms of service for certain national-security workloads, according to leaked memos circulating among AI policy staff. The proposed workaround uses an obscure "essential government function" carve-out that hasn't been invoked since the 1990s.

The trigger: Anthropic's recently updated AUP explicitly bars use of Claude for "lethality assessments, autonomous targeting recommendations, or related kinetic-decision pipelines." Several DoD and intel agencies argue the policy is incompatible with how they want to deploy frontier models internally, and they don't want to negotiate.

What the workaround actually proposes

The draft, marked CUI (Controlled Unclassified Information) but circulating widely, leans on a 1996 OSTP precedent that lets agencies override commercial software EULAs when "no functionally equivalent alternative" exists for a designated national-security workload. The mechanism: agencies would self-certify that Anthropic models are uniquely suitable, then deploy them under federal acquisition rules that treat the AUP as unenforceable against the government.

Anthropic, predictably, sees this as the U.S. government picking and choosing which contract terms to honor. The company has reportedly retained outside counsel and is exploring whether to suspend federal API access for any agency that invokes the carve-out.

Why this is escalating now

Multiple agencies are running pilot programs with Claude Sonnet 4.5 and Opus 4.7 under existing GSA contracts. The pilots have been generally successful — strong performance on document analysis, code review, and operational planning. The friction is downstream: agencies want to extend pilots into workflows that brush up against Anthropic's "weapons systems" and "decision support for use of force" prohibitions.

Anthropic's stance has hardened over the last six months. The company explicitly framed its AUP as a "binding commitment to not enable mass-casualty harm" — language that doesn't bend to procurement officers. The White House is now treating that as a national-security obstacle rather than a commercial one.

The OpenAI/Google contrast

OpenAI quietly removed its weapons-related restrictions in early 2025 and now sells directly to defense customers. Google signed a $100M+ Pentagon deal earlier this week despite a 600-employee protest letter. Neither company has the same red lines Anthropic has drawn — and that contrast is the entire reason this fight is happening with Anthropic specifically.

Federal procurement officers privately tell the same story: Anthropic produces the best model for analytical work, but the AUP makes it the hardest vendor to deploy. The workaround is essentially the government saying, "We want the model, not the policy."

My Take

This is the moment the AI safety conversation gets real. For two years Anthropic has been the lab that takes its policies seriously enough to leave money on the table; that's literally Dario Amodei's pitch to investors and to staff. If the White House successfully overrides those policies, the message to other AI labs is unambiguous: drawing red lines is performative, and the government will route around them. The interesting countermove is that Anthropic has leverage too: it can refuse to ship updated weights, refuse to extend API access, or quietly degrade federal-tier service. None of that is in the EULA. All of it is in the company's discretion. I'd watch what Anthropic does in the next 30 days, not what the White House drafts.

FAQ

Has the White House actually issued the guidance? Not yet. The current document is a draft circulating for review. Final issuance could be weeks away, or the draft could be quietly shelved if Anthropic and the administration find a backchannel deal.

Can the government legally override a commercial EULA? In limited cases, yes, but the authority has never been tested at this scale or visibility. Litigation is essentially guaranteed if Anthropic resists.

Why doesn't the government just use OpenAI? Many agencies already do. The Anthropic-specific fight exists because some agencies have standardized internal workflows on Claude and don't want to migrate.

The Bottom Line

The Biden-era "AI safety dialogue" fiction is officially over. The Trump administration is willing to lean on 1990s legal precedent to compel a private AI lab to serve workloads it has explicitly refused. Anthropic's response is the story to watch, not the executive document itself.
