US Draws Up Strict AI Rules Requiring Companies to Allow Any Lawful Government Use

Trump Administration Wants AI Companies to Drop Their Guardrails for Government Work

The Trump administration is drawing up strict new rules for civilian artificial intelligence contracts that would require AI companies to allow “any lawful” use of their models when working with the federal government, according to the Financial Times. The move comes amid an ongoing standoff between the Pentagon and Anthropic over how the military can use AI technology.

The Anthropic-Pentagon Breakdown

The new rules emerge from a broader battle over AI ethics in government. Anthropic, the maker of Claude, had been negotiating a $200 million contract with the Department of Defense. The deal collapsed after the two sides couldn’t agree on what restrictions, if any, would apply to the military’s use of Anthropic’s AI models.

When those talks fell apart, the Pentagon turned to OpenAI instead. But the story didn’t end there — recent reports from both Bloomberg and the Financial Times indicate that Anthropic CEO Dario Amodei has quietly resumed negotiations with Pentagon official Emil Michael, trying to find a middle ground on acceptable use terms.

What the New Rules Would Mean

The draft rules being developed by the General Services Administration (GSA) — which manages over $116 billion in federal contracts — would fundamentally change the relationship between AI companies and the government. Under the proposed framework, companies selling AI to federal agencies would need to permit any lawful government use of their technology, effectively preventing them from imposing their own acceptable use policies on government buyers.

This is a direct shot at companies like Anthropic that have built their brand on “responsible AI” and maintain usage policies that restrict how their models can be deployed — including limitations on military applications, surveillance, and weapons development.

The Current Procurement Landscape

The federal government doesn’t acquire AI through a single process. When agencies order through the GSA Schedule, they inherit whatever terms GSA negotiated at the master level. Downstream agencies have limited authority to modify those terms. Both OpenAI and Anthropic were added to the GSA Multiple Award Schedule in 2025, offering their tools at pre-negotiated prices, including symbolic $1 deals to get their software into government hands.

Many agencies also acquire AI capabilities as add-on features to existing enterprise software (like Microsoft Copilot or Google Gemini), where AI-specific terms are buried in broader license agreements that are difficult to renegotiate.

The Bottom Line

This is a classic power play. The US federal government is the world’s largest customer, and it’s telling AI companies: either accept our terms or lose the contract. For companies like Anthropic that have staked their reputation on safety guardrails, this creates an impossible choice: compromise on the principles that differentiate you from competitors, or walk away from hundreds of millions of dollars in government revenue. OpenAI has already shown a willingness to work with the Pentagon without restrictions. If these rules take effect, Anthropic will either have to follow suit or watch its competitors eat its lunch on the most lucrative AI contracts on earth.