US Federal Agencies and Congress Secretly Test Claude Mythos Despite Trump Administration Ban

At least two US federal agencies and three congressional committees have quietly reached out to Anthropic to test Claude Mythos — the AI model capable of solving 73% of expert cybersecurity challenges — despite an executive order from the Trump administration banning federal agencies from using it. The covert testing, reported by Politico, reveals a growing split between White House AI policy and the operational instincts of the agencies responsible for national security.

Why Federal Agencies Are Defying the Ban

The Trump administration's ban on Mythos was framed around dual-use concerns — the same capabilities that make the model useful for defensive security could theoretically assist offensive cyberattacks. But the agencies reaching out to Anthropic appear to operate on a different calculus: if hostile nation-states are already using AI for cyberattacks, the US cannot afford to unilaterally disarm its own AI-powered defenses. The US Treasury CIO, Sam Corcos, has reportedly directed his cybersecurity team to prepare for AI threats and is seeking Mythos access this week.

Congressional Committees Join In

The involvement of three congressional committees alongside two executive branch agencies suggests the pushback against the Mythos ban extends well beyond individual agency risk officers. Committees with cybersecurity and intelligence oversight responsibilities appear to be actively exploring Mythos as a potential tool for understanding AI-enabled threats — and may be building a legislative case for overriding or modifying the executive ban.

The Intelligence Community's AI Dilemma

The secret Mythos testing reflects a broader tension within the US national security establishment: government AI policy is struggling to keep pace with both commercial AI capabilities and adversarial AI threats. China's military and intelligence services are known to be deploying AI tools for cyber operations, and there is growing concern within the IC that overly restrictive domestic AI policies are creating an asymmetric disadvantage. The agencies quietly testing Mythos are, in effect, arguing that capability cannot wait for policy.

Anthropic's Delicate Position

Anthropic is now caught between a White House ban and direct requests from federal agencies and congressional committees that want access anyway. Publicly, Anthropic has emphasized that Mythos was built with safety guardrails and has cooperated with the UK's AI Safety Institute on testing. Quietly engaging with agencies that are bypassing an executive order puts the company in a sensitive spot, given how publicly it has emphasized its commitment to responsible AI governance.

The Bottom Line

The quiet federal testing of Claude Mythos despite a White House ban represents a significant crack in the executive branch's ability to unilaterally set AI policy. When national security agencies decide a ban is too costly to follow, its effective authority is already limited. Expect this tension to escalate as Claude Mythos' capabilities become better understood — and as adversarial AI threats make the cost of inaction increasingly clear.