Anthropic Excluded EU Regulators from Claude Mythos Oversight — Governance Gap Widens

When Anthropic restricted access to its most powerful model — Claude Mythos Preview — to a curated coalition of approximately 50 defensive security organizations, it made a quiet but consequential choice: European regulators were largely left out. The UK's AI Security Institute conducted an independent evaluation and published its findings, but the major EU bodies that have been building the world's most comprehensive AI regulatory framework received minimal advance access and no formal oversight role.
What European Regulators Are Saying
The exclusion has drawn criticism from officials at the European AI Office, the body established under the EU AI Act to supervise general-purpose AI models, including those designated as posing systemic risk. The concern is structural: if frontier AI labs can restrict access to powerful models to a self-selected group, even for ostensibly safety-focused reasons, regulators who are democratically accountable lose the ability to independently verify the safety claims behind that restriction.
Anthropic's stated rationale was precisely that the model is too dangerous for general release, which makes the governance gap even more pointed. A model restricted due to offensive cyber capabilities should, by that logic, be subject to more regulatory scrutiny, not less.
The UK vs. EU Split
The asymmetry between UK and EU oversight is notable. The UK's AISI was integrated into the evaluation process, published detailed findings, and has a framework for ongoing collaboration with Anthropic. The EU's AI Office, despite its supervisory authority over general-purpose AI models under the AI Act, was largely working from public disclosures rather than direct access.
This is partly a jurisdictional issue: Anthropic is a US company, the UK's AISI has a direct evaluation relationship with Anthropic, and EU enforcement of the AI Act is still being operationalized. But the optics are poor for a company that markets itself on responsible AI development.
The Governance Question the Industry Needs to Answer
Anthropic's Mythos decision crystallizes a tension that applies to the entire frontier AI industry: when a lab decides a model is too dangerous for public release, who has the authority to verify that assessment? Right now, the answer is essentially: whoever the lab chooses to tell. That is not a governance framework — it is voluntary disclosure with safety branding.
The EU AI Act is designed to change that, but enforcement is still being built. In the interim, labs are making consequential access decisions with minimal external accountability.
The Bottom Line
The EU's exclusion from Mythos oversight is not a scandal — it is a symptom of how far governance infrastructure lags behind AI capability. Anthropic made a defensible product decision and involved the regulators it could. But as AI models cross into territory where they can autonomously execute corporate network intrusions, "we showed the UK" is not a sufficient answer for the body of law that governs 450 million people.