OpenAI Restricts Cyber Tool 11 Days After Publicly Criticizing Anthropic for Restricting Mythos

OpenAI restricted access to its Cyber tool — a security-research-focused AI agent — Thursday, citing "potential for misuse" concerns. The move comes 11 days after OpenAI publicly criticized Anthropic for placing similar restrictions on its Mythos research tool, calling such moves "disappointing for the open AI research community." The optics are uncomfortable; the substantive question is what's actually changed.
Cyber is a research-tier OpenAI product launched in February 2026 — an AI agent specifically trained for security research workflows: vulnerability discovery, exploit development, and adversarial testing. The tool is now restricted to verified academic researchers and security firms with established responsible-disclosure programs. General API access has been revoked.
The hypocrisy framing
The timeline is awkward:
April 18: Anthropic restricted Mythos (its security-research tool) to verified researchers, citing concerns about exploit development by malicious actors.
April 19: Greg Brockman publicly criticized the Anthropic restriction, describing open access to security research tools as "essential for the AI safety community" and saying OpenAI "has no plans to restrict similar tools."
April 30 (today): OpenAI restricts Cyber, citing exactly the same concerns Anthropic cited 12 days ago.
The straightforward read: OpenAI didn't have a principled position; they had a marketing position. When the same concerns Anthropic raised affected OpenAI's tool, OpenAI made the same restrictive call. The 12-day gap between the two restrictions reflects either a delayed recognition of the risk or a competitive jab OpenAI couldn't sustain once its own tool faced the same scrutiny.
What "concerning use" actually means
OpenAI's announcement cites three specific concerns:
Mass-scale vulnerability scanning. Cyber's automated vulnerability discovery, when used at scale by attackers, accelerates exploit development cycles. Defenders' patching cadence can't keep up.
Exploit-development assistance. Cyber can take a discovered vulnerability and help write a working exploit, lowering the barrier from "vulnerability researcher" to "actual attacker."
Targeted social engineering. Cyber's reasoning about specific organizations and individuals enables precision phishing and impersonation attacks at scale.
These are exactly the concerns Anthropic cited about Mythos. Both labs were clearly aware of the risks; both are now restricting access. The substantive question of "should AI labs restrict security tools" appears to have a single right answer in 2026: yes, with verified-researcher carveouts.
The broader context
The dual-use AI security tool problem has been quietly building since 2024. Frontier-model security capabilities have crossed thresholds that matter:
Vulnerability discovery: Frontier models can find non-trivial vulnerabilities in real codebases at scale. Defenders can't outscan attackers using the same tools.
Exploit development: The model-aided exploit-writing process compresses from days to hours.
Phishing personalization: AI-generated spearphishing achieves 5-7x higher click-through rates than non-personalized attacks.
These aren't speculative; they're documented. Anthropic and OpenAI have both run internal red-team exercises showing the capability. The restriction calls follow that data, even if the public framing has been confused.
My Take
OpenAI's restriction is the right call, just embarrassingly inconsistent with their public criticism of Anthropic 11 days earlier. The pattern suggests something I've been worried about for a while: the major AI labs increasingly use "openness" as a marketing position to score points against competitors, with no real intention of being more open in practice. Anthropic has been criticized for being closed; OpenAI markets openness; in actual practice, both are equally restrictive on dual-use security tools.

The lesson for the broader AI safety conversation: don't take any AI lab's stated openness commitments at face value. Look at what they actually deploy, not what they say in marketing.

The bigger question is whether dual-use security tools should exist at all in commercial form, and the honest answer is probably no, at least not at the capability levels frontier labs are now reaching. Verified-researcher access is a workable middle ground; commercial sale of frontier security tools is increasingly hard to justify.
FAQ
Was Cyber widely used? Roughly 4,000 customers had API access. The restriction primarily affects security-research firms and freelance researchers who had self-attested rather than gone through verification.
Will Anthropic respond to OpenAI's reversal? Anthropic has so far declined to comment publicly on OpenAI's flip. Privately, sources say Anthropic considers the matter resolved.
What about open-source security AI tools? Multiple open-source projects (CyberSecEval, AutoCVE, etc.) provide similar capabilities without restriction. Restricting commercial tools doesn't eliminate the underlying capability.
The Bottom Line
OpenAI restricted Cyber for the same reasons it publicly criticized Anthropic over just 11 days earlier. The restriction is substantively right; the marketing framing was substantively wrong. AI labs' "openness" stances are increasingly competitive marketing rather than principled commitments.