OpenAI, Anthropic, and Google Team Up to Stop China From Copying Their AI Models

Three of the most powerful AI companies in the world — OpenAI, Anthropic, and Google — are now formally sharing threat intelligence with each other to detect and combat a growing problem: Chinese AI companies illegally copying their models.

The coordination is happening through the Frontier Model Forum, an industry body the three companies founded in 2023 alongside Microsoft. What began as a voluntary AI safety initiative has now activated one of its core functions: secure, real-time information sharing about threats to frontier AI systems.

What Is Adversarial Distillation?

The specific threat being targeted is called adversarial distillation — a technique in which an actor systematically queries a frontier AI model in ways designed to extract and replicate its capabilities. Done at scale, this lets the actor train a rival model on the outputs of a system it has no direct access to, effectively cloning it without ever seeing its weights.
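To make the mechanism concrete, here is a minimal, deliberately toy sketch of ordinary distillation: harvesting a "teacher" model's outputs to build a supervised training set for a "student." The `query_teacher` function is a hypothetical stand-in for a frontier model's API (here just a rule-based stub), not any real endpoint; the actual adversarial variants are not publicly documented.

```python
# Toy illustration of distillation: collect (prompt, teacher-response)
# pairs and use them as supervised training data for a student model.

def query_teacher(prompt: str) -> str:
    # Hypothetical teacher model. A real attacker would call a frontier
    # model's API here; this stub just upper-cases the prompt.
    return prompt.upper()

def build_distillation_set(prompts):
    """Harvest teacher outputs to form a (prompt, response) training set."""
    return [(p, query_teacher(p)) for p in prompts]

dataset = build_distillation_set(["explain transformers", "summarize rlhf"])
# Each pair would become one supervised example for the student model.
```

The point of the sketch is that nothing in the loop requires access to the teacher's weights — only its input/output behavior, which is exactly what an API exposes.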

This isn't theoretical. The concern crystallized in early 2025 when DeepSeek released its R1 reasoning model, sparking an investigation into whether the Chinese startup had improperly exfiltrated knowledge from U.S. AI systems by querying them at industrial scale. The companies could not conclusively prove it — but the pattern was suspicious enough to prompt a coordinated response.

The Frontier Model Forum's First Real Test

The Forum was designed with exactly this kind of coordination in mind. Its information-sharing mechanism was built to allow member companies to flag anomalous usage patterns — unusually structured queries, API abuse, suspicious usage spikes — without having to share proprietary model weights or training data.
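The companies have not disclosed how they score "anomalous usage," but one simple family of approaches is statistical outlier detection on account-level aggregates — exactly the kind of signal that can be shared without exposing model weights or raw conversations. The sketch below, with illustrative names throughout, flags accounts whose query volume sits far above the fleet average using a basic z-score rule; it is a minimal sketch under that assumption, not a description of any lab's real system.

```python
from collections import Counter
from statistics import mean, pstdev

def flag_suspicious_accounts(query_log, z_threshold=3.0):
    """Flag accounts whose query volume is an extreme outlier.

    query_log: iterable of account ids, one entry per API call.
    Returns the set of account ids more than z_threshold population
    standard deviations above the mean per-account query count.
    """
    counts = Counter(query_log)
    mu = mean(counts.values())
    sigma = pstdev(counts.values())
    if sigma == 0:  # all accounts identical: nothing stands out
        return set()
    return {acct for acct, c in counts.items()
            if (c - mu) / sigma > z_threshold}

# Ten ordinary accounts with one call each, one account with 1,000 calls.
log = [f"acct{i}" for i in range(10)] + ["heavy"] * 1000
print(flag_suspicious_accounts(log))  # {'heavy'}
```

Real detection would look at far richer signals — query structure, topic coverage, timing — but the design point stands: only derived statistics (counts, scores, flagged account identifiers) need to cross company boundaries, not proprietary data.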

What's notable about the current effort is that it represents the Forum moving from policy commitments to operational action. For the first time, the three companies are actively cross-referencing usage signals to identify potential distillation attempts, particularly from accounts that may be affiliated with Chinese AI labs or companies subject to U.S. export controls.

Why Now?

The timing reflects a broader hardening of U.S. AI policy toward China. Export controls on advanced chips have tightened. The Commerce Department has added multiple Chinese AI companies to its entity list. And Washington has made clear that it views AI supremacy as a national security issue — not just an economic one.

For OpenAI, Anthropic, and Google, the collaboration is both a competitive necessity and a geopolitical statement. By sharing threat intelligence, they're reinforcing the argument that frontier AI development should remain the province of companies operating under U.S. law — and that attempts to circumvent those rules won't go undetected.

The Limits of Information Sharing

Industry coordination can only go so far. Proving that a Chinese model was trained using distillation from U.S. systems is technically difficult and legally murky. The companies can flag suspicious patterns, but enforcement ultimately depends on regulators, lawyers, and in some cases, intelligence agencies.

Still, the mere fact that three fierce competitors are sharing threat data signals a meaningful shift. AI was supposed to be a winner-take-all race — not a domain where the leading labs cooperate. The Chinese copying problem appears to have been serious enough to change that calculus.