The Three Biggest AI Labs Just Agreed on Something. That Should Make You Think.

Enterprise AI  /  Governance

Three companies that compete for the same customers and the same talent just decided to cooperate. The reason tells you something important about how the AI race is actually being run.

Shashi Bellamkonda  ·  April 8, 2026  ·  shashi.co
3 labs coordinating  ·  1 non-profit vehicle  ·  0 prior precedent

In February, Anthropic published a report naming three Chinese AI companies (DeepSeek, Moonshot AI, and MiniMax) that had run more than 16 million automated exchanges through 24,000 fake accounts to extract Claude's capabilities. Anthropic blocked them. Then this week, Bloomberg reported that OpenAI, Google, and Anthropic are now sharing intelligence with one another to catch the same kind of attack across all three platforms. That is the part worth sitting with. These three companies do not share much of anything. They compete for the same enterprise budgets and recruit from the same talent pools. Getting them to coordinate took something that hurt all of them.

The vehicle is the Frontier Model Forum, a non-profit they co-founded with Microsoft in 2023. Until recently its public output was mostly policy statements and voluntary safety commitments. It is now being used to pass actual attack data between competitors. When one lab detects an adversarial pattern, it flags it for the others.
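To make that concrete, here is roughly what one shared record might contain. The schema below is invented for illustration; the Frontier Model Forum has not published the format it actually uses to pass attack data between members.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AttackIndicator:
    """A hypothetical shape for one lab's report of an extraction campaign.

    Invented for illustration: the Forum's real exchange format, if it
    has a fixed one at all, is not public.
    """
    reporting_lab: str                      # e.g. "anthropic"
    first_seen: datetime                    # when the pattern was detected
    account_count: int                      # fake accounts tied to the campaign
    prompt_fingerprints: list[str] = field(default_factory=list)  # hashes of templated prompts
    notes: str = ""
```

Even a minimal record like this would let a second lab search its own logs for the same prompt templates before the campaign scales up.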

Copying a model does not require breaking in

Distillation is a standard technique in AI development. A large model, the teacher, generates outputs that a smaller model, the student, learns to mimic. Labs do this internally all the time to make faster, cheaper versions of their own systems. Legal, widely used, uncontroversial.
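Mechanically, the legitimate version is just a training loss. Here is a minimal PyTorch sketch of the classic soft-target formulation; the function name and temperature value are my own choices, not any lab's actual recipe.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """Soft-target distillation loss (in the style of Hinton et al., 2015).

    The student is trained to match the teacher's softened output
    distribution instead of hard labels.
    """
    soft_targets = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student_log_probs, soft_targets, reduction="batchmean") * temperature ** 2
```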

What changed is the adversarial version. You do not need the model weights to copy a model's behavior. You query the public API at massive scale, collect the outputs, and train your own system on them. No break-in required. You just have to ask enough questions. U.S. officials estimate this costs Silicon Valley labs billions in lost profit each year, which is a number large enough to make competitors cooperate.

You do not need to steal a model to copy it. You just need to ask it enough questions.

The legal path here is weak, so the political one is doing the work

AI model outputs cannot be copyrighted under current U.S. law. Adversarial distillation is a terms-of-service violation, which means enforcement falls back on account bans, IP blocking, and output-format changes designed to degrade scraped training data. Anthropic has gone furthest, banning all Chinese-controlled companies from Claude access outright.
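To pick one speculative example of what an output-side countermeasure could look like: invisible per-account fingerprinting of responses, so scraped training data can later be traced back to the account that extracted it. None of the labs has disclosed its actual techniques; this sketch is my own.

```python
import hashlib

ZW = "\u200b"  # zero-width space, invisible in rendered text

def fingerprint_output(text: str, account_id: str, every_n_words: int = 7) -> str:
    """Embed an invisible, per-account marker into an API response.

    Purely illustrative. A zero-width character is inserted at word
    positions derived from the account ID, so a model trained on
    scraped outputs may reproduce a traceable pattern.
    """
    words = text.split(" ")
    seed = int(hashlib.sha256(account_id.encode()).hexdigest(), 16)
    offset = seed % every_n_words
    marked = [
        word + ZW if i % every_n_words == offset else word
        for i, word in enumerate(words)
    ]
    return " ".join(marked)
```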

The Trump administration's AI Action Plan has called for a formal industry information-sharing center to tackle exactly this, which is part of what the Frontier Model Forum is moving toward. The awkward part, which Bloomberg flags, is antitrust. When three companies that collectively dominate an industry start trading notes on competitors, regulators notice. The labs reportedly want legal clarity before they formalize the arrangement further.

Two things enterprise buyers should actually care about

Most organizations are not building foundation models. They are choosing which ones to build on. From that position, this story feels distant. It is not, for two reasons.

First, tighter distillation defenses mean tighter API controls generally. If you run high-volume API workloads against any of these platforms, expect more scrutiny on usage patterns. The countermeasures being built to catch adversarial scrapers will catch legitimate high-volume users in the same net.
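To see why, consider the crudest possible screen: a raw volume threshold. Everything in this sketch is invented for illustration, but the structural problem is real.

```python
def flag_high_volume_accounts(daily_request_counts, threshold=100_000):
    """Naive screen: any account above the threshold gets human review.

    `daily_request_counts` maps account_id -> API calls in the last 24h.
    The threshold is invented. The point: volume alone cannot separate
    an adversarial scraper from a legitimate batch workload, so both
    land in the same review queue.
    """
    return sorted(
        account for account, count in daily_request_counts.items()
        if count >= threshold
    )
```

Real platforms presumably weigh richer signals than raw volume, but any classifier tuned to catch scrapers at scale will produce false positives, and legitimate batch workloads are exactly what a false positive looks like.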

Second, Anthropic has been specific about what distilled models lose in the copying process: the safety guardrails. A distilled Claude is not a cheaper Claude with the same behavior. It is a Claude with the constraints removed. If any of your vendors or partners are running AI systems they describe as "inspired by" or "fine-tuned from" frontier models, that provenance question is worth asking directly.

A safety organization is now mainly protecting market share

The Frontier Model Forum launched in 2023 around AI safety coordination. That framing made sense then. The new use, sharing distillation attack data among competitors, is primarily about protecting the commercial value of frontier models. Those two things can coexist, and there is a real safety argument for keeping guardrails intact across distilled systems.

But the next time a lab invokes the Forum in a policy context as a safety body, it is fair to remember that its most active operational work right now is competitive defense. That does not disqualify the safety mission. It does complicate the framing.

The Question for Technology Leaders

If the model your enterprise depends on was partially built by studying a competitor's outputs, does that change your vendor risk calculation? And if three of the largest AI labs in the world need a non-profit coordination structure to defend their own intellectual property, what does that tell you about how well-understood the provenance of any frontier model actually is?

Sources
  1. Ghaffary, Shirin, and Maggie Eastland. "OpenAI, Anthropic, Google Unite to Combat Model Copying in China." Bloomberg, 6 Apr. 2026. bloomberg.com
  2. Frontier Model Forum. "About." frontiermodelforum.org
  3. Anthropic. "Protecting Claude: Adversarial Distillation Report." Feb. 2026. anthropic.com
  4. OpenAI. Memo to the House Select Committee on China. Feb. 2026. openai.com

shashi.co  ·  Strategy & Technology Analysis

Disclaimer: This blog reflects my personal views only. Content does not represent the views of my employer, Info-Tech Research Group. AI tools may have been used for brevity, structure, or research support. Please independently verify any information before relying on it.