🧠 BREAKING: U.S. AI safety firm Anthropic says multiple Chinese AI companies, including DeepSeek, Moonshot AI, and MiniMax, ran industrial-scale “distillation” campaigns on its Claude model — generating millions of interactions via ~24,000 fraudulent accounts to extract capabilities for their own models.
🔎 What Anthropic Alleges
The operations involved generating over 16 million exchanges with Claude to illicitly “distill” its advanced reasoning, coding, and tool-use capabilities.
Anthropic says the activity was unauthorized and violated its terms of service.
Anthropic says it traced the campaigns with “high confidence” using IP, metadata, and infrastructure signals.
The three labs are accused of using proxy services and fake accounts to evade access restrictions.
🧩 What “Distillation” Means Here
Distillation is a legitimate technique in which a smaller model is trained on outputs from a larger one. But Anthropic claims these campaigns weren't benign: it says they were an attempt to shortcut years of research by extracting Claude's capabilities.
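For readers unfamiliar with the mechanics, here is a minimal, illustrative sketch of classic knowledge distillation in PyTorch. The toy models, data, and hyperparameters are hypothetical, chosen only to show the idea of a student matching a teacher's softened outputs; this is not a description of how the alleged campaigns operated, which reportedly involved generating text from Claude via API accounts.

```python
# Illustrative sketch only: a small "student" network learns to match the
# softened output distribution of a larger, frozen "teacher" network.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical toy models; real LLM distillation would fine-tune a transformer
# student on teacher-generated text rather than use tiny MLPs like these.
teacher = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 10))
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0  # softens the teacher's distribution

for step in range(100):
    x = torch.randn(64, 32)              # stand-in for real input data
    with torch.no_grad():
        teacher_logits = teacher(x)      # teacher stays frozen
    student_logits = student(x)

    # KL divergence between softened distributions -- the core of distillation.
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```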
This kind of extraction is a growing flashpoint in the AI race, where access controls and IP protection are increasingly strained.
🛰️ Geopolitical & Security Context
Anthropic does not offer Claude commercially in China and says it restricts access for Chinese-owned firms worldwide on national security grounds.
Beyond commercial rivalry, the company warns that distilled models lacking U.S. safety guardrails could be repurposed for surveillance, cyber operations, or disinformation.
🪪 Reactions So Far
None of the named Chinese firms have publicly responded to the allegations.
This follows similar claims by other U.S. AI labs that Chinese players have sought to replicate capabilities by training on Western model outputs.
#Anthropic #DeepSeek #ClaudeAI #AIRace #ArtificialIntelligence