The AI rivalry between the U.S. and China just escalated.
Anthropic has accused three Chinese AI labs — DeepSeek, Moonshot AI, and MiniMax — of creating more than 24,000 fake accounts to extract knowledge from its Claude model.
According to Anthropic, the labs generated over 16 million exchanges using a technique known as distillation — a common AI training method used to transfer knowledge from a large model into a smaller one.
But when applied to a competitor’s model, distillation becomes something else:
Competitive cloning.
Anthropic says the activity targeted Claude’s most differentiated capabilities:
Agentic reasoning
Tool use
Coding
The scale varied by company:
DeepSeek allegedly conducted over 150,000 exchanges focused on logic, alignment, and censorship-safe responses.
Moonshot AI reportedly ran more than 3.4 million exchanges aimed at reasoning, coding, data analysis, and computer-use agents.
The accusations come as the U.S. debates stricter enforcement of export controls on advanced AI chips — a policy designed to slow China’s AI development.
Distillation itself isn’t controversial. Labs routinely use it internally to create cheaper, faster models.
The tension arises when it’s used across competitors.
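Mechanically, distillation is simple: the student model is trained to match the teacher's full output distribution rather than just its top answer. A minimal sketch of the standard soft-target loss, using plain Python (the logits and vocabulary here are illustrative, not taken from any lab's actual setup):

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax: a higher temperature flattens the
    distribution, exposing more of the teacher's 'dark knowledge'."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) over temperature-softened distributions.
    The student minimizes this, pulling its predictions toward the
    teacher's. Zero when the distributions match exactly."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical logits over a 4-token vocabulary:
teacher = [4.0, 1.0, 0.5, 0.2]
student_far = [0.1, 3.0, 0.2, 0.1]   # disagrees with the teacher
student_near = [3.8, 1.1, 0.4, 0.3]  # roughly matches the teacher

# A student that mimics the teacher scores a much lower loss:
print(distillation_loss(teacher, student_far) >
      distillation_loss(teacher, student_near))  # prints True
```

When the "teacher" is your own large model, this is routine cost-cutting. When the teacher is a competitor's model queried through thousands of accounts, the same math becomes the extraction technique at the center of the accusations.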
Earlier this month, OpenAI also accused DeepSeek of using distillation to mimic its products.
DeepSeek previously drew global attention with its open-source R1 reasoning model, which approached U.S. frontier performance at significantly lower cost. The company is now preparing to release DeepSeek V4, which reportedly competes strongly in coding benchmarks against both Claude and ChatGPT.
This isn’t just about scraped outputs.
It’s about geopolitical leverage.
As the U.S. tightens chip exports to limit China’s access to cutting-edge compute, accusations of model “mining” suggest another battlefield: data extraction.
If compute is restricted, knowledge transfer becomes the shortcut.
The AI race is no longer just about building better models.
It’s about who controls the infrastructure — and who can replicate it fast enough to catch up.