Washington D.C.
Tuesday, February 24, 2026

Anthropic Alleges Massive AI Distillation Campaign by Chinese Labs

Allegations of a large-scale AI distillation campaign have intensified tensions in the escalating U.S.–China technology rivalry. American artificial intelligence company Anthropic alleges that three Chinese laboratories orchestrated a coordinated effort to extract sensitive model outputs. The company claims the activity relied on roughly 24,000 fraudulent user accounts, which, according to investigators, generated more than 16 million interactions with Anthropic’s Claude chatbot.

Anthropic identified the entities as DeepSeek, Moonshot AI, and MiniMax. Company officials assert that the labs coordinated their activity over weeks and months, allegedly targeting Claude’s most advanced reasoning and coding capabilities. Anthropic believes the effort was aimed at replicating high-value system intelligence.

Jacob Klein, Anthropic’s head of threat intelligence, said analysts detected unusual traffic patterns. Investigators tracked IP correlations, request metadata, and infrastructure indicators inconsistent with legitimate consumer usage. Consequently, the company concluded the activity reflected a deliberate AI Distillation Campaign rather than normal experimentation. Klein stated the scale and persistence of the queries indicated organized extraction attempts.

Distillation refers to training smaller models using outputs from more advanced systems. Frontier laboratories frequently use this technique internally to create lighter versions of their own models. However, Anthropic argues that unauthorized distillation lets outsiders shortcut years of reinforcement learning research, and the company warns that copied systems may lack built-in safety guardrails.
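In its legitimate, internal form, distillation typically means training a "student" model to match a "teacher" model's full output distribution rather than just its final answers. A minimal sketch of the core idea, in plain Python with illustrative function names (temperature-scaled softmax plus a KL-divergence loss; not any lab's actual implementation):

```python
import math

def softmax(logits, temperature=1.0):
    """Turn raw logits into probabilities, softened by a temperature."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between softened teacher and student distributions.

    The student is penalized for diverging from the teacher's
    probabilities across *all* outputs, which is what transfers the
    teacher's behavior so efficiently.
    """
    p = softmax(teacher_logits, temperature)  # teacher "soft targets"
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

teacher = [3.0, 1.0, 0.2]
# A student that mirrors the teacher incurs zero loss;
# a mismatched one incurs a positive loss to be minimized in training.
print(distillation_loss(teacher, teacher))          # 0.0
print(distillation_loss(teacher, [0.2, 1.0, 3.0]))  # positive
```

A higher temperature softens both distributions, exposing the teacher's relative preferences among wrong answers, which is much of what makes distillation effective.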

Anthropic further cautioned that stripped-down models could feed military or surveillance applications. The company emphasized risks involving cyber operations, disinformation campaigns, and mass monitoring tools. Nevertheless, it said investigators found no direct evidence linking the Chinese government to coordination efforts. Proxy services that resell access to American AI tools reportedly operate openly in China.

The allegations surfaced as Washington strengthens export controls limiting advanced semiconductor access. Policymakers have focused heavily on restricting high-performance chips used to train frontier models. Yet Klein argued reinforcement learning and output extraction represent a separate vulnerability layer. He stressed that computing power remains critical, but defensive strategies must address broader ecosystem risks.

Anthropic has shared its findings with U.S. government agencies and industry partners. Meanwhile, other firms have reported similar concerns about large-scale distillation practices. OpenAI recently informed lawmakers about alleged extraction attempts involving its ChatGPT system. Additionally, Google reported prompt-based campaigns targeting its Gemini models.

Together, these developments suggest AI distillation is becoming an emerging battleground. Companies increasingly view repeated model querying as a strategic national security challenge, and as competition accelerates, firms and governments face mounting pressure to protect proprietary artificial intelligence capabilities.
