Chinese AI Firms Caught Strip-Mining Claude Model in "Industrial-Scale" Data Heist, Anthropic Claims
Anthropic accused three Chinese artificial intelligence companies of orchestrating coordinated campaigns to extract proprietary information from its Claude model, joining OpenAI in flagging what U.S. firms describe as systematic attempts to replicate American AI capabilities at drastically lower cost.
The San Francisco-based AI company said Monday that DeepSeek, Moonshot AI, and MiniMax flooded Claude with large volumes of specially crafted prompts designed to train their own models—a technique known as "distillation," in which a smaller AI system learns to mimic a larger, more expensive one by studying its outputs. For finance chiefs navigating AI procurement decisions, the allegations underscore a growing tension: the models they're evaluating may themselves be built on intellectual property pirated from competitors.
Anthropic characterized the activity as coordinated "distillation attack" campaigns, carried out despite service restrictions that explicitly bar commercial access to Claude from China. According to Anthropic's statement, the three firms circumvented those geographic blocks by using commercial proxy services to mask their origins and maintain access.
The technique at issue—distillation—has become particularly attractive for smaller teams with limited resources. Rather than training massive models from scratch (a process that can cost tens of millions of dollars in compute), distillation allows companies to create "student" models that learn from "teacher" models by studying their outputs. The result: AI systems that perform nearly as well as their more sophisticated counterparts, but at a fraction of the development cost.
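The teacher-student mechanics described above can be illustrated with a toy sketch. This is a hypothetical, simplified example (a linear "teacher" classifier and gradient descent, not anything resembling Anthropic's or the accused firms' actual systems): the student never sees the teacher's weights, only its query responses, which is what makes distillation-by-API possible.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# The "teacher": a secret linear classifier (4 features, 3 classes).
# In the real-world analogy, this is the proprietary frontier model.
W_teacher = rng.normal(size=(4, 3))

# Step 1: query the teacher with many inputs and record only its outputs,
# the equivalent of flooding an API with crafted prompts.
X = rng.normal(size=(2000, 4))
teacher_probs = softmax(X @ W_teacher)

# Step 2: train a "student" from scratch to match those soft outputs
# (gradient descent on cross-entropy against the teacher's predictions).
W_student = np.zeros((4, 3))
for _ in range(500):
    student_probs = softmax(X @ W_student)
    grad = X.T @ (student_probs - teacher_probs) / len(X)
    W_student -= 0.5 * grad

# The student now reproduces the teacher's behavior on unseen inputs,
# despite never having access to the teacher's internals.
X_new = rng.normal(size=(200, 4))
agreement = np.mean(
    softmax(X_new @ W_teacher).argmax(1) == softmax(X_new @ W_student).argmax(1)
)
```

The economics follow directly from the sketch: the student's training cost is a function of the number of queries, not of whatever it cost to build the teacher in the first place.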
For U.S. companies like Anthropic and OpenAI, which have invested billions in training frontier models, the concern extends beyond intellectual property theft. These firms worry about ceding competitive advantage to rivals who can reproduce their performance without bearing the enormous upfront costs of model development. If Chinese companies can effectively clone American AI capabilities through distillation, the economic moat that justified massive capital expenditures begins to erode.
The timing of Anthropic's disclosure is notable. OpenAI issued similar complaints recently about Chinese firms engaging in comparable behavior, suggesting the practice may be more widespread than previously understood. The pattern points to what American AI companies view as coordinated industrial espionage rather than isolated incidents.
Yet Anthropic's allegations drew immediate scrutiny online, where observers noted the company has itself used distillation techniques to train proprietary models. The distinction Anthropic appears to draw is between legitimate distillation (using one's own models or licensed data) and what it characterizes as unauthorized extraction from commercial services in violation of terms of service.
The episode raises uncomfortable questions for finance leaders evaluating AI vendors. If distillation can produce comparable results at lower cost, what exactly are enterprises paying for when they license expensive frontier models? And if geographic restrictions can be circumvented through proxy services, how enforceable are the compliance frameworks vendors promise?
What remains unclear from Anthropic's statement is the scale of the alleged extraction, how the company detected the coordinated campaigns, and what remediation measures it has implemented beyond the public accusation. The company did not say whether it has pursued legal action or engaged U.S. government agencies over the alleged activity.