AsiaTokenFund
Anthropic Exposes 16M Query Theft Campaign by Chinese AI Labs

By Tony Kim (aggregated; see source) · February 23, 2026, 18:32 · Blockchain

Anthropic reveals DeepSeek, Moonshot, and MiniMax ran industrial-scale distillation attacks using 24,000 fake accounts to steal Claude AI capabilities.





Anthropic dropped a bombshell Tuesday, publicly naming three Chinese AI laboratories—DeepSeek, Moonshot, and MiniMax—as perpetrators of coordinated campaigns to steal Claude’s capabilities through over 16 million fraudulent API exchanges.

The attacks used approximately 24,000 fake accounts to circumvent Anthropic’s regional access restrictions and terms of service. One proxy network alone managed more than 20,000 simultaneous fraudulent accounts, mixing distillation traffic with legitimate requests to evade detection.
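To see why mixing traffic frustrates naive detection: a detector that flags accounts by the share of distillation-like requests in their total volume can be evaded simply by padding with legitimate queries. A toy sketch of that evasion, where the 30% threshold and the detector itself are invented for illustration:

```python
def flagged(suspicious: int, legitimate: int, threshold: float = 0.30) -> bool:
    """Naive per-account detector: flag an account if distillation-like
    traffic exceeds `threshold` of its total volume.
    The threshold is illustrative, not any lab's actual setting."""
    total = suspicious + legitimate
    return total > 0 and suspicious / total > threshold

# A pure distillation account is trivially caught...
assert flagged(1000, 0)
# ...but padding the same 1,000 distillation queries with enough
# legitimate traffic slips the account under the threshold.
assert not flagged(1000, 3000)
```

The attacker extracts the same number of answers either way; only the ratio changes, which is why per-account ratio checks alone are weak.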

The Numbers Tell the Story

MiniMax led the assault with 13 million exchanges targeting agentic coding and tool orchestration. Moonshot followed with 3.4 million exchanges focused on computer-use agent development and reasoning capabilities. DeepSeek’s campaign, while smaller at 150,000 exchanges, employed particularly sophisticated techniques—including prompts designed to make Claude articulate its internal reasoning step-by-step, essentially generating chain-of-thought training data on demand.

Anthropic traced several DeepSeek accounts directly to specific researchers at the lab through request metadata analysis.

Why This Matters Beyond Corporate Espionage

The timing here isn’t coincidental. OpenAI publicly accused DeepSeek of distilling ChatGPT just three days earlier on February 21. Google’s Threat Intelligence Group flagged increased distillation activity on February 16, including a campaign using over 100,000 prompts to replicate Gemini’s reasoning abilities.

What makes this particularly concerning? Anthropic argues these attacks undermine U.S. export controls on advanced chips. Foreign labs can effectively bypass innovation requirements by extracting capabilities from American models—and they need those restricted chips to run distillation at scale anyway.

“Illicitly distilled models lack necessary safeguards,” Anthropic warned, noting stripped-out protections could enable “offensive cyber operations, disinformation campaigns, and mass surveillance” by authoritarian governments.

The Hydra Problem

Anthropic described the infrastructure enabling these attacks as “hydra cluster” architectures—sprawling networks with no single point of failure. Ban one account, another spawns immediately. The proxy services reselling Claude access made detection far harder by distributing traffic across Anthropic’s API and third-party cloud platforms simultaneously.
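The “hydra” framing maps naturally onto a graph problem: treat accounts as nodes and shared infrastructure signals (a common proxy IP, prompt-template hash, or billing token) as edges, and whole clusters surface as connected components. A minimal sketch of that idea, using union-find, with all account and signal names invented:

```python
from collections import defaultdict

class UnionFind:
    """Minimal union-find with path compression."""
    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[rb] = ra

def cluster_accounts(observations):
    """observations: list of (account_id, signal) pairs, where a signal is
    any shared artifact (proxy IP, template hash, payment fingerprint).
    Accounts sharing any signal end up in the same cluster."""
    uf = UnionFind()
    by_signal = defaultdict(list)
    for account, signal in observations:
        by_signal[signal].append(account)
    for accounts in by_signal.values():
        for other in accounts[1:]:
            uf.union(accounts[0], other)
    clusters = defaultdict(set)
    for account, _ in observations:
        clusters[uf.find(account)].add(account)
    return [sorted(c) for c in clusters.values()]
```

Banning a single account leaves the rest of its cluster alive; acting on whole clusters at once is what actually cuts off a hydra head for good.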

When Anthropic released a new Claude model during MiniMax’s active campaign, the lab pivoted within 24 hours, redirecting nearly half of its traffic to capture the latest capabilities. That kind of operational agility suggests these aren’t opportunistic attacks but sustained, well-resourced operations.

Anthropic’s Countermeasures

The company outlined several defensive measures: behavioral fingerprinting systems to detect distillation patterns, strengthened verification for educational and startup accounts (the most commonly exploited pathways), and model-level safeguards designed to degrade output quality for illicit extraction without affecting legitimate users.
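Anthropic hasn’t disclosed how its behavioral fingerprinting works, but the general idea can be illustrated with a toy scorer over features that plausibly correlate with distillation, such as prompt-template repetition and chain-of-thought elicitation. Every feature, marker phrase, and weight below is invented for illustration:

```python
def distillation_score(requests):
    """requests: list of prompt strings from one account.
    Returns a 0..1 heuristic score; higher = more distillation-like.
    Features and weights are illustrative, not Anthropic's system."""
    if not requests:
        return 0.0
    # Feature 1: template repetition. Distillation pipelines tend to reuse
    # a prompt skeleton, so few unique "shapes" relative to volume.
    shapes = {" ".join(r.split()[:5]) for r in requests}
    repetition = 1.0 - len(shapes) / len(requests)
    # Feature 2: reasoning elicitation. Prompts asking the model to narrate
    # its reasoning generate chain-of-thought training data on demand.
    cot_markers = ("step by step", "explain your reasoning", "think aloud")
    cot_rate = sum(
        any(m in r.lower() for m in cot_markers) for r in requests
    ) / len(requests)
    return 0.6 * repetition + 0.4 * cot_rate
```

A real system would operate on far richer signals (timing, token distributions, cross-account correlation), but even this sketch separates varied organic traffic from templated bulk extraction.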

Anthropic is sharing technical indicators with other AI labs, cloud providers, and government authorities. The message is clear: this requires industry-wide coordination.

For investors tracking AI infrastructure plays, this escalation adds another variable to the competitive landscape. Labs that can’t defend their models risk watching their R&D investments walk out the door—16 million queries at a time.

Image source: Shutterstock



© 2026 asiatokenfund.com - All Rights Reserved!
