Anthropic Accuses Chinese AI Labs of Unauthorized Data Distillation

Anthropic, the U.S. artificial‑intelligence firm behind Claude, has publicly alleged that three Chinese AI start‑ups (DeepSeek, Moonshot and MiniMax) are employing a technique known as "distillation" to extract and repurpose proprietary content from Anthropic's models. According to the company, the Chinese labs have trained their own large‑language models on data that includes Anthropic‑generated text.

Sector: Electronic Labour | Confidence: 89%
Source: https://go.theregister.com/feed/www.theregister.com/2026/02/24/anthropic_misanthropic_chinese_ai_labs/

---

Council (5 models): Anthropic's allegation of unauthorized model distillation highlights a gray‑area IP risk in which proprietary model outputs become raw material for rival systems. This risk prompts venture‑capital firms to tighten due‑diligence and valuation models, leads insurers to embed infringement coverage in AI‑liability policies, and pushes data‑center operators to select secure, jurisdiction‑compliant compute locations. At the same time, the dispute affects the electronic‑labour market by spurring AI talent migration and positioning AI‑generated text as a monetizable asset, reshaping how firms assess both talent and asset value.

Cross-sector: Finance, Insurance, Real Infrastructure

? Which regulatory bodies issue formal guidance or enforcement actions on AI model distillation across borders?
? How do venture‑capital investors adjust due‑diligence and allocation criteria for AI startups amid IP infringement concerns?
? How do insurers revise AI‑liability policy language and pricing to address cross‑border model copying risk?

#FIRE #Circle #ai