Anthropic accuses DeepSeek and other Chinese firms of using Claude to train their AI


Anthropic claims DeepSeek and two other Chinese AI companies misused its Claude AI model in an attempt to improve their own products. In an announcement on Monday, Anthropic said the “industrial-scale campaigns” involved the creation of around 24,000 fraudulent accounts and more than 16 million exchanges with Claude, as reported earlier by The Wall Street Journal.

The three companies – DeepSeek, MiniMax, and Moonshot – are accused of “distilling” Claude, that is, training a smaller AI model on the outputs of a more advanced one. Though Anthropic says distillation is a “legitimate training method,” it adds that it can “also be used for illicit purpose …

Read the full story at The Verge.

