The news: Anthropic will now require Claude Free, Pro, and Max users to decide whether their conversations can be used to train its AI. The new rules take effect September 28, and business customers remain exempt, per TechCrunch.
Some users on Reddit say the change is making them reconsider Anthropic, citing the five-year data-retention requirement as heavy-handed.
AI companies are scrambling for training data, often at the risk of infringing copyright. Last year, Nvidia, Apple, Salesforce, and Anthropic scraped thousands of YouTube videos for generative AI (genAI) model training without creators’ knowledge, according to Proof News, per Wired. This year, efforts appear to have shifted toward free and paid user conversations as a training source.
Why the shift matters: This marks a sharp break from Anthropic’s past policy of deleting chats within 30 days. By setting training to “on” by default, the company puts the burden on users to opt out.
Users may feel misled if they accept new terms without realizing their data will be stored for five years. Privacy experts warn that default-on toggles and buried notices undermine user choice.
Counterpoint: Without fresh, diverse user conversations, AI models risk stagnation—with accuracy plateaus, persistent biases, and lagging capabilities. Real-world data is vital to reflect the nuances of human language, intent, and behavior.
For consumers, the policy change might feel less like collaboration and more like a forced trade-off: stronger models at the cost of personal data. It also underscores the possibility that training sources are running dry.
Key stat: 71% of US adults do not use AI because they are worried about data privacy, per Menlo Ventures.
Their fears could be justified. In August, OpenAI abruptly discontinued a ChatGPT “share” feature after thousands of private AI chats unintentionally surfaced in public Google Search results.
Our take: Anthropic says its new policy is intended to empower user choice, but skepticism over privacy and consent could push users to opt out or seek alternatives.
As more AI providers prioritize data access over user comfort, transparency and trust will become differentiators in a crowded field. AI’s appetite for training data will continue to push privacy and copyright boundaries, and how well Anthropic manages user trust will determine whether the policy change aids or undermines adoption.