AI’s appetite for data tests Anthropic’s user trust

The news: Anthropic will now require Claude Free, Pro, and Max users to decide whether their conversations can be used to train its AI. The new rules take effect September 28, and business customers remain exempt, per TechCrunch.

Some users on Reddit say the change is making them reconsider Anthropic, citing the five-year data retention requirement as heavy-handed.

AI companies are scrambling for training data, sometimes at the risk of infringing copyright. Last year, Nvidia, Apple, Salesforce, and Anthropic scraped thousands of YouTube videos for generative AI (genAI) model training without creators’ knowledge, according to Proof News, per Wired. This year, those efforts appear to be shifting toward free and paid users’ conversations.

Why the shift matters: This marks a sharp break from Anthropic’s past policy of deleting chats within 30 days. By setting training to “on” by default, the company puts the burden on users to opt out. 

Users may feel misled if they accept new terms without realizing their data will be stored for five years. Privacy experts warn that default-on toggles and buried notices undermine user choice.

Counterpoint: Without fresh, diverse user conversations, AI models risk stagnation—with accuracy plateaus, persistent biases, and lagging capabilities. Real-world data is vital to reflect the nuances of human language, intent, and behavior.

For consumers, the policy change might feel less like collaboration and more like a forced trade-off: stronger models at the cost of personal data. It also underscores the possibility that training sources are running dry.

Key stat: 71% of US adults do not use AI because they are worried about data privacy, per Menlo Ventures. 

Their fears could be justified. In August, OpenAI abruptly discontinued a ChatGPT “share” feature after thousands of private AI chats unintentionally surfaced in public Google Search results.

Our take: Anthropic says its new policy is intended to empower user choice, but skepticism over privacy and consent could push users to opt out or seek alternatives.

As more AI providers prioritize data access over user comfort, transparency and trust will become differentiators in a crowded field. AI’s appetite for training data will continue to push privacy and copyright boundaries. Anthropic’s ability to manage trust will determine whether the policy change aids or undermines adoption.