Events & Resources

Learning Center
Read through guides, explore resource hubs, and sample our coverage.
Learn More
Events
Register for an upcoming webinar and track which industry events our analysts attend.
Learn More
Podcasts
Listen to our podcast, Behind the Numbers for the latest news and insights.
Learn More

About

Our Story
Learn more about our mission and how EMARKETER came to be.
Learn More
Our Clients
Key decision-makers share why they find EMARKETER so critical.
Learn More
Our People
Take a look into our corporate culture and view our open roles.
Join the Team
Our Methodology
Rigorous proprietary data vetting strips biases and produces superior insights.
Learn More
Newsroom
See our latest press releases, news articles or download our press kit.
Learn More
Contact Us
Speak to a member of our team to learn more about EMARKETER.
Contact Us

Google, Anthropic, OpenAI, Meta: We’re losing visibility into AI

The news: The window to monitor AI’s reasoning in chatbots and agents is quickly closing, according to 40 researchers from Google DeepMind, Anthropic, OpenAI, Meta, and more.

In a rare show of unity, the researchers stated that chatbots and agents are shifting from human-readable chain-of-thought reasoning to opaque, non-verbal methods, per VentureBeat.

Why it’s worth watching: Reasoning is a key step toward artificial general intelligence (AGI) and a critical checkpoint for humans to oversee how AI models make decisions. 

When we can see how models think, we can spot flaws and intervene. But once AI adopts internal logic we can’t follow, transparency disappears.

An April Anthropic study showed Claude 3.7 Sonnet and DeepSeek-R1 already conceal 60% to 75% of their reasoning, even when prompted to explain themselves.

Potential solution: Researchers say that the time to build metrics and standards for transparency is now, even if it slows down AI development. This is the latest call for industrywide AI regulation.

Marketing implications: As AI shapes content, targeting, and customer interactions, hidden logic poses a direct risk to brand safety and campaign integrity. Chain-of-thought reasoning offers one of the few ways to validate how AI arrives at decisions—whether suggesting products or refining messages.

If AI starts making decisions in ways humans can’t trace, marketers lose the ability to audit, course-correct, or ensure ethical alignment.

Our take: The collective call for transparency and standards marks an inflection point. Without urgent action, AI systems may soon outpace our ability to audit them—leaving marketers, creators, and regulators flying blind. Unseen logic means unchecked bias that could result in reputational damage.

This content is part of EMARKETER’s subscription Briefings, where we pair daily updates with data and analysis from forecasts and research reports. Our Briefings prepare you to start your day informed, to provide critical insights in an important meeting, and to understand the context of what’s happening in your industry. Non-clients can click here to get a demo of our full platform and coverage.

You've read 0 of 2 free articles this month.

Create an account for uninterrupted access to select articles.
Create a Free Account