
Well-being meets Wild West: OpenAI balances safety oversight with freer AI interactions

The news: OpenAI created an Expert Council on Well-Being and AI—a panel of eight behavioral and mental health specialists tasked with guiding how AI tools like ChatGPT and Sora interact with users, per Ars Technica.

This is OpenAI’s latest move to reinforce user safety; in September it strengthened protections for minors by automatically routing ChatGPT users into one of two versions: one for adolescents (13-17) and one for adults (18+).

A regulatory and legal imperative: The council will help define healthy AI interactions, a response that likely stems from mounting scrutiny by regulators and parents over the technology’s psychological risks.

The urgency is further underscored by a wrongful death lawsuit directly attributing a teen’s suicide to the chatbot, creating an existential liability risk. The council ostensibly provides critical third-party validation and expertise to help prevent such outcomes in the future.

Yes, but: CEO Sam Altman also announced on X that, “now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.” Altman also said the company plans to “allow even more, like erotica for verified adults.”

A double-edged strategy: OpenAI is executing a dual, seemingly contradictory, strategy—fortifying mental health safeguards while simultaneously relaxing content guardrails to expand AI’s use cases.

Advisory councils and wellness frameworks may shape policy, but they lack enforcement power. Without strict moderation or transparent accountability, bad actors can still exploit the system—especially as OpenAI broadens access to more flexible, less restricted tools.

Our take: As companies embed generative AI (genAI), expert-led safety frameworks are becoming table stakes for creative and commercial freedom.

OpenAI’s loosening guardrails shift responsibility to brands and users, who must now define and enforce their own ethical boundaries.

This content is part of EMARKETER’s subscription Briefings, where we pair daily updates with data and analysis from forecasts and research reports. Our Briefings prepare you to start your day informed, to provide critical insights in an important meeting, and to understand the context of what’s happening in your industry. Non-clients can click here to get a demo of our full platform and coverage.
