The news: Amid pressure to establish better child safety guardrails in the AI industry, Character AI will block users under 18 from chatting with bots on its platform starting November 25.
- For now, Character AI will estimate users’ ages based on the types of characters they choose to chat with and impose a two-hour time limit for those flagged as under 18.
- After the ban takes effect, minors will still be able to generate images, with safety limits in place, and review their prior chats.
Why it matters: Rather than limiting engagement time or restricting what users can see—methods used by digital platforms like YouTube and Roblox—Character AI is taking a blanket approach. This change could also alter ad targeting on the platform.
Character AI’s decision could limit advertisers’ ability to reach Gen Alpha, a demographic with high potential for spending and engagement.
“We’re making a very bold step to say for teen users, chatbots are not the way for entertainment, but there are much better ways to serve them,” Character AI CEO Karandeep Anand said, per The New York Times.
Zooming out: The change may protect users’ well-being, but it also shores up Character AI’s brand reputation and gets ahead of impending regulatory demands.
- Regulators are tightening child safety standards, and Character AI is currently facing a lawsuit over a young user who died by suicide after engaging heavily with the platform. 
- California Gov. Gavin Newsom signed a bill this month requiring safety guardrails on AI chatbots; it takes effect in January 2026.
What it means for advertisers: As regulatory and safety concerns mount, advertisers face greater scrutiny when trying to reach younger audiences. CMOs should assess how well their brands align with AI platforms and, on platforms that could pose safety risks to children, shift targeting efforts toward adult users.