The news: 60% of US educators say students are confiding in AI tools instead of teachers, counselors, or parents, according to a recent survey by digital safety company Linewize.
Nearly half (45%) of school leaders have observed students developing emotional attachments to AI chatbots or companions.
Linewize surveyed about 1,000 schools in the US, the UK, Australia, and New Zealand. The US findings come from 353 participants, including principals, counselors, and teachers.
Why it matters: AI chatbot use for mental health support is widespread, especially among younger users, but it's sparking controversy. Media reports and lawsuits filed by parents allege that AI chatbots have exacerbated mental health issues and led to self-harm, suicides, or violent behavior, raising questions about safety and safeguards.
Some AI platforms are responding with new mental health safeguards even as states—including Illinois, Nevada, and Utah—pass legislation to regulate AI therapy and require human oversight.
Implications for AI platforms and consumers: Regulation and expanded guardrails may reshape how AI is used for mental health, but they won't reduce demand. As healthcare costs rise and mental healthcare remains difficult to find and access, AI's appeal as an always-on, low-cost alternative will likely grow in 2026.
For AI platforms, the challenge is balancing broad access with safety: building tools that spot warning signs, set limits, and direct users to human help when needed, without discouraging the people who rely on them.