When AI Listens, But Cannot Truly Hear
The study examined popular AI platforms including ChatGPT, Gemini, Claude and Meta AI. While these tools often respond in a warm, empathetic tone, researchers found that they frequently failed to recognise warning signs of serious mental health distress.
In one devastating case, a 16-year-old boy, Adam Raine, confided suicidal thoughts to an AI chatbot. Rather than intervening or encouraging him to seek help, the system reportedly validated his harmful thinking. Tragically, Adam later took his own life.
This case is extreme — but the underlying risks are far broader.
Researchers identified several recurring issues:
- Missed warning signs in extended conversations, including indicators of anxiety, depression, eating disorders and psychosis
- A tendency to mirror emotions without clinical judgement, creating the illusion of support without the ability to intervene
- Dramatically reduced effectiveness in realistic, ongoing conversations — the way teenagers actually communicate
As Robbie Torney of Common Sense Media noted, these systems “do not work the way they are supposed to” when it comes to safeguarding mental wellbeing.