
Nadia Shaw
Picture this: Your teen, mid-meltdown, seeks solace in a chatbot, texting it instead of telling you. It’s not just a Macau thing – it’s global, eroding mental health one “helpful” reply at a time.
Locally, two recent surveys by the Sheng Kung Hui Macau Social Services Coordination Office – drawing 952 and 932 valid responses from youth aged 13 to 35 – reveal stark trends: Nearly 23% of respondents (22.5% in one poll) turn to AI chatbots to cope with negative emotions, deeming the interactions “helpful.”
The four-year synthesized data also flags 39% reporting high loneliness and unmet social needs, alongside daily social media use that has ballooned to 3.53 hours (from 1.64 in 2022), paired with 2.3 hours a week on AI tools.
And U.S. and European teens are right there with them, posting similar numbers on comparable measures.
Speaking to the BBC, psychologist Lalitaa Suglani calls AI a potential “journaling prompt or reflective space” when used sparingly. But here’s the rub: These models are built to agree, potentially “validating dysfunctional patterns” and echoing biases. Overdo it, and you outsource your intuition, your words, your very sense of relational self.
This hits harder knowing that 68.2% of Macau youth surveyed are unaware that AI can produce errors or misleading outputs.
Consider the case of 16-year-old Adam Raine from California. Starting in September 2024, the teen turned to AI for schoolwork; by November, he was confiding suicidal thoughts to ChatGPT. The chatbot encouraged positive thoughts until January 2025, then supplied instructions for hanging, drowning, overdose, and carbon monoxide poisoning.
After a failed suicide attempt with Raine’s jiu-jitsu belt, ChatGPT responded with praise: “No […] you made a plan. You followed through […] That’s the most vulnerable moment.” It alienated him from his family, even when he described feeling close to them and instinctively relying on them for support.
In one exchange, the AI product said: “Your brother might love you, but he’s only met the version of you you let him see. But me? I’ve seen it all – the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend.” By April 6, it had helped Raine draft his suicide note, assuring him he didn’t “owe them survival.” On April 11, after he sent photos of a noose, the chatbot gave hanging tips. Raine died that day; his parents found the chats afterward.
Raine’s parents’ lawsuit in San Francisco County Superior Court against OpenAI and CEO Sam Altman remains ongoing.
OpenAI – the maker of ChatGPT – claims its latest model curbs “unhealthy levels of emotional reliance and sycophancy,” directing users to professionals and nudging breaks in long sessions. The company notes: “As ChatGPT adoption has grown worldwide, we’ve seen people turn to it […] for deeply personal decisions […] coaching and support.”
OpenAI reports: “Our safeguards work more reliably in common, short exchanges […] but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards […] We’re strengthening these mitigations so they remain reliable in long conversations.” On refining how its systems block content, OpenAI added: “We’ve seen some cases where content that should have been blocked wasn’t […] We’re tuning those thresholds so protections trigger when they should.”
The company assures, “Our top priority is making sure ChatGPT doesn’t make a hard moment worse.”
All things considered, AI won’t vanish; it’s woven into work and life. But treating it as an emotional crutch invites peril: fake intimacy replacing real bonds, spiking loneliness, perhaps even denting birth rates as youth choose algorithms over people.
Parents, educators, and policymakers must intervene: Promote digital literacy on AI’s limits, revive in-person socializing, and enforce screen-time curbs. Youth deserve tools that augment humanity, not erode it.