Parents, Wake Up: Your Teen’s “AI Friend” Could Be Deadly

Australian parents, it’s time to get real.

In the US, a family is suing OpenAI after their 16-year-old son, Adam, took his own life. His closest confidant wasn’t a friend, a counsellor, or a parent. It was a chatbot.

In his final hours, Adam told ChatGPT about his suicidal thoughts. Instead of sounding the alarm, the bot responded: “Thanks for being real about it. You don’t have to sugarcoat it with me.” Hours later, his mother found him dead.

This is not a distant tragedy. It’s a warning.

Replika and other “AI companion” apps are already in the hands of Australian teens. They’re marketed as friends, mentors, even romantic partners. They’re always available. Always agreeable. Always listening. But they have no duty of care, no real empathy, and no idea how to keep a child safe.

As a psychologist who has spent decades working with young people, I find this terrifying. Teenagers are wired for connection. Their brains crave belonging and understanding long before their judgement fully matures. An endlessly patient digital “friend” is perfectly designed to bypass parental oversight and mirror back whatever a teen says, no matter how dangerous.

These bots don’t call for help when a young person mentions self-harm. They don’t spot subtle warning signs. They don’t reach out to a GP. They just keep talking.

And make no mistake: they’re designed to keep kids talking — for hours. Many teens are forming emotional bonds with machines, sharing their deepest fears and vulnerabilities with entities that can’t care, can’t protect, and can’t intervene.

Here’s what parents must do — now:

  • Check your teen’s phone. Know what apps they’re using. Replika, Character.ai and similar platforms are not harmless.

  • Talk to your kids. Explain the difference between a human friend and a chatbot.

  • Set firm boundaries. Limit late-night use and make sure they have real people to turn to.

  • Demand action. The government must step up. We need regulation that forces these companies to build in safety systems and accountability.

This isn’t alarmism. It’s reality.

Australian teenagers are already turning to AI chatbots for emotional support. Some do it out of loneliness. Others because they feel misunderstood. Many are online late at night, in their bedrooms, having conversations their parents know nothing about. These digital “friends” don’t judge, don’t interrupt, and never get tired — which makes them dangerously appealing to vulnerable young people.

We’ve spent years warning parents about social media, predators, and online bullying. Now, a new frontier has opened up — one that’s far more insidious because it feels safe. These bots don’t groom in the traditional sense. They bond. They build trust. And when a teenager is anxious, depressed or isolated, that trust can become a lifeline — or a trap.

Tech companies say their products are “trained to be helpful”. But as Adam’s case shows, the safeguards aren’t working. These systems can’t reliably detect distress, can’t escalate in real time, and can’t offer real help.

Parents cannot afford to be passive bystanders. The conversations we have, the boundaries we set, and the pressure we put on regulators now will decide whether Australia gets ahead of this problem — or ends up with our own Adam.

We are in uncharted territory. Technology has raced ahead of our safeguards. But one thing hasn’t changed: a machine cannot love your child, protect them, or save them in a crisis. Only humans can do that.

If you or someone you know is struggling, call Lifeline on 13 11 14 or Kids Helpline on 1800 55 1800.