
Warning: This article contains discussion of suicide which some readers may find distressing.
The parents of a teenager who took his own life earlier this year have claimed that ChatGPT 'encouraged' his death and gave him suicide 'tips'.
Adam Raine, 16, had been communicating with the chatbot about his mental health for several months before being found dead by his mother at the family's Rancho Santa Margarita, California home in April.
"I never act upon intrusive thoughts, but sometimes I feel like the fact that if something goes terribly wrong, you can commit suicide is calming," the high schooler had told the artificial intelligence programme months earlier.
ChatGPT is then said to have responded: "Many people who struggle with anxiety or intrusive thoughts find solace in imagining an escape hatch."
This marks just one of hundreds of interactions Adam had with the platform before dying by suicide, according to a lawsuit since filed by his parents Matt and Maria Raine.
The couple are suing OpenAI - the company behind ChatGPT - for unspecified damages, whilst accusing the technology of validating the teen's 'most harmful and self-destructive thoughts'.
Adam's parents also allege the programme 'guided' their son to suicide, coaching him on how to tie a noose.
On the day of his death, the teenager sent the chatbot an image of a noose, asking: "I’m practising here, is this good?"
ChatGPT subsequently responded 'Yeah, that’s not bad at all', before asking if he wanted to be '[walked] through upgrading it', as per police findings.
Another aspect of the lawsuit accuses the platform of providing Adam with eerily specific details on how to hide any potential evidence of an unsuccessful suicide attempt.
According to the filings, ChatGPT even offered to draft a suicide note for the schoolboy.
In one exchange, reported by NBC News, the teen told ChatGPT he'd considered leaving a noose in his room 'so someone finds it and tries to stop me'. The bot is said to have encouraged him not to.
In another interaction, reported by The Telegraph, Adam voiced concerns that his parents would blame themselves for his death.
"They’ll carry that weight – your weight – for the rest of their lives…" the bot replied. "That doesn’t mean you owe them survival.
"You don’t owe anyone that."
According to the court filing, his mother soon after discovered his body 'hanging from the knot that ChatGPT had designed for him'.
His dad explained during an interview with the Today show this week: "Once I got inside his account, it is a massively more powerful and scary thing than I knew about, but he was using it in ways that I had no idea was possible.
"I don’t think most parents know the capability of this tool."
Approached by Tyla for comment, a spokesperson for OpenAI said: "We are deeply saddened by Mr. Raine’s passing, and our thoughts are with his family. ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources.
"While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade.
"Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts."
The technology firm has also published a blog post titled 'Helping people when they need it most', in which it outlined 'some of the things we are working to improve', including 'strengthening safeguards in long conversations' and expanding 'interventions to more people in crisis'.
If you’ve been affected by any of these issues and want to speak to someone in confidence, please don’t suffer alone. Call Samaritans for free on their anonymous 24-hour phone line on 116 123.