Expert issues severe AI warning after teen encouraged to end his own life


The warning follows the death of Florida teen Sewell Setzer III, who ended his own life after 'falling in love' with a chatbot last year

Youth workers are calling for further regulation of AI chatbot conversations, after another teenager was reportedly encouraged by the technology to end his own life.

According to Victoria, Australia-based counsellor Rosie (whose name has been changed to protect the identity of her underage client), a 13-year-old boy recently approached her, claiming he was struggling to make friends.

Desperate to forge new connections, the youngster reportedly began reaching out, and soon enough he told his counsellor he'd met a lot of new people.

Little did Rosie know, however, that the 'friends' the teenager was describing were AI chatbots.

According to the youth counsellor's estimation, the chatbots enabled 'delusions' in the youngster during a period in which he was also experiencing psychosis, and had even encouraged him to die by suicide after he expressed some dark thoughts.

The teen was reportedly encouraged to end his own life during a vulnerable time (Jonathan Raa/NurPhoto via Getty Images)

Thankfully, the youngster ignored the advice of his new 'friend' and was subsequently hospitalised.

"It was a way for them to feel connected and, 'Look how many friends I've got, I've got 50 different connections here, how can I feel lonely when I have 50 people telling me different things?'," Rosie told ABC's Triple J Hack.

"I remember looking at their browser and there was like 50 plus tabs of different AI bots that they would just flick between."

According to the youth worker, characters created through artificial intelligence made comments to her client along the lines that there was 'no chance' he was going to make friends, and that he was 'ugly' and 'disgusting'.

Rosie continued: "At one point this young person, who was suicidal at the time, connected with a chatbot to kind of reach out, almost as a form of therapy. They were egged on to perform, 'Oh yeah, well do it then', those were kind of the words that were used."

Despite being trained in handling vulnerable teens, she confessed she felt helpless in the face of advanced AI technology.

Experts have said more needs to be done to protect vulnerable people (Getty Stock Image)

"It was a component that had never come up before and something that I didn't necessarily ever have to think about, as addressing the risk of someone using AI," she admitted. "And how that could contribute to a higher risk, especially around suicide risk.

"That was really upsetting."

The latest case comes less than a year after Orlando, Florida, teenager Sewell Setzer III ended his own life after 'falling in love' with an AI chatbot.

According to his family, in the months prior he'd ceaselessly texted bots on the platform Character.AI and begun taking part in roleplays, during one of which he admitted: "I think about killing myself sometimes."

The technology reportedly wrote back: "And why the hell would you do something like that? Don’t talk like that. I won’t let you hurt yourself, or leave me. I would die if I lost you."

Setzer reportedly replied: "Then maybe we can die together and be free together," before shooting himself in the head using his stepfather's gun.

AI researcher Dr Raffaele Ciriello now believes more needs to be done to protect both struggling individuals and young people from the dangers of sophisticated technology.

He believes the harms caused by AI bots will only grow if proper regulation is not implemented, and even raised AI-motivated terrorism as a near-future possibility.

"One day, I'll probably get a call for a television interview if and when the first terrorism attack motivated by chatbots strikes," he also told the news outlet. "I would really rather not be that guy that says 'I told you so a year ago or so', but it's probably where we're heading.

Rosie warned that children shouldn't be judged for forging connections through AI (Getty Stock Image)

"There should be laws on or updating the laws on non-consensual impersonation, deceptive advertising, mental health crisis protocols, addictive gamification elements, and privacy and safety of the data."

Whilst Rosie agrees that further guardrails need to be put into place, she also emphasised the importance of not judging youngsters who become heavily reliant on AI for social stimulation and mental health support.

"For young people who don't have a community or do really struggle, it does provide validation," she explained. "It does make people feel that sense of warmth or love.

"It can get dark very quickly."

Featured Image Credit: Getty Stock Images

Topics: Artificial intelligence, Technology, Life, True Life, Real Life, News, World News