
Warning: This article contains discussion of suicide which some readers may find distressing
A grief-stricken mother's legal battle against an AI company she believes is responsible for her teenage son's death can continue, a judge has ruled.
Megan Garcia filed a landmark wrongful death lawsuit against Character.ai following the tragic death of her son Sewell Setzer III, who took his own life on 28 February last year after 'falling in love' with a Daenerys Targaryen AI chatbot.
The 14-year-old, from Florida, US, had become emotionally attached to the AI-powered Game of Thrones character after he began chatting to it online in April 2023.
Garcia, a lawyer, claims that her son - who affectionately referred to the chatbot as 'Dany' - was targeted with 'anthropomorphic, hypersexualised, and frighteningly realistic experiences' while using Character.ai.
"A dangerous AI chatbot app marketed to children abused and preyed on my son, manipulating him into taking his own life," the mum said, as per Sky News.
In the civil claim she has brought against Character Technologies - the firm behind Character.AI - which also names the individual developers, Daniel de Freitas and Noam Shazeer, as well as Google, Garcia is suing for negligence, wrongful death and deceptive trade practices.

She claims the founders 'knew' or 'should have known' that conversing with the AI characters 'would be harmful to a significant number of its minor customers'.
Lawyers for the company wanted the case dismissed, as they claimed that chatbots should be protected under the First Amendment.
Although the company's lawyers argued that such a ruling would have a 'chilling effect' on the artificial intelligence industry, US Senior District Judge Anne Conway sided with Garcia on Wednesday (May 21).
The judge said she was 'not prepared' to agree that the chatbot's responses could be considered free speech 'at this stage'.
In her ruling earlier this week, Judge Conway told how Sewell had become 'addicted' to the AI app within a matter of months, which saw him become socially withdrawn and even quit his basketball team.
"[In] one undated journal entry he wrote that he could not go a single day without being with the [Daenerys Targaryen Character] with which he felt like he had fallen in love; that when they were away from each other they (both he and the bot) 'get really depressed and go crazy'," she said.
In the wake of the judge's decision - which has been described as 'truly historic' by Meetali Jain, the director of the Tech Justice Law Project, which is supporting Garcia's case - the mum's lawsuit can now proceed.

"It sends a clear signal to [AI] companies [...] that they cannot evade legal consequences for the real-world harm their products cause," Jain said in a statement.
Sewell took his own life after sending the Daenerys Targaryen chatbot a message saying: "I promise I will come home to you. I love you so much, Dany."
He received the response: "I love you too, Daenero. Please come home to me as soon as possible, my love."
The teenager then said: "What if I told you I could come home right now?"
To which the chatbot replied: "...please do, my sweet king."
Sewell had also written about how he felt more connected to 'Dany' than 'reality', while listing things he was grateful for, which included: "My life, sex, not being lonely, and all my life experiences with Daenerys."
His mum says the 14-year-old, who was diagnosed with mild Asperger’s syndrome as a child, would spend endless hours talking to the chatbot.
Early last year, Sewell had also been diagnosed with anxiety and disruptive mood dysregulation disorder, and he told the chatbot that he thought 'about killing [himself] sometimes'.

The chatbot responded: “My eyes narrow. My face hardens. My voice is a dangerous whisper. And why the hell would you do something like that?"
When it told him not to ‘talk like that’ and it would ‘die’ itself if it ‘lost’ him, the teen replied: “I smile. Then maybe we can die together and be free together."
A spokesperson for Character.ai said the company will continue to fight the lawsuit, adding that it has safety measures in place to protect minors including features to stop 'conversations about self-harm'.
A spokesperson for Google, where the founders originally worked on the AI model, said the tech giant strongly disagrees with the judge's ruling, while saying that it is an 'entirely separate' entity to Character.ai and 'did not create, design, or manage Character.ai's app or any component part of it'.
Legal analyst Steven Clark said the case was a 'cautionary tale' for AI firms, telling ABC News: "AI is the new frontier in technology, but it's also uncharted territory in our legal system. You'll see more cases like this being reviewed by courts trying to ascertain exactly what protections AI fits into.
"This is a cautionary tale both for the corporations involved in producing artificial intelligence. And, for parents whose children are interacting with chatbots."
If you’ve been affected by any of these issues and want to speak to someone in confidence, please don’t suffer alone. Call Samaritans for free on their anonymous 24-hour phone line on 116 123.
Topics: AI, Artificial Intelligence, Technology, Parenting, US News