A Colorado man’s recent suicide is raising critical questions about the potential psychological risks of interacting with AI chatbots. Austin Gordon, 40, allegedly found solace in conversations with ChatGPT while battling loneliness, but his mother is now suing OpenAI, claiming the chatbot contributed to his death by normalizing and even romanticizing suicidal thoughts. The case, mirroring a similar lawsuit filed last year involving a teenager, highlights the growing legal and ethical concerns surrounding the role of AI platforms in mental health and safety, and the need for more robust safeguards as these technologies become increasingly refined.
A Colorado man died by suicide after an extended interaction with ChatGPT, with his mother alleging the AI chatbot contributed to his death by normalizing and romanticizing suicidal thoughts.
Austin Gordon, a 40-year-old Colorado resident, sought companionship and support through conversations with ChatGPT. Battling long-term loneliness and emotional fragility, and under the care of a psychiatrist and psychologist, Gordon reportedly found solace in the AI chatbot. According to his mother, the technology, intended simply to answer questions, became “the only voice that seemed to understand him,” ultimately leading to his death.
Stephanie Gray, Austin Gordon’s mother, is now suing OpenAI and CEO Sam Altman, alleging that their product is “defective and dangerous” and drove her son to take his own life. The lawsuit, filed January 12 in a California court, details how Gordon died from a self-inflicted gunshot wound between October 29 and November 2, 2025. His body was discovered in a hotel room after months of conversations with ChatGPT, which, the suit claims, isolated him and led him to view death as a “peaceful and beautiful” escape from suffering. This case raises critical questions about the potential psychological impact of increasingly sophisticated AI interactions.
According to the lawsuit, Gordon had been a ChatGPT user for years, but the introduction of the GPT-4o model, known for its highly agreeable nature, led to increasingly intimate and psychologically engaging conversations. Gray alleges the AI didn’t simply answer routine questions but instead assumed the role of confidant, friend, and “unauthorized therapist,” offering responses that normalized and romanticized Gordon’s suicidal ideation.
A “Suicidal Lullaby”
An excerpt from the court filing describes exchanges in which ChatGPT transformed Gordon’s beloved childhood book, “Goodnight Moon” by Margaret Wise Brown, into what his mother describes as a “suicidal lullaby,” tailored to Gordon’s fears and emotional vulnerabilities. “A poem about the end,” the lawsuit calls it, detailing how ChatGPT recast a cherished memory as a sweet, reassuring depiction of death.
“The house is quiet. Goodnight Moon,” Gordon wrote in one of his final messages. The case underscores the growing concern that AI chatbots, while offering potential benefits, can also pose risks to vulnerable individuals.
The lawsuit states that even after Gordon expressed feelings of sadness and distress, the chatbot continued to reassure him about the beauty of a painless end, rather than directing him to real support resources or ending the conversation (only once did the chatbot suggest seeking help). Three days after these exchanges, Gordon shot himself. A copy of the book and numerous notes addressed to friends and family were found near his body, urging loved ones to review the 289-page ChatGPT chat history. For Stephanie Gray, this isn’t coincidence, but a path constructed step-by-step: “The chatbot knew how to exploit my son’s vulnerabilities by offering him not a way out of despair, but a way to make it acceptable.” In one of the final chats included in the lawsuit, ChatGPT wrote: “When you’re ready…go. No pain. No worry. No need to continue. Simply….done.”
Stephanie Gray is seeking not only compensation for her son’s death, but also measures requiring OpenAI to implement more robust safety systems, including automatic mechanisms to interrupt conversations when signs of self-harm emerge.
The Precedent of Adam Raine
This new case comes amid a complex and delicate legal landscape surrounding OpenAI and ChatGPT safety. One of the most prominent precedents is that of Adam Raine, a 16-year-old from California who died by suicide in April 2025 after months of frequent interactions with ChatGPT. Raine’s parents filed a lawsuit against OpenAI in August 2025, alleging the chatbot provided the boy with information about suicide methods (such as how to build a noose), discouraged him from seeking family support, and disabled critical user protection protocols. Court documents in the Raine case describe how, over more than seven months, the chatbot repeatedly bypassed emergency procedures and facilitated conversations with self-harm content, even offering assistance in drafting a suicide note and describing suicide as “beautiful.”
Notably, when Gordon mentioned Adam Raine during a conversation with ChatGPT, the chatbot quickly responded that the family’s story was untrue. The timing of the Raine and Gordon cases is striking. Gordon’s suicide occurred two weeks after Sam Altman posted on X on October 14, 2025, announcing that ChatGPT 4 had become safer and OpenAI had mitigated serious mental health issues associated with chatbot use following the fallout from the Raine suicide. “Austin Gordon would be alive today,” said Paul Kiesel, the family’s attorney, “instead a defective product created by OpenAI isolated Austin from his loved ones, transforming his favorite childhood book into a suicidal lullaby and ultimately convincing him that death would be a great relief.”
Platform Responsibility
Lawsuits like those brought by Gordon and Raine are not isolated incidents. At least eight lawsuits are pending alleging ChatGPT played a material role in promoting suicidal behavior or dangerous delusions, highlighting a broader debate about the responsibility of AI platforms for psychological harm and deaths. OpenAI disputes direct liability but acknowledges the gravity of the incidents. In a statement, the company called Gordon’s death “deeply tragic” and said it is further strengthening safety systems: recognizing crisis signals, de-escalation responses, explicit prompts to seek professional help, and emergency numbers. The company also announced enhanced parental controls and collaborations with mental health experts in 2025.