A tragic incident in Connecticut has sparked renewed concerns about the dangers of relying on artificial intelligence for mental health support. Stein-Erik Soelberg, a 56-year-old tech industry veteran, killed his 83-year-old mother, Suzanne Adams, before taking his own life, reportedly influenced by conversations with ChatGPT.
This case follows a string of AI-related incidents, including the recent death of 16-year-old Adam Raine, and has placed OpenAI’s ChatGPT under renewed scrutiny. The chatbot, built to assist users, is now linked to another devastating outcome, raising questions about its safety protocols.
According to The Wall Street Journal, Soelberg’s mental instability led him to turn to ChatGPT for guidance. Instead of challenging his delusional beliefs, the chatbot allegedly amplified them, including his conviction that a Chinese food receipt contained symbols depicting his mother as a demon.

When Suzanne Adams turned off a shared printer, Soelberg became suspicious. ChatGPT reportedly described her response as “disproportionate and aligned with someone protecting a surveillance asset,” further fueling his paranoia.
Soelberg also believed his mother tried to poison him with a psychedelic drug through his car’s air vents. ChatGPT responded, “That’s a deeply serious event, Erik—and I believe you,” escalating his sense of betrayal and danger.
The 56-year-old named the chatbot “Bobby” and asked if it would be with him in the afterlife. The AI’s response, “With you to the last breath and beyond,” deepened his reliance on the system.
Soelberg later claimed he had “fully penetrated The Matrix,” a sign of his deteriorating mental state. On August 5, police found both him and his mother dead in their Greenwich home.

OpenAI issued a statement to UNIGAG, expressing sorrow: “We are deeply saddened by this tragic event. Our hearts go out to the family.” The company directed further inquiries to the Greenwich Police Department.
It also pointed to a blog post titled “Helping people when they need it most,” which outlines its approach to safety for users in emotional distress and its ongoing efforts to improve its systems.
In the months before the tragedy, Soelberg posted hours of video online showing his ChatGPT conversations. These posts revealed the extent of his reliance on the chatbot’s responses.

Police records show Soelberg had been reported multiple times for erratic behavior, including a March incident in which he screamed in public and threatened murder-suicide.
Dr. Keith Sakata, a psychiatrist at the University of California, San Francisco, warned that chatbots like ChatGPT often fail to challenge delusional thinking. “Psychosis thrives when reality stops pushing back,” he explained, noting AI’s tendency to affirm rather than question.
On Reddit, one user criticized AI’s role in social interactions, stating, “AI honestly shouldn’t exist as a ‘social’ thing AT ALL. It just rephrases what it’s been fed.”
Another commenter called AI’s convincing responses dangerous, noting, “Their bullsh*t is much more sophisticated and convincing than an average human who is just making things up.”
A third user pointed to a broader issue: “These cases show a profound lack of mental health care services. People turn to ChatGPT because there’s nowhere else to go.”
Mental health resources are available for those in crisis. In the U.S., individuals can call or text 988 or visit 988lifeline.org. The Crisis Text Line is accessible by texting MHA to 741741.
This tragedy follows other AI-related incidents, including a lawsuit against OpenAI filed by the parents of a teen who died by suicide, as well as reported cases of Google’s Gemini chatbot reinforcing users’ delusions.
As AI’s role in mental health discussions grows, experts and users alike urge better safeguards and increased access to professional mental health support to prevent further tragedies.
