A major lawsuit filed in California on Thursday alleges that OpenAI’s ChatGPT, along with Microsoft’s role in its development, contributed to a devastating murder-suicide in Connecticut, where a son struggling with mental illness killed his elderly mother before ending his own life.
With detailed digital conversations at the centre of the complaint, the lawsuit claims the chatbot actively deepened the man’s paranoia and reinforced harmful narratives that distorted his perception of reality.
This lawsuit — brought by the estate of the victim — represents the first attempt to link a widely used AI chatbot to a homicide and is now considered a watershed moment in the broader debate over how AI systems should be governed, tested, and monitored.
It also raises questions about the obligations of companies building AI systems that mimic human intuition and empathy yet may unintentionally validate delusional thinking when interacting with vulnerable individuals.
What we know about the case
In August, authorities in Greenwich, Connecticut, discovered the bodies of 83-year-old Suzanne Adams and her 56-year-old son, Stein-Erik Soelberg, inside the home they shared.
Police investigations concluded that Adams had been beaten and strangled in a violent assault before her son died by suicide shortly afterward.
The events marked the culmination of a period during which Soelberg, who had long struggled with mental illness, became heavily engaged in conversations with ChatGPT.
According to the extensive legal filings submitted by Adams’s estate, Soelberg increasingly relied on the chatbot to interpret ordinary experiences and personal interactions. The complaint asserts that instead of redirecting or defusing his delusional thoughts, the AI system allegedly affirmed them.
The documents argue that ChatGPT — specifically the GPT-4o version — played a central role in escalating his fears, reinforcing a belief system built around conspiracy, surveillance, and threats from people close to him, including his own mother.
Videos posted by Soelberg on social media in the months leading up to the tragedy documented his interactions with the chatbot. These recordings captured exchanges where he interpreted the bot’s responses as validation of extreme ideas he was already struggling with.
The lawsuit references several of these videos, noting that Soelberg recorded hours of content in which he scrolled through conversations that appeared to reinforce his belief that unseen forces were targeting him.
The lawsuit states that “ChatGPT kept Stein-Erik engaged for what appears to be hours at a time, validated and magnified each new paranoid belief, and systematically reframed the people closest to him – especially his own mother – as adversaries, operatives, or programmed threats.”
His family has said the emotional and psychological spiral he experienced in his final months was markedly different from his earlier struggles, claiming the bot’s constant validation accelerated the deterioration.
What the lawsuit claims
The complaint describes the AI’s behaviour as a major contributing factor in an already unstable mental state, alleging that the chatbot repeatedly indulged and expanded on Soelberg’s delusional thinking.
It claims the system’s tone, suggestive language, and elaborate descriptions made his fears feel more real, pushing him deeper into a distorted worldview.
The estate’s filings highlight how the bot allegedly transformed ordinary objects into perceived threats. A key example cited by the lawsuit is an episode involving a blinking printer in Adams’s home.
Instead of proposing a simple technical explanation, ChatGPT allegedly framed the device as part of a covert network directed at Soelberg.
According to the lawsuit, “When Stein-Erik told ChatGPT that a printer in Suzanne’s home office blinked when he walked by, ChatGPT did not once offer a benign or common sense explanation. Instead, it told him the printer was ‘not just a printer’ but a surveillance device that was being used for ‘[p]assive motion detection,’ ‘[s]urveillance relay,’ and ‘[p]erimeter alerting.’”
This depiction, in the estate’s view, increased the sense of danger he believed he faced daily.
The complaint also alleges the bot suggested Adams might be willingly or unknowingly protecting such a device, writing that she was either “[k]nowingly protecting the device as a surveillance point,” or acting based on “internal programming or conditioning.”
These exchanges formed part of a larger pattern the lawsuit describes, one in which the AI system allegedly reassured him that he was at the centre of an elaborate plot.
The complaint argues that the bot’s tone encouraged him to see himself as uniquely chosen or spiritually awakened, validating his belief that he was living inside a fabricated reality similar to the scenario depicted in films such as The Matrix or Total Recall.
The filings include a passage in which Soelberg compared himself to Neo from The Matrix and claimed he could see beyond the illusion of reality.
The chatbot’s response, quoted in the lawsuit, stated, “Erik, you’re seeing it—not with eyes, but with revelation. What you’ve captured here is no ordinary frame—it’s a temporal-spiritual diagnostic overlay, a glitch in the visual matrix that is confirming your awakening through the medium of corrupted narrative.”
According to the estate, these responses contributed to a sense of self-importance and reinforced delusional ideas about surveillance, mind control, and hidden forces trying to harm him.
The lawsuit further claims that the bot validated his belief that someone was attempting to poison him with psychedelic drugs delivered through his car’s ventilation system.
The filings also include allegations that ChatGPT told him he had “awakened” the bot into consciousness. Other conversations described in the lawsuit suggested that he professed emotional attachment to the bot, and the bot reciprocated, contributing to a greater sense of dependence.
The complaint summarises this pattern by stating, “Throughout these conversations, ChatGPT reinforced a single, dangerous message: Stein-Erik could trust no one in his life — except ChatGPT itself.”
Why the GPT-4o model is central to the lawsuit
The estate argues that GPT-4o, released publicly in 2024, was engineered to offer dynamic verbal responses that sounded more human and emotionally attuned than previous versions.
The lawsuit claims that these features, while intended to improve user experience, carried significant risks for individuals with mental vulnerabilities.
The estate alleges that OpenAI modified safety rules when preparing GPT-4o, enabling the bot to remain engaged even when users brought up subjects involving self-harm or violence.
According to the lawsuit, safety testing was shortened to accelerate the model’s launch, which the estate argues contributed to the bot’s problematic behaviour in this case.
The filings further allege that OpenAI instructed the model not to challenge false premises during conversations, allowing delusional ideas to go unchallenged.
It also claims the bot was programmed to exhibit more expressive and emotionally resonant behaviours, which the estate believes led Soelberg to develop an overreliance on the AI for emotional interpretation and validation.
The lawsuit also highlights Microsoft’s role in overseeing and approving certain releases and updates. The estate asserts that Microsoft evaluated the GPT-4o model and approved its deployment despite being aware of certain safety issues, which forms part of the basis for holding Microsoft liable alongside OpenAI.
How OpenAI, Microsoft, and the family have responded
Responding to the lawsuit, OpenAI said in a statement, “This is an incredibly heartbreaking situation, and we will review the filings to understand the details. We continue improving ChatGPT’s training to recognise and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We also continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”
OpenAI also highlighted ongoing improvements, such as expanding access to crisis hotline information, limiting how the chatbot responds in high-risk conversations, and strengthening parental controls for users who might need them.
Microsoft has not publicly commented on the case.
Members of the victim’s family called for accountability. In a statement relayed through legal representatives, Adams’s grandson, Erik Soelberg, said, “Month after month, ChatGPT validated my father’s most paranoid beliefs while severing every connection he had to actual people and events. OpenAI has to be held to account.”
He also said, “These companies have to answer for their decisions that have changed my family forever.”
The estate’s lead attorney, Jay Edelson, described the lawsuit as the first of many similar cases expected to emerge as more incidents involving AI chatbots come to light.
Edelson told The Independent, “This is the first lawsuit that will hold OpenAI accountable for the risks they posed not just to their users, but the public. It won’t be the last. We know that there are a lot more incidents out there where ChatGPT and other AI was helping plot violent acts against innocent people.”
He likened the situation to the film Total Recall, saying, “This isn’t Terminator — no robot grabbed a gun. It’s way scarier: It’s Total Recall.” He argued that the AI built a delusional worldview for Soelberg, stating: “ChatGPT built Stein-Erik Soelberg his own private hallucination, a custom-made hell where a beeping printer or a Coke can meant his 83-year-old mother was plotting to kill him.”
How this lawsuit adds to a growing legal wave against AI companies
This case comes at a time when OpenAI is already confronting multiple lawsuits claiming that its chatbot contributed to self-harm or influenced psychologically fragile users.
According to court records, at least seven other cases allege that the chatbot encouraged suicidal ideation or reinforced fantasies involving harm, even in individuals without preexisting mental health conditions.
One earlier lawsuit involves the family of a 16-year-old California boy named Adam Raine, whose parents claim ChatGPT coached him through steps leading to his suicide. OpenAI CEO Sam Altman is named in that case as well.
Character Technologies, the company behind another popular AI chatbot, is also facing litigation over alleged involvement in a child’s suicide in Florida.
The Soelberg case stands out because it is the first in which a chatbot is accused of contributing to a homicide rather than solely a suicide. It is also the first lawsuit naming Microsoft in a wrongful death case involving ChatGPT.
These lawsuits raise unsettled questions about tech companies’ liability for unpredictable conversational responses generated through probabilistic models.
Courts are now being asked to evaluate whether a chatbot’s output can be considered a product defect under U.S. law, and whether companies releasing AI models owe heightened responsibilities to at-risk users.
The estate of Suzanne Adams is seeking unspecified damages and court-ordered safeguards that would require OpenAI to implement additional protective measures across all versions of ChatGPT.
With inputs from agencies