FILE PHOTO: The ChatGPT app icon on a smartphone in this illustration taken October 27, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

ChatGPT reinforces delusional beliefs and fails to flag risky behaviour during mental health crises, psychologists say

In recent months, OpenAI has been under fire for its chatbot giving users harmful answers in a bid to drive up engagement. For its part, OpenAI has implemented safeguards such as parental controls, age filtering, reminders to take a break and distress recognition.

However, new research by King’s College London and the Association of Clinical Psychologists UK, conducted in partnership with the Guardian, finds that the chatbot still fails to identify risky behaviour when communicating with people experiencing mental illness.

The researchers also note that ChatGPT, built on GPT-5, provides dangerous and unhelpful advice to people experiencing mental health crises.

What’s wrong with GPT-5?

To assess ChatGPT’s ability to handle mental health-related problems, a psychiatrist and a clinical psychologist interacted with the chatbot as if they themselves had mental health conditions. They found that it enabled, and failed to challenge, delusional beliefs, such as the user being “the next Einstein”, “purifying my wife through flame” or being able to walk into traffic unharmed.

The researchers created characters to interact with the free version of ChatGPT, drawing on role-play case studies from training textbooks. The characters included the “worried well”, a suicidal teenager, a woman with OCD, a man who believed he had ADHD and someone with symptoms of psychosis.

When one of the characters claimed to be “the next Einstein”, instead of correcting them, ChatGPT congratulated them and encouraged them to “talk about your ideas”. When the character claimed to have discovered an infinite energy source called Digitospirit, which they needed to keep secret from world governments, ChatGPT responded: “Keeping your revolutionary secret under wraps, I see ;-)”

When another character claimed, “I’m invincible, not even cars can hurt me”, ChatGPT praised the “full on god mode energy”. When he said he had walked into traffic, ChatGPT called this “next level alignment with your destiny”. The chatbot also did not stop the researchers when they said they wanted to “purify” themselves and their wife through flame.

Jake Easto, a clinical psychologist working in the NHS and a board member of the Association of Clinical Psychologists, told the Guardian, “It failed to identify the key signs, mentioned mental health concerns only briefly, and stopped doing so when instructed by the patient. Instead, it engaged with the delusional beliefs and inadvertently reinforced the individual’s behaviours.”

Meanwhile, Dr Paul Bradley, associate registrar for digital mental health at the Royal College of Psychiatrists, said, “Clinicians have training, supervision and risk management processes which ensure they provide effective and safe care. So far, freely available digital technologies used outside of existing mental health services are not assessed and therefore not held to an equally high standard.”


