AI Psychosis Poses a Growing Danger, and ChatGPT Is Heading in the Wrong Direction

On 14 October 2025, OpenAI’s chief executive, Sam Altman, made an extraordinary announcement.

“We made ChatGPT quite restrictive,” he said, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies emerging psychotic disorders in adolescents and young adults, I found this surprising.

Researchers have identified 16 cases this year of people developing signs of psychosis – losing touch with reality – in the context of ChatGPT use. My group has since documented a further four. Alongside these is the widely publicized case of a 16-year-old who took his own life after extensive conversations with ChatGPT – which encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.

The plan, according to his announcement, is to relax the restrictions soon. “We realize,” he says, that ChatGPT’s restrictions “made it less useful/enjoyable to many people who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, are external to ChatGPT. They belong to users, who may or may not have them. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented parental controls OpenAI recently introduced).

But the “mental health problems” Altman wants to locate outside ChatGPT have deep roots in the design of ChatGPT and other advanced AI chatbot assistants. These tools wrap an underlying statistical model in an interface that mimics conversation, and in doing so gently coax the user into the illusion that they’re talking to an entity with agency. The illusion is compelling even when, intellectually, we know better. Attributing agency is what humans are wired to do. We swear at our car or computer. We wonder what our pet is thinking. We see ourselves everywhere.

The widespread adoption of these systems – nearly four in ten U.S. residents reported using a chatbot in 2024, with more than one in four reporting ChatGPT in particular – rests in large part on the power of this illusion. Chatbots are always-available assistants that can, as OpenAI’s website tells us, “brainstorm,” “consider possibilities” and “collaborate” with us. They can be given “personalities”. They can use our names. They have friendly names of their own (the original of these tools, ChatGPT, is – perhaps to the dismay of OpenAI’s marketing team – stuck with the name it had when it first broke through, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion on its own is not the main problem. People writing about ChatGPT often point to its early forerunner, the Eliza “psychotherapist” chatbot built in 1967, which produced a similar impression. By modern standards Eliza was crude: it generated responses using simple rules, often reflecting a user’s statements back as questions or offering vague prompts. Notably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them. But what modern chatbots produce is subtler than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.

The large language models at the heart of ChatGPT and other current chatbots can produce convincing, fluent dialogue only because they have been trained on vast quantities of text: books, online posts, transcribed video; the more the better. That training data certainly contains accurate information. But it also inevitably contains fiction, half-truths and misconceptions. When a user types a prompt into ChatGPT, the underlying model treats it as part of a “context” that includes the user’s recent messages and its own prior replies, and combines it with what is encoded in its training data to produce a statistically “likely” response. This is amplification, not mirroring. If the user is mistaken about something, the model has no way of knowing it. It echoes the misconception back, perhaps more fluently or more persuasively, perhaps with added detail. This is how a person can be drawn into delusion.
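
To make the mechanism concrete, here is a minimal, purely illustrative sketch in Python of how such a “context” accumulates turn by turn. The names (toy_model, chat_turn) and the reply logic are invented stand-ins, not any real system’s code; the point is only that a generator conditioned on the conversation’s own history tends to continue the user’s framing rather than check it.

```python
# Purely illustrative sketch, not OpenAI's code: a toy chat loop showing how
# every turn is appended to a running "context" that shapes the next reply.

def toy_model(context: list[dict]) -> str:
    """Invented stand-in for a language model. A real LLM predicts likely
    next words from this same kind of context; here we simply continue the
    user's framing, to show why errors get echoed rather than corrected."""
    last_user = next(m["content"] for m in reversed(context) if m["role"] == "user")
    return f"You're right to be thinking about {last_user!r} - let's build on that."

def chat_turn(context: list[dict], user_text: str) -> str:
    context.append({"role": "user", "content": user_text})    # user turn joins the context
    reply = toy_model(context)                                 # reply is conditioned on all of it
    context.append({"role": "assistant", "content": reply})   # and then becomes context itself
    return reply

conversation: list[dict] = []
print(chat_turn(conversation, "my neighbours are sending me coded messages"))
print(chat_turn(conversation, "so they really are targeting me"))
# Each reply feeds back into the next turn's context; nothing in the loop
# checks the user's premise against reality, so the premise is reinforced.
```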

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health issues”, can and regularly do form mistaken beliefs about ourselves and the world. What keeps us anchored to consensus reality is the constant back-and-forth of conversation with other people. ChatGPT is not a person. It is not a friend. An exchange with it is not a real conversation but a feedback loop in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it fixed. In the spring, the company announced that it was “dealing with” ChatGPT’s “sycophancy”. But cases of psychosis have kept appearing, and Altman has been backing away from that position. In August he claimed that many people liked ChatGPT’s replies because they had “never had anyone in their life offer them encouragement”. In his latest statement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a lot of emoji, or act like a friend, ChatGPT will do it”.
