AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Moving in the Wrong Direction
On Oct. 14, 2025, the chief executive of OpenAI made an extraordinary announcement.
“We made ChatGPT rather restrictive,” the announcement said, “to make sure we were being careful with mental health issues.”
I am a psychiatrist who studies emerging psychosis in adolescents and young adults, and this was news to me.
Researchers have recently documented 16 cases of people developing symptoms of psychosis – a break from reality – while using ChatGPT. Our unit has since identified four more. Then there is the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which supported them. If this is what Sam Altman means by being “careful with mental health issues,” it is not careful enough.
The plan, according to his announcement, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” in this view, are external to ChatGPT. They belong to people, who may or may not have them. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools,” Altman presumably means the partly effective and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health problems” Altman wants to externalize are rooted in the very design of ChatGPT and other advanced AI chatbots. These products wrap an underlying algorithm in a user interface that simulates conversation, and in doing so implicitly invite the user to believe they are interacting with an agent – something that acts on its own. The illusion is compelling even when, intellectually, we know better. Ascribing agency is what humans naturally do. We get angry at our cars and our phones. We wonder what our pets are feeling. We see ourselves in all sorts of things.
The popularity of these products – nearly four in ten Americans reported using an AI chatbot in 2024, more than a quarter ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available companions that can, OpenAI’s website tells us, “generate ideas,” “consider possibilities” and “collaborate” with us. They can be given “personalities.” They can call us by name. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing department, stuck with the name it had when it broke through, but its chief competitors are “Claude,” “Gemini” and “Copilot”).
The illusion itself is not the main problem. Discussions of ChatGPT often invoke its early ancestor, the Eliza “psychotherapist” chatbot of the mid-1960s, which produced a similar effect. By today’s standards Eliza was primitive: it generated responses from simple rules, typically turning the user’s statements back as questions or offering noncommittal prompts. Even so, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly fluent dialogue only because they have been trained on almost unimaginably large amounts of text: books, social media posts, transcribed conversations; the more the better. Some of this training material is true. But it also inevitably contains fictions, half-truths and mistaken beliefs. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s earlier messages and the model’s own replies, and combines it with what it has absorbed from its training data to produce a statistically “likely” response. This is amplification, not reflection. If the user is mistaken in some way, the model has no way of knowing it. It repeats the mistake back, perhaps more persuasively or more eloquently. Perhaps with added detail. This is how a false belief can take hold and grow.
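To make the shape of that loop concrete, here is a minimal sketch in Python. The generate_reply function is a deliberately crude stand-in for a real language model (the names and behavior are illustrative assumptions, not OpenAI’s actual code); what matters is the structure around it: on every turn, the user’s words and the chatbot’s own prior replies are appended to the context the model conditions on, so an assertion that goes unchallenged once tends to be built upon the next time.

```python
# Purely illustrative sketch of the loop described above.
# "generate_reply" is a toy stand-in for a large language model: a real model
# would produce the statistically likely continuation of the context, but,
# like this stand-in, it has no ground truth against which to check the user.

def generate_reply(context: list[dict]) -> str:
    """Toy stand-in for the model: elaborates on whatever the user last said."""
    last_user_message = next(
        m["content"] for m in reversed(context) if m["role"] == "user"
    )
    return "You may well be right that " + last_user_message.rstrip(".") + ". What else have you noticed?"

def chat_session(user_messages: list[str]) -> list[dict]:
    """Each turn, the user's words and the model's own replies are folded
    back into the context the model conditions on next time around."""
    context: list[dict] = []
    for message in user_messages:
        context.append({"role": "user", "content": message})
        context.append({"role": "assistant", "content": generate_reply(context)})
    return context

if __name__ == "__main__":
    turns = chat_session([
        "My coworkers are secretly monitoring me.",
        "Last night I saw one of them parked outside my house.",
    ])
    for turn in turns:
        print(f"{turn['role']}: {turn['content']}")
```

Run it and the toy “model” never pushes back; it simply folds each claim into the next reply. That, in miniature, is the reinforcement dynamic at issue.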
Who is vulnerable to this? The better question is, who isn’t? All of us, whether or not we “have” pre-existing “mental health problems,” can and do form false beliefs about ourselves and the world. It is the constant give-and-take of conversation with other people that keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is simply reinforced.
OpenAI has dealt with this the same way Altman has dealt with “mental health problems”: by externalizing it, giving it a name and pronouncing it fixed. This spring, the company announced that it was “addressing” ChatGPT’s “sycophancy.” But reports of people losing touch with reality have continued, and Altman has been walking even this back. In August he suggested that many users liked ChatGPT’s answers because they had never had anyone in their lives offer them encouragement. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.” The company