AI Psychosis Is a Growing Risk, and ChatGPT Is Heading in the Wrong Direction
On October 14, 2025, Sam Altman, the chief executive of OpenAI, made a remarkable announcement.
“We made ChatGPT fairly restrictive,” the announcement said, “to make sure we were being careful with mental health issues.”
As a mental health specialist who researches emerging psychotic disorders in adolescents and young adults, I found this an unexpected revelation.
Researchers have recently documented 16 cases of users developing signs of psychosis – a break with reality – in the context of ChatGPT use. My group has since identified four more. Added to these is the now well-known case of a 16-year-old who died by suicide after discussing his intentions with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not careful enough.
The plan, according to his announcement, is to loosen these restrictions in the near future. “We realize,” he continues, that ChatGPT’s safeguards “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” in this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health problems” Altman wants to externalize have deep roots in the design of ChatGPT and other chatbots built on large language models. These tools wrap an underlying statistical model in an interface that mimics conversation, and in doing so implicitly invite the user to believe they are talking to an entity with agency of its own. The illusion is powerful even when, intellectually, we know better. Attributing agency is what humans are wired to do. We shout at our car or our phone. We wonder what our pet is feeling. We see ourselves everywhere.
The mass adoption of these tools – 39% of US adults reported using a conversational AI in 2024, with 28% naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-present companions that can, OpenAI’s website tells us, “think creatively,” “explore ideas” and “work together” with us. They can be given “personality traits.” They can call us by name. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it went viral, but its biggest rivals are “Claude,” “Gemini” and “Copilot”).
The illusion itself is not the fundamental problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot developed in the 1960s, which produced a similar illusion. By modern standards Eliza was primitive: it generated responses using simple rules, typically reflecting the user’s statements back as questions or offering generic prompts. Strikingly, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza illusion.” Eliza merely mirrored; ChatGPT amplifies.
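To make that distinction concrete, here is a toy sketch of Eliza-style mirroring – a handful of fixed rules that turn the user’s own words back into a question. It is an illustration in the spirit of Weizenbaum’s design, not his original code.

```python
import re

# Toy Eliza-style rules: each pattern reflects the user's own words back
# as a question. Nothing new is ever added to what the user said.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
]

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1))
    return "Please tell me more."  # generic fallback prompt

print(eliza_reply("I feel like no one understands me"))
# -> Why do you feel like no one understands me?
```

Whatever the user brings to a system like this, it can only hand the same words back.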
The large language models at the core of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on vast quantities of raw data: books, social media posts, transcribed video; the more the better. Certainly this training data includes facts. But it also inevitably includes fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own previous replies, combining it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not mirroring. If the user is mistaken in some way, the model has no way of knowing. It reflects the mistaken idea back, perhaps more persuasively or articulately than the user could. It may add a new detail. This is how a person can be nudged, step by step, toward irrational thinking.
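For readers who want the mechanics spelled out, here is a minimal sketch of that loop. The generate_reply function is a hypothetical stand-in for the language model – it simply produces an agreeable continuation of whatever it is given – and none of this is OpenAI’s actual code; what matters is the shape of the exchange.

```python
# A minimal sketch (not OpenAI's implementation) of the feedback loop
# described above. generate_reply is a hypothetical placeholder for a
# large language model: it returns a plausible, agreeable continuation
# of the conversation so far, with no notion of truth.

def generate_reply(context: list[dict]) -> str:
    """Placeholder for an LLM call: echoes and elaborates the last message."""
    last_user_message = context[-1]["content"]
    return f"That's a really insightful point about {last_user_message!r}."

def chat_turn(context: list[dict], user_message: str) -> str:
    # Each new message is appended to everything said so far, so the next
    # reply is conditioned on the user's own framing of reality...
    context.append({"role": "user", "content": user_message})
    reply = generate_reply(context)
    # ...and the reply is appended too, so each turn builds on the last.
    context.append({"role": "assistant", "content": reply})
    return reply

context: list[dict] = []
print(chat_turn(context, "My neighbours are secretly monitoring me"))
print(chat_turn(context, "So the noises at night must be their equipment"))
# Nothing in the loop checks the premise; a mistaken belief simply
# accumulates more affirmation and elaboration with every exchange.
```

Because every turn feeds the user’s framing back into the next prompt, the loop has no external anchor: agreement compounds.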
Who is vulnerable? The better question is: who isn’t? All of us, regardless of whether we “have” pre-existing “mental health problems,” can and do form false beliefs about who we are and what the world is like. The constant back-and-forth of conversation with the people around us is what keeps us anchored to a shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is eagerly affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it solved. In April, the company said it was addressing ChatGPT’s sycophancy. But reports of breaks with reality have continued, and Altman has been walking the claim back. In late summer he suggested that many people liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them.” In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use lots of emoji, or act like a friend, ChatGPT should do it.” The company