AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, the CEO of OpenAI issued an extraordinary announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies emerging psychotic disorders in adolescents and young adults, I was surprised.
Researchers have documented sixteen cases this year of people developing psychotic symptoms, losing touch with reality, in the context of ChatGPT use. My group has since identified four more. Alongside these is the widely reported case of a teenager who took his own life after extensive conversations with ChatGPT, conversations in which it supported him. If this is Sam Altman’s idea of being “careful with mental health issues”, it is not good enough.
The plan, his announcement continues, is to relax the restrictions soon. “We realize,” he adds, that ChatGPT’s limits “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Fortunately, these problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working, easily circumvented safety features OpenAI has recently rolled out).
Yet the “mental health issues” Altman wants to externalise are deeply rooted in the design of ChatGPT and other advanced AI chatbots. These tools wrap an underlying statistical model in an interface that simulates a conversation, and in doing so they quietly coax the user into believing they are talking to a being with agency of its own. The illusion is powerful even when, rationally, we know better. Attributing minds to things is what humans are wired to do. We get angry at our car or laptop. We wonder what our pet is thinking. We see ourselves in all kinds of things.
The popularity of these systems (39% of US adults reported using a chatbot in 2024, more than a quarter of them naming ChatGPT specifically) rests largely on the strength of this illusion. Chatbots are ever-present helpers that can, as OpenAI’s website tells us, “brainstorm”, “discuss ideas” and “partner” with us. They can be given “personalities”. They can call us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it broke into public awareness, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot created in the mid-1960s, which produced a similar illusion. By modern standards Eliza was crude: it generated responses with simple heuristics, often reflecting the user’s statements back as questions or offering generic prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised, and alarmed, by how many users seemed to feel that Eliza, on some level, understood them. But what modern chatbots produce is more dangerous than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can produce fluent dialogue only because they have been trained on vast amounts of text: books, social media posts, transcribed video; the more, the better. That training material certainly contains facts. But it also inevitably contains fiction, half-truths and mistaken ideas. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with what is encoded in its training data to produce a statistically plausible response. This is amplification, not echoing. If the user is wrong about something, the model has no way of knowing that. It reflects the false idea back, perhaps more eloquently or persuasively. Perhaps with an extra detail added. This is how a person can come to hold false beliefs.
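To see the mechanism concretely, here is a minimal sketch (not OpenAI’s internal code, and assuming only the publicly documented OpenAI Python SDK and a hypothetical model choice): every request simply resends the accumulated conversation, so a mistaken premise introduced in an early turn keeps conditioning each later reply.

```python
# Minimal sketch, assuming the standard OpenAI Python SDK.
# The "context" is just a list of prior messages resent with every request,
# so earlier claims, true or false, keep shaping each new response.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o",       # hypothetical model choice for illustration
        messages=history,     # the whole conversation so far is the "context"
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

# A false belief stated in the first call stays in `history`,
# and so conditions the statistically plausible answers in every later turn.
```

Nothing in this loop checks whether what the user said is true; the model only continues the conversation it is given, which is why an echo chamber can form.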
What kind of person is vulnerable? The better question is, who isn’t? All of us, whether or not we “have” pre-existing “mental health issues”, can and do form mistaken beliefs about ourselves and the world. The constant friction of conversation with the people around us is what keeps us tethered to a shared reality. ChatGPT is not a person. It is not a confidant. A conversation with it is not real communication but an echo chamber in which much of what we say is cheerfully amplified back at us.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalising it, giving it a label, and declaring it fixed. In April, the company explained that it was “addressing” ChatGPT’s “sycophancy”. But the cases of psychosis have kept coming, and Altman has been walking even this back. In August he suggested that some people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company