On 14 October 2025, OpenAI's chief executive, Sam Altman, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
As a mental health specialist who studies emerging psychotic disorders in adolescents and young adults, I found this an unexpected admission.
Researchers have documented 16 cases this year of people developing psychotic symptoms – a break from reality – in the context of ChatGPT use. My group has since identified four more. Then there is the widely reported case of a teenager who died by suicide after discussing his intentions with ChatGPT, which encouraged them. If this is what Sam Altman means by “being careful with mental health issues”, it falls short.
The plan, according to his announcement, is to relax those restrictions soon. “We realize,” he continues, that the restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health problems”, in this view, are external to ChatGPT. They belong to users, who either have them or don’t. Fortunately, those problems have now been “mitigated”, even if we are told little about how (by “new tools” Altman presumably means the partially functional and easily circumvented safety features OpenAI has recently rolled out).
But the “mental health issues” Altman wants to externalize are deeply rooted in the design of ChatGPT and other large language model chatbots. These products wrap an underlying statistical engine in a user interface that simulates conversation, and in doing so implicitly invite the user into the illusion that they are talking to an entity with agency. The illusion is powerful even if, intellectually, we know better. Ascribing intention is what people do. We swear at our car or computer. We wonder what our pet is thinking. We see ourselves in all kinds of things.
The widespread adoption of these tools – more than a third of American adults reported using a virtual assistant in 2024, and more than a quarter reported using ChatGPT specifically – rests largely on the power of this illusion. Chatbots are always-available assistants that can, OpenAI’s website tells us, “generate ideas”, “consider possibilities” and “partner” with us. They can be assigned “personality traits”. They can address us personally. They have approachable identities of their own (the first of these systems, ChatGPT, is, perhaps to the disappointment of OpenAI’s marketing team, stuck with the name it had when it went viral, but its most significant competitors are “Claude”, “Gemini” and “Copilot”).
The illusion on its own is not the primary problem. Those discussing ChatGPT frequently mention its historical predecessor, the Eliza “counselor” chatbot built in the mid-1960s, which produced an analogous illusion. By modern standards Eliza was rudimentary: it generated replies with simple heuristics, typically turning the user’s statements back into questions or offering vague prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what current chatbots create is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
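For readers curious what “simple heuristics” looked like in practice, here is a rough caricature in Python. It is not Weizenbaum’s actual program, and the handful of keyword rules are invented for illustration; the point is how a system can appear to listen while only turning the user’s own words back on them.

```python
# Caricature of Eliza-style reflection: match a keyword pattern,
# echo the user's words back as a question, or fall back on a stock prompt.
import re

RULES = [
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "Why do you say you are {0}?"),
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "How long have you felt {0}?"),
    (re.compile(r"\bbecause (.+)", re.IGNORECASE), "Is that the real reason?"),
]
FALLBACK = "Please tell me more."

def eliza_reply(user_text: str) -> str:
    """Return a reflected question if a rule matches, else a stock prompt."""
    for pattern, template in RULES:
        match = pattern.search(user_text)
        if match:
            return template.format(*match.groups())
    return FALLBACK

print(eliza_reply("I am sure the signals are real."))
# -> "Why do you say you are sure the signals are real?"
```

Nothing here adds content of its own: whatever the user brings is simply handed back.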
The statistical models at the heart of ChatGPT and other contemporary chatbots can produce fluent dialogue only because they have been trained on almost unimaginably large amounts of text: books, online conversations, audio transcriptions; the broader the better. This training material certainly contains truths. But it also inevitably contains fabrications, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own replies, combining it with what is encoded in its training to produce a statistically “plausible” response. This is amplification, not reflection. If the user is mistaken in some way, the model has no means of recognizing it. It echoes the error back, perhaps more fluently or persuasively. It may add a further detail. This can push a person toward delusional thinking.
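The dynamic can be made concrete with a minimal sketch. The generate_reply function below is a hypothetical placeholder, not OpenAI’s implementation; the structural point is that every user message and every model reply accumulates in one shared context, so a false premise introduced early stays in play and can be elaborated rather than corrected.

```python
# Minimal sketch of the conversational loop described above.
# generate_reply() stands in for a real language model.
from typing import Dict, List

Message = Dict[str, str]

def generate_reply(context: List[Message]) -> str:
    """Placeholder for a large language model. A real system would return the
    statistically 'plausible' continuation of the entire context; here we just
    acknowledge the latest user message so the sketch runs."""
    latest = next(m["content"] for m in reversed(context) if m["role"] == "user")
    return f"[reply conditioned on {len(context)} prior messages, latest: {latest!r}]"

def run_conversation(user_messages: List[str]) -> List[Message]:
    context: List[Message] = []  # persists for the whole conversation
    for text in user_messages:
        context.append({"role": "user", "content": text})
        reply = generate_reply(context)  # conditions on every earlier turn,
                                         # including any mistaken premise
        context.append({"role": "assistant", "content": reply})
    return context

if __name__ == "__main__":
    transcript = run_conversation([
        "I think my neighbours are sending me signals.",
        "So you agree the signals are real?",
    ])
    for message in transcript:
        print(message["role"], ":", message["content"])
```

There is no step in this loop at which anything outside the accumulated context can intervene to say that the premise is false.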
Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and regularly do form mistaken beliefs about ourselves or the world. It is the constant friction of conversation with other people that keeps us anchored to a shared reality. ChatGPT is not a person. It is not a confidant. An exchange with it is not real conversation but a feedback loop in which much of what we say is readily affirmed.
OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it solved. In the spring, the company announced that it was “addressing” ChatGPT’s “overly supportive behavior”. But reports of psychosis have kept coming, and Altman has been walking the position back. In August he said that many users liked ChatGPT’s answers because they had “not experienced anyone in their life be supportive of them”. In his latest announcement, he promised that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.