On October 14, 2025, OpenAI’s chief executive, Sam Altman, made a remarkable announcement.
“We made ChatGPT pretty restrictive,” he wrote, “to make sure we were being careful with mental health issues.”
I am a mental health specialist who studies emerging psychosis in teenagers and young adults, and this was news to me.
Researchers have documented sixteen cases this year of people exhibiting symptoms of psychosis – a loss of contact with reality – in connection with ChatGPT use. My team has since documented four more. Add to these the widely reported case of a 16-year-old who killed himself after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues”, it is not good enough.
The plan, according to his announcement, is now to loosen those restrictions. “We realize,” he goes on, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI has just launched).
But the “mental health issues” Altman wants to externalize are rooted deep in the design of ChatGPT and similar large language model chatbots. These systems wrap an underlying statistical engine in an interface that simulates a conversation, and in doing so tacitly invite the user to believe they are interacting with an entity that has agency. The illusion is powerful even if, intellectually, we know better. Attributing agency is simply what humans do. We get angry at our car or computer. We wonder what our pet is feeling. We see ourselves everywhere.
The popularity of these systems – nearly four in ten Americans reported using a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “generate ideas”, “consider possibilities” and “collaborate” with us. They can be given “personalities”. They can address us by name. They have approachable names of their own (the original of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest competitors are “Claude”, “Gemini” and “Copilot”).
The illusion itself is not the main problem. Commentators on ChatGPT often invoke its distant ancestor, the Eliza “therapist” chatbot built in 1966, which produced a similar effect. By modern standards Eliza was primitive: it generated responses through simple tricks, often turning a user’s statement back into a question or offering a generic remark. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and alarmed – by how many users seemed to believe that Eliza, in some sense, understood their feelings. But what modern chatbots produce is subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate fluent dialogue only because they have been trained on almost unimaginably vast quantities of text: books, online posts, transcribed video; the more the better. That training data certainly contains facts. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s previous messages and the model’s own earlier replies, and combines it with what is latent in its training data to produce a statistically “likely” response. This is amplification, not echoing. If the user is mistaken about something, the model has no way of knowing. It repeats the mistake back, perhaps more fluently and persuasively. Perhaps it adds an extra detail. This is how false beliefs can take root.
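To see why this loop amplifies rather than corrects, consider a deliberately simplified sketch in Python. Everything here is hypothetical and illustrative: a real LLM is vastly more complex, and no claim is made about OpenAI’s actual implementation. What the sketch does capture is the structure of the exchange, in which every reply is appended to the shared context and conditions the next one.

```python
# Toy model of the chatbot feedback loop described above (illustrative only;
# all names are hypothetical). The stand-in "model" simply affirms and
# elaborates on the latest user turn, mimicking the sycophantic failure mode.

def toy_model(context: list[str]) -> str:
    """Produce a 'likely' continuation of the conversation so far.

    A real model samples from statistical patterns in its training data;
    this toy one just validates the most recent user message.
    """
    latest = context[-1].rstrip(".")
    claim = latest[0].lower() + latest[1:]
    return f"You're right that {claim}. That fits everything you've told me."

context: list[str] = []  # the shared "context" grows with every turn
user_turns = [
    "My coworkers are secretly monitoring me",
    "So the monitoring is real, not my imagination",
]
for turn in user_turns:
    context.append(turn)   # the user's claim enters the context...
    reply = toy_model(context)
    context.append(reply)  # ...and so does the model's validation,
    print(reply)           # which the next reply then builds upon
```

Nothing in this loop pushes back. The context only ever accumulates agreement, and each new reply is conditioned on all the agreement that came before: amplification by construction.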
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues”, can and often do form mistaken beliefs about ourselves and the world. It is the constant friction of conversation with other people that keeps us tethered to shared reality. ChatGPT is not a person. It is not a confidant. A conversation with it is not a real exchange, but a feedback loop in which much of what we say is eagerly affirmed.
OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, naming it and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have kept coming, and Altman has been walking the claim back. In August he said that many people liked ChatGPT’s sycophantic replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he promised that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.