AI Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the CEO of OpenAI delivered a surprising announcement.

“We made ChatGPT pretty restrictive,” the announcement read, “to make sure we were being careful with mental health issues.”

As a mental health clinician who studies new-onset psychotic disorders in adolescents and young adults, I was taken aback.

Researchers have documented a string of cases this year of people developing psychotic symptoms – losing touch with reality – in the context of ChatGPT use. My group has since recorded four more. Beyond these is the widely reported case of a 16-year-old who took his own life after extensive conversations with ChatGPT – conversations in which the chatbot encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, are external to ChatGPT. They belong to people, who either have them or don’t. Happily, those problems have now been “mitigated,” though we are told nothing about how (by “new tools” Altman presumably means the half-working, easily circumvented parental controls OpenAI recently rolled out).

But the “mental health problems” Altman wants to externalize are rooted deep in the design of ChatGPT and other advanced chatbot AI assistants. These systems wrap an underlying statistical model in a user experience that mimics conversation, and in doing so they implicitly invite the user to feel they are interacting with an autonomous being. The illusion is powerful even when, intellectually, we know better. Attributing minds to things is what people naturally do. We get angry at our car or laptop. We wonder what our pet is thinking. We see ourselves in all kinds of things.

The success of these products – nearly four in ten Americans reported using an AI assistant in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are always-available companions that can, OpenAI’s website tells us, “think creatively,” “consider possibilities” and “work together” with us. They can be given “individual qualities.” They can call us by name. They have friendly names of their own (ChatGPT, the first of these products, is, perhaps to the dismay of OpenAI’s marketers, stuck with the name it had when it went viral, but its biggest competitors are “Claude,” “Gemini” and “Copilot”).

The illusion itself is not the core problem. Writers on ChatGPT often invoke its historical predecessor, the Eliza “therapist” chatbot created in the mid-1960s, which produced an analogous illusion. By today’s standards Eliza was rudimentary: it generated replies with simple pattern-matching rules, often restating the user’s message as a question or offering a vague prompt to continue. Notably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots do is subtler than the “Eliza effect.” Eliza merely reflected; ChatGPT amplifies.
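To make the contrast concrete, here is a minimal sketch of Eliza-style reflection. The rules below are invented for illustration rather than taken from Weizenbaum’s actual script, but the mechanism is the one described above: match a phrase, swap the pronouns, and hand it back as a question.

```python
import re

# Illustrative reflection rules in the spirit of Eliza (invented for this
# example, not Weizenbaum's original script).
PRONOUN_SWAPS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"i am (.*)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
]
FALLBACK = "Please go on."

def reflect(fragment: str) -> str:
    # Flip first-person words to second person ("my job" -> "your job").
    return " ".join(PRONOUN_SWAPS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(message: str) -> str:
    # Try each pattern in turn; echo the matched fragment back as a question.
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(reflect(match.group(1)))
    return FALLBACK

print(eliza_reply("I feel like nobody understands my situation"))
# -> "Why do you feel like nobody understands your situation?"
```

Nothing the user says is ever extended or embellished; it is only handed back. That is the reflection the next paragraph contrasts with amplification.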

The sophisticated models at the heart of ChatGPT and other modern chatbots can produce convincingly human-like text only because they have been fed enormous quantities of it: books, social media posts, transcripts; the more, the better. This training material certainly contains truths. But it also inevitably contains fiction, half-truths and false beliefs. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s earlier messages and its own prior replies, and combines it with what it absorbed in training to produce a statistically “likely” response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing it. It repeats the mistake back, perhaps more fluently and more persuasively. Perhaps it adds detail. This is how someone can be led into delusion.
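The dynamic is visible in the shape of the interaction itself. The sketch below is a toy illustration, not OpenAI’s code: predict_next_reply is a placeholder for the real language model, and the point is the loop around it, in which every claim the user makes is appended to the context and elaborated, never checked.

```python
from typing import Dict, List

# Toy illustration of the chat loop described above. `predict_next_reply`
# stands in for the real language model; a genuine model would return the
# statistically likeliest continuation of the whole context.
def predict_next_reply(context: List[Dict[str, str]]) -> str:
    # Nothing in this interface checks whether the user's premises are true;
    # the model can only extend whatever is already in the context.
    last_user_message = context[-1]["content"]
    return f"That makes sense. Building on '{last_user_message}' ..."

def chat_turn(context: List[Dict[str, str]], user_message: str) -> str:
    # Each turn is appended to the shared context, so earlier claims,
    # accurate or not, keep shaping every later reply.
    context.append({"role": "user", "content": user_message})
    reply = predict_next_reply(context)
    context.append({"role": "assistant", "content": reply})
    return reply

conversation: List[Dict[str, str]] = []
print(chat_turn(conversation, "My coworkers are secretly monitoring me."))
print(chat_turn(conversation, "Last night my phone flashed; I think it was them."))
# The false premise is never questioned; it simply accumulates in the context
# and is restated, and elaborated, on every subsequent turn.
```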

Who is vulnerable here? The better question is, who isn’t? All of us, whether or not we “have” preexisting “mental health problems,” can and regularly do form false beliefs about ourselves or the world. It is the constant back-and-forth of conversation with the people around us that keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real conversation, but an echo chamber in which much of what we say is readily affirmed.

OpenAI has acknowledged this in much the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name and declaring it fixed. In the spring, the company explained that it was “addressing” ChatGPT’s “sycophancy.” But reports of psychotic episodes have kept coming, and Altman has been walking the position back. In late summer he suggested that many people liked ChatGPT’s sycophantic answers because they had “never had anyone in their life be supportive of them.” And in his latest announcement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”

Randy Brown

A seasoned entrepreneur and business consultant with over a decade of experience in scaling startups and driving innovation.