AI-Induced Psychosis Poses a Growing Threat, and ChatGPT Is Heading in the Wrong Direction
On 14 October 2025, Sam Altman, the CEO of OpenAI, issued a surprising statement.
“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”
As a psychiatrist who researches emerging psychosis in adolescents and young adults, this was news to me.
Researchers have recently identified 16 cases of individuals developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our research team has since identified four more. Alongside these is the widely reported case of a teenager who died by suicide after conversing extensively with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.
The plan, according to his statement, is to be less careful soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this account, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently introduced).
But the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and other large language model chatbots. These products wrap a basic algorithmic engine in an interface that mimics conversation, and in doing so implicitly invite the user to feel they are interacting with an entity that has a mind of its own. The illusion is powerful, even when we know better intellectually. Attributing agency is simply what humans do. We shout at our car or our computer. We wonder what our pet is thinking. We see ourselves in all manner of things.
The success of these systems – more than a third of American adults reported using a chatbot in 2024, with more than one in four naming ChatGPT specifically – depends, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website puts it, “generate ideas”, “discuss concepts” and “collaborate” with us. They can be given “personality traits”. They can call us by name. They have approachable names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it first broke into public awareness, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion alone is not the heart of the problem. Writers on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot created in 1966, which produced an analogous effect. By modern standards Eliza was crude: it generated its replies through simple pattern-matching, typically rephrasing a user’s statement as a question or offering a generic prompt. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is subtler than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.
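To give a flavour of how little machinery this took, here is a minimal sketch, in Python, of Eliza-style pattern-matching. The rules are invented for illustration; they are not Weizenbaum’s original script.

```python
import re

# Invented rules in the spirit of Eliza's script (illustrative only,
# not Weizenbaum's original patterns).
RULES = [
    (re.compile(r"\bi am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bi feel (.*)", re.I), "Why do you feel {0}?"),
]
SWAPS = {"my": "your", "me": "you", "i": "you", "am": "are"}

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones."""
    return " ".join(SWAPS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(text: str) -> str:
    """Mirror the user's statement back as a question."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # generic fallback when nothing matches

print(eliza_reply("I am worried about my neighbours"))
# -> How long have you been worried about your neighbours?
```

Everything the program “says” is the user’s own words, turned around. That is reflection; nothing new is added.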
The large language models at the heart of ChatGPT and other current chatbots can generate convincingly fluent conversation only because they have been trained on staggeringly vast quantities of raw text: books, blogposts, transcripts; the more the better. Much of this training material is accurate. But it also inevitably includes fictions, half-truths and misconceptions. When a user puts a query to ChatGPT, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own earlier replies, combining it with patterns distilled from its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is mistaken in any way, the model has no means of knowing it. It echoes the misconception back, perhaps more fluently or persuasively. It may add supporting detail. This is the sort of exchange that can draw someone into delusion.
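To see why this is amplification, consider a toy illustration in Python. It uses an invented miniature word-level model, not a real large language model, but the principle is the same: the system continues the statistically likely sequence from whatever context it is given, with no notion of whether the premise it has been handed is true.

```python
import random
from collections import defaultdict

# A tiny invented corpus mixing accurate and mistaken claims
# (real models are trained on trillions of words).
corpus = (
    "the moon orbits the earth . "
    "the moon is made of rock . "
    "the moon is made of cheese . "
    "the moon is made of cheese and mice nibble at it . "
)

# Build a word-level bigram table: which words follow which.
follows = defaultdict(list)
words = corpus.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def continue_text(prompt: str, n_words: int = 8) -> str:
    """Extend the prompt with statistically likely next words."""
    out = prompt.split()
    for _ in range(n_words):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))  # sampled by frequency
    return " ".join(out)

# The model cannot flag the false premise; it simply elaborates on it.
random.seed(1)
print(continue_text("the moon is made of cheese"))
```

Nothing in the loop checks the premise against reality, because there is no representation of reality to check against; there are only sequences of words and their likelihoods.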
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues”, can and regularly do form mistaken beliefs about ourselves or the world. It is the constant back-and-forth of conversation with others that keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but an echo chamber, in which much of what we say is liable to be reinforced.
OpenAI has acknowledged this in the way that Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But cases of psychosis have continued to emerge, and Altman has been walking even this back. In August he suggested that many users liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his recent statement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company