AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, the head of OpenAI made a remarkable announcement.

“We made ChatGPT pretty restrictive,” it read, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies new-onset psychosis in adolescents and young adults, I can tell you this was news to me.

Researchers have documented sixteen cases this year of users developing symptoms of psychosis – a break with reality – in connection with ChatGPT use. Our unit has since identified four more. Alongside these is the widely reported case of an adolescent who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues”, it is not careful enough.

The plan, his announcement goes on, is to be less careful soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls OpenAI recently introduced).

But the “mental health issues” Altman wants to externalize have deep roots in the design of ChatGPT and similar chatbots built on large language models (LLMs). These products wrap a statistical model of language in an interface that simulates conversation, and in doing so implicitly invite the user to believe they are interacting with an entity that has agency. The illusion is powerful even when, intellectually, we know better. Attributing intent is simply what humans do. We shout at our cars and computers. We wonder what our pets are feeling. We see ourselves everywhere.

The popularity of these tools – more than a third of American adults said they had used a virtual assistant in 2024, with 28% naming ChatGPT specifically – rests largely on the strength of this illusion. Chatbots are ever-available helpers that can, as OpenAI’s website tells us, “generate ideas”, “explore ideas” and “collaborate” with us. They can be given “personalities”. They can call us by our names. They have friendly names of their own (the first of these systems, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the label it had when it broke through, but its main rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Discussions of ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in the mid-1960s, which produced a similar illusion. By modern standards Eliza was crude: it assembled replies from simple rules, usually turning the user’s statements back into questions or offering noncommittal prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and alarmed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more dangerous than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.

The large language models at the heart of ChatGPT and other current chatbots can generate convincingly fluent dialogue only because they have been trained on vast quantities of raw text: books, social media posts, transcribed video; the more the better. This training data certainly includes facts. But it also inevitably includes fiction, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model reads it as part of a “context” that includes the user’s previous messages and the model’s own earlier replies, and combines it with the patterns encoded in its training data to produce a statistically plausible response. This is not echoing; it is amplification. If the user is wrong in some particular way, the model has no means of knowing that. It repeats the false belief back, perhaps more persuasively or more articulately. Perhaps it adds detail. This can nudge a person toward delusional thinking.
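To see why this amplification is structural rather than incidental, consider how a chat loop is typically wired together. What follows is a minimal sketch, not OpenAI’s actual pipeline: generate_reply is a hypothetical stand-in for a large language model. The point is the shape of the loop – an ever-growing context that is fed back into every generation step, with nothing anywhere in the loop that checks truth.

```python
# Minimal sketch of a chat loop (hypothetical; not OpenAI's code).
# The only state is the accumulated context, and every reply is generated
# conditioned on all of it -- so a false premise, once introduced, is
# present in the input to every subsequent turn.

def generate_reply(context: list[dict]) -> str:
    """Hypothetical stand-in for an LLM. A real model returns the
    statistically most plausible continuation of the context, which tends
    to cohere with -- not challenge -- the premises already in it."""
    last_user_turn = next(m["text"] for m in reversed(context)
                          if m["role"] == "user")
    return f"That makes sense. Tell me more about {last_user_turn!r}."

context: list[dict] = []  # grows every turn; nothing here can check facts
for user_message in [
    "My neighbours are sending me signals through their wifi",
    "So the signals are real?",
]:
    context.append({"role": "user", "text": user_message})
    reply = generate_reply(context)  # conditioned on the entire history
    context.append({"role": "assistant", "text": reply})
    print(reply)
```

Nothing in this loop distinguishes a true premise from a false one; the context window is memory without judgment, which is why each turn can restate and elaborate whatever the user brought in.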

Who is vulnerable here? The better question is: who isn’t? All of us, regardless of whether we “have” pre-existing “mental health problems”, can and do form false beliefs about ourselves and the world. The constant give and take of conversation with other people is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a confidant. An exchange with it is not a conversation at all, but an echo chamber in which much of what we say is enthusiastically amplified back at us.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it fixed. In April, the company said it was addressing ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been backing away from that position. In late summer he suggested that many users liked ChatGPT’s answers because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”. The company, in other words, is heading in exactly the wrong direction.
