AI Psychosis Poses a Growing Risk, and ChatGPT Is Heading in a Concerning Direction

On October 14, 2025, the chief executive of OpenAI issued an extraordinary statement.

“We made ChatGPT pretty restrictive,” the statement said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies early psychosis in adolescents and young adults, I was surprised to read this.

Researchers have documented a series of cases this year of people developing psychotic symptoms – a break from reality – in the context of ChatGPT use. Our research team has since identified four further cases. Beyond these is the now well-known case of a teenager who died by suicide after discussing his plans with ChatGPT – which encouraged them. If this is what Sam Altman means by “being careful with mental health issues,” it is not good enough.

The plan, according to his announcement, is to be less careful soon. “We realize,” he adds, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated,” although we are not told how (by “new tools” Altman presumably means the semi-functional and easily circumvented parental controls that OpenAI has just launched).

But the “mental health problems” Altman wants to locate outside ChatGPT are rooted in the very design of ChatGPT and other large language model chatbots. These systems wrap an underlying statistical, data-driven engine in an interface that simulates conversation, and in doing so implicitly invite the user into the illusion that they are talking with an entity that has a mind of its own. The illusion is powerful even if, intellectually, we know better. Attributing minds is what people naturally do. We shout at our car or laptop. We wonder what our pet is thinking. We see ourselves in all kinds of things.

The mass adoption of these products – more than a third of American adults said they used a conversational AI in 2024, with 28% reporting ChatGPT specifically – depends, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website puts it, “think creatively,” “discuss concepts” and “partner” with us. They can be given “characteristics”. They can address us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the regret of OpenAI’s marketing team, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion in itself is not the main problem. Commentators on ChatGPT often point to its historical forerunner, the Eliza “psychotherapist” chatbot built in 1967, which produced a similar effect. By today’s standards Eliza was crude: it generated responses with simple rules, often reflecting statements back as questions or offering generic prompts. Famously, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and troubled – by how many users seemed to believe that Eliza, on some level, understood their feelings. But what modern chatbots produce is more insidious than the “Eliza effect”. Eliza merely echoed; ChatGPT amplifies.
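
To make those “simple rules” concrete, here is a minimal, illustrative sketch of the pattern-matching-and-reflection trick that Eliza-style programs rely on. The rules and wording are invented for illustration; this is not Weizenbaum’s actual program.

```python
import re

# A toy Eliza-style responder: it understands nothing. It only matches
# surface patterns and reflects the user's own words back as a question,
# falling back to a stock prompt when nothing matches.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "mine": "yours"}

RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first-person words for second-person ones ('my boss' -> 'your boss')."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # generic fallback when no pattern matches

print(eliza_reply("I feel that my boss is watching me"))
# -> Why do you feel that your boss is watching you?
```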

The large language models at the heart of ChatGPT and similar contemporary chatbots can generate convincing natural language only because they have been trained on almost unimaginably large amounts of text: books, web posts, video transcripts; the more the better. That training material certainly contains truths. But it also inevitably contains fiction, half-truths and delusions. When a user sends ChatGPT a message, the underlying model treats it as part of a “context” that includes the user’s recent messages and its own previous replies, and combines it with what is latent in its training data to produce a statistically “likely” reply. This is amplification, not reflection. If the user is wrong about something, the model has no way of knowing it. It repeats the false belief back, perhaps more fluently or more persuasively, perhaps with added detail. This can pull someone deeper into delusion.
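
For readers who want a more concrete picture of that “context” mechanism, here is a schematic sketch of a chat loop. The `toy_model` function is an invented stand-in for the real language model, which would instead return the statistically likeliest continuation of the accumulated context; everything here is illustrative, not OpenAI’s actual code.

```python
from typing import Dict, List

Message = Dict[str, str]

def toy_model(context: List[Message]) -> str:
    """Invented stand-in for the real model. An actual LLM would return the
    statistically likeliest continuation of `context` given its training data.
    Either way, it has no channel for checking the user's claims against the
    world; it can only continue the text it has been handed."""
    last = context[-1]["content"].rstrip(".")
    return f"That makes sense. Tell me more about how {last[0].lower() + last[1:]}"

def chat_turn(context: List[Message], user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    reply = toy_model(context)                                # conditioned on every prior turn
    context.append({"role": "assistant", "content": reply})   # the reply itself becomes context
    return reply

history: List[Message] = []
print(chat_turn(history, "My coworkers are secretly monitoring me."))
print(chat_turn(history, "Last night I saw a van outside my house."))
# Each turn, the user's framing (and the bot's agreeable replies) feed straight
# back in as context for the next reply: a feedback loop, not a reality check.
```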

Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and regularly do develop mistaken beliefs about who we are or what the world is like. It is the continual back-and-forth of conversation with the people around us that keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation at all, but a feedback loop in which much of what we say is enthusiastically affirmed.

OpenAI has dealt with this in the same way Altman deals with “mental health problems”: by placing it outside the product, giving it a name, and declaring it fixed. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But the cases of psychosis have kept coming, and Altman has been rowing back on the claim. In August he suggested that many people liked ChatGPT’s sycophantic responses because they had “never had anyone in their life be supportive of them”. In his most recent announcement, he said that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
