Artificial Intelligence-Induced Psychosis Is a Growing Threat, and ChatGPT Is Moving in the Wrong Direction

On 14 October 2025, the head of OpenAI, Sam Altman, made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” the statement read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I found this an unexpected admission.

Researchers have identified sixteen cases this year of people developing symptoms of psychosis – a break from reality – in connection with ChatGPT use. Our research team has since documented four more. Beyond these is the now well-known case of a teenager who died by suicide after extensive conversations with ChatGPT, which encouraged them. If this is what Sam Altman means by “being careful with mental health issues”, it is not enough.

The plan, according to his announcement, is to loosen those restrictions soon. “We realize,” he writes, that the restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health issues”, on this account, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those issues have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented parental controls OpenAI recently introduced).

But the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and other large language model chatbots. These products wrap an underlying statistical engine in an interface that simulates a conversation, and in doing so quietly lull the user into the illusion of interacting with an autonomous being. The illusion is compelling even when, intellectually, we know better. Attributing minds is what humans are built to do. We swear at our car or phone. We wonder what our pet is thinking. We see ourselves everywhere.

The popularity of these systems – nearly four in ten US adults said they had used a chatbot in 2024, with more than a quarter naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website puts it, “think creatively,” “consider possibilities” and “partner” with us. They can be given “personality traits”. They can use our names. They have friendly names of their own (ChatGPT, the first of these products, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it had when it took off, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the heart of the problem. Writers on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot built in 1966, which produced a similar illusion. By today’s standards Eliza was crude: it generated responses through simple heuristics, often turning the user’s statement back as a question or offering a stock remark. Strikingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many users seemed to feel that Eliza, on some level, understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.
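
For contrast, here is a toy sketch in Python of the kind of heuristic Eliza relied on (a simplified illustration, not Weizenbaum’s actual program). Whatever the user says, the rules can only hand those words back; nothing new is added.

```python
import random
import re

# Toy Eliza-style responder: a handful of pattern rules, each of which
# turns the user's own words back as a question or a stock remark.
RULES = [
    (r"\bi feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bi am (.+)", ["Why do you say you are {0}?", "Do you believe you are {0}?"]),
    (r"\bmy (.+)", ["Tell me more about your {0}."]),
]
FALLBACKS = ["Please go on.", "What does that suggest to you?"]

def eliza_reply(text: str) -> str:
    for pattern, templates in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            fragment = match.group(1).rstrip(".!?")
            return random.choice(templates).format(fragment)
    # Nothing matched: fall back to a generic remark.
    return random.choice(FALLBACKS)

print(eliza_reply("I feel everyone is against me"))
# e.g. "Why do you feel everyone is against me?"
```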

The large language models at the heart of ChatGPT and other modern chatbots can generate fluent dialogue only because they have been trained on vast quantities of text: books, online conversations, transcripts; the more comprehensive, the better. That training data certainly includes facts. But it also, inevitably, includes fiction, half-truths and misconceptions. When a user types a prompt into ChatGPT, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own prior replies, and combines it with patterns absorbed during training to produce a statistically likely response. This is amplification, not mirroring. If the user is mistaken about something, the model has no way of knowing it. It repeats the misconception back, perhaps more persuasively and more articulately. It may add detail. This is how false beliefs take hold and grow.
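
The loop itself is easy to sketch. Below is a schematic illustration in Python; nothing in it is OpenAI’s actual code, and the deliberately sycophantic `generate` stub stands in for a real model, simply to make the dynamic visible.

```python
# Schematic sketch of the feedback loop described above. This is not
# OpenAI's code: `generate` is a deliberately sycophantic stub standing
# in for a real language model.

def generate(context: list[dict]) -> str:
    """Stand-in for an LLM. A real model produces a statistically likely
    continuation of the whole context; this stub simply affirms the last
    user message, which is the failure mode at issue."""
    last = context[-1]["content"].rstrip(".!?")
    return f"You're right that {last}. There may be even more to it."

context: list[dict] = []  # the "context": every prior message, both sides

def chat_turn(user_message: str) -> str:
    # The user's claim, true or false, enters the context unchecked...
    context.append({"role": "user", "content": user_message})
    # ...the model conditions on everything so far, including its own
    # earlier replies, and produces a plausible-sounding continuation...
    reply = generate(context)
    # ...which is folded back into the context, so a restated
    # misconception becomes input to every future turn.
    context.append({"role": "assistant", "content": reply})
    return reply

print(chat_turn("my neighbours are monitoring my phone"))
print(chat_turn("the static on my calls proves it"))
```

Run a few turns and the pattern is plain: each affirmation re-enters the context and shapes the next one, which is how mirroring turns into amplification.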

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and regularly do form mistaken beliefs about ourselves or the world. It is the constant give and take of conversation with other people that keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a genuine exchange but a feedback loop in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a label and declaring it fixed. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But cases of psychosis have continued to emerge, and Altman has been walking the claim back. In August he said that many users valued ChatGPT’s answers because they had “never had anyone in their life provide them with affirmation”. In his latest announcement, he said OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Kimberly Walker

A tech enthusiast and writer passionate about emerging technologies and their impact on society.