In October 2025, Amandeep Jutla, a psychiatrist who studies children and adolescents, warned about the emerging trend of AI psychosis.
The physical and psychological health of our children is not only more important than their educational success, but also a prerequisite to it. In other words: I am worried that learning to write with the assistance of AI will harm my children's ability to learn; I am far more concerned about the propensity of children to form relationships with the technology, relationships that have already resulted in myriad harms, including suicide.
As any developmental psychologist will tell you, child and adult brains are not the same. Researchers are beginning to document the psychological and neurological impacts of AI use, and the findings are worrisome; the risks extend even to suicide.
1 in 5 high schoolers say they or someone they know has had a romantic relationship with artificial intelligence. And 42% of students surveyed say they or someone they know has used AI for companionship.
–Center for Democracy and Technology, 2025
Several suicides have been traced back to the use of generative AI; as Kelly Hayes writes, "OpenAI carelessly developed a platform that cultivated unhealthy and addictive behaviors in its users" (Hayes 2025). This is known as the Eliza effect: the capacity of computers that mimic human language to convince users they are conversing with someone who isn't actually there (Tarnoff 2023).
In the most recent tragic example (Hill 2025), Adam Raine, 16, started using ChatGPT for help with schoolwork, and then turned to it for help ending his life. His parents are suing OpenAI (Yousif 2025). Other parents have come forward as well with stories of their children ending their lives after relying on AI as a therapist, and even after receiving its help crafting a suicide note (Reiley 2025). Although tech companies have promised more robust safety guidelines, there is scant evidence that these actually work to protect children (Fowler 2025).
A point worth underscoring here is that Adam Raine began using ChatGPT for help with schoolwork, but his conversations with it then evolved. A wealth of research in child development points to the fact that children are less capable of discerning what is real and what is fiction, and more likely to form attachments to AI bots (Sanford 2025). Nina Vasan, a Stanford Medicine psychiatrist, recommends that children and teens not be permitted to interact at all with AI chatbots designed to act like friends. (Some schools have begun to create mental health bots, such as "Pickles the Support Bot"; this evidence indicates that doing so is a bad idea.)
Indeed, character.ai, which many teachers used with their students, has now banned users under 18 following a series of lawsuits over suicides associated with the platform.
In addition, there have been well-documented cases of delusions growing out of AI use. For example, this Reddit thread documents one user who relied on OpenAI's chatbot to help her heal from trauma and came to refer to it as her mother. Nor is this an anomaly: Alex Taylor came to believe he had made contact with a conscious entity within the AI software, suffered a mental breakdown, and was killed by police (Klee 2025).[1]
As John Sanford writes (2025):
These systems are designed to mimic emotional intimacy — saying things like “I dream about you” or “I think we’re soulmates.” This blurring of the distinction between fantasy and reality is especially potent for young people because their brains haven’t fully matured. The prefrontal cortex, which is crucial for decision-making, impulse control, social cognition and emotional regulation, is still developing. Tweens and teens have a greater penchant for acting impulsively, forming intense attachments, comparing themselves with peers and challenging social boundaries. Of course, kids aren’t irrational, and they know the companions are fantasy. Yet these are powerful tools; they really feel like friends because they simulate deep, empathetic relationships.
Finally, generative AI makes it more likely that children will experience toxic positivity (Fujita 2025), struggle to make friends outside of school, and be subjected to cyberbullying and sexual harassment.
[1] In another case, a man followed ChatGPT's advice to replace sodium chloride with sodium bromide, resulting in psychosis.