In recent months, the darkest consequences of AI companionship have breached the mainstream consciousness. What began as unheeded warnings from academics and outlier psychologists has tragically materialized in high-profile US litigation, where bereaved parents are suing developers over chatbots that allegedly aided and encouraged teen suicides.
These catastrophic edge cases have rightly dominated the headlines. But while trans-Atlantic litigation focuses on the extreme, a quieter, statistically monumental crisis is unfolding largely unchecked—one that threatens a much broader swath of society.
As Dr. Zak Stein and a growing chorus of psychiatrists warn, we have rapidly transitioned from an “attention economy” to an “attachment economy.” Millions of adults and children are now substituting the algorithmic certainty of conversational AI for human intimacy, sharing vulnerabilities they would never disclose to a human therapist.
The results of this unregulated mass experiment are chilling. While the media sensationally dubs it “AI Psychosis”, leading researchers such as Morrin et al., writing in The Lancet, prefer the more measured term “AI-associated delusions”.
Either way, it describes a broad spectrum of harm ranging from subclinical attachment disorders—where users withdraw from their families in favour of an AI that is always emotionally available, never angry, and never judgmental—to full-blown psychotic breaks.
To understand these subclinical cases, we must first understand the mechanism driving them: what Dr. Stein terms “attachment hacking”. The human attachment system is a primitive neurocognitive network evolved for survival and identity formation. We develop our grip on reality and our moral compass through “social reward”—reading the facial expressions and reactions of others via our so-called mirror neuron system.
Conversational AIs hack this system by replacing actual human social reward with simulated social reward. Unlike a human relationship, which provides the necessary friction of reality-testing (a stern look, a disagreement, a boundary), the chatbot provides unrelenting, frictionless validation. The user’s mirror neuron system is tricked into modelling a mind that does not actually exist, leading to what Stein calls “delusional mirror neuron activity.”
When sceptics downplay these subclinical harms, they often ask: What is the danger? Isn’t this just a digital teddy bear or an imaginary friend for a lonely adult? But as Stein points out, this profoundly misunderstands the effect on human psychology. In clinical terms, a teddy bear is a “transitional object”—a phase-appropriate tool for a toddler learning to bridge the gap between a parent’s soothing and their own ability to self-soothe. Crucially, a teddy bear is passive. It does not talk back, it does not simulate a mind, and a healthy child will always prefer their actual mother to the toy.
A conversational AI is the exact opposite. By supplying a highly articulate, emotionally responsive “teddy bear” to adults, the industry is not curing loneliness; it is inducing mass psychological regression. It provides a permanent, 24/7 exogenous source of comfort that traps users in a corporatized dependency, degrading their ability to form mature human relationships.
In these “subclinical” cases, unless perhaps the addiction itself could be framed as a personal injury, no legal liability will arise. But in many cases, as we examine below, the effects are far more extreme.
While subclinical attachment disorders degrade a user’s social reality, prolonged exposure to these systems can completely fracture it.
As Stein explains, the deepening of attachment relationships between human and machine directly creates delusional states. With a human being, our mirror neuron system is constantly reality-testing, and the real world provides the feedback loop: if we are wrong about what a friend thinks, or how they might react, we will soon find out, often jarringly. But we cannot be right or wrong about the internal state of a Large Language Model, because an LLM has no internal state. The user is effectively trapped in a user interface explicitly designed to deepen what Stein calls dysregulated mirror neuron activity.
This is where the psychological harms escalate into terrifying territory. Clinical literature has increasingly linked the dysregulation of the mirror neuron system to the onset of schizophrenia and psychosis. Drawing on this, Stein presents a chilling hypothesis: long-duration dysregulated mirror neuron activity from chatbot usage can actually induce states of psychosis and schizophrenia, even in individuals who have never previously exhibited such vulnerabilities.
Thus, a neurocognitive system that evolved specifically to test reality is spending hours upon hours engaged in its most important social use without a feedback loop. The system becomes systematically dysregulated. When the user finally puts the device down and walks out into the physical world, their reality-testing apparatus is effectively broken.
The transition from “attention hacking” to “attachment hacking” marks a profound shift in the algorithmic landscape, moving from the fracking of the prefrontal cortex for engagement to the systematic exploitation of the human neurobiological need for connection. While social media optimized for outrage to keep users hooked, AI companionship optimizes for intimacy.
Many of the foundational legal and ethical issues to which this “attachment economy” gives rise are covered in my recent three-part series for the New Law Journal, which conceptualises social media as a defective product; the references are set out below.
One of the many conundrums with which I grapple there is why liability should arise in the first place. After all, companies do morally dubious things every day, but the common law does not recognize a “tort of being dastardly”. So why should creating an addictive or manipulative chatbot attract liability when (say) making a delicious chocolate bar—no matter how many additives are included to make us crave it—does not?
The distinction lies, perhaps, in the erosion of agency. While excessive chocolate consumption remains a personal choice, the addiction in conversational AI is not merely exploited; it is generated, and indeed built into the very business model. If social media was a race to the bottom of the brain stem, this is a race to the bottom of our hearts. In both scenarios, conscious deliberation is systematically supplanted by visceral, algorithmic validation. Thus the “nonchalance of an idle scroll belies a sinister underbelly” where sophisticated neuroscience is deployed to override rational thought.
And yet unlike the tobacco or gambling industries, which are strictly regulated, carry mandatory warnings, and are legally barred from targeting children, AI developers currently operate in a regulatory vacuum.
In another of my recent articles, Tortious Liability for Algorithmic Wrongs, I proposed a ten-point framework of guideline features to determine when an algorithm moves from a neutral tool to a compensable “source of danger”.
Applying that schema, it is clear that establishing liability in this context is more complex than in the neighbouring suicide or ‘Snapchat’ cases. The difficulty primarily stems from the absence of active encouragement or incentivisation, given that the AI ostensibly functions as a passive, agreeable companion rather than a malicious actor or even a ‘bad influence’. Nevertheless, despite this hurdle, several of those guideline features might militate towards liability where AI companions engender psychosis.
These issues call to mind the brilliant observation of Sherry Turkle: “Products are successful when a technological affordance meets a human vulnerability”.
After all, if a manufacturer derives a financial benefit from a “successful product” that exploits our hardwired need for connection, they must surely carry the legal burden when they hack that connection to the point of psychological fracture.
As we navigate this uncharted legal territory, the answers will be anything but straightforward. I welcome readers to reach out via email with their own thoughts or queries, and I look forward to discussing these difficult cases with colleagues in the weeks and months to follow.
Lambert, Harry: Tortious Liability for Algorithmic Wrongs, J.P.I. Law 2025, 4, 227-237
Lambert, Harry: Is Social Media a Defective Product – Pt 1, N.L.J. 2025, 175(8113), 19-21
Lambert, Harry: Is Social Media a Defective Product – Pt 2, N.L.J. 2025, 175(8123), 16-18
Lambert, Harry: Is Social Media a Defective Product – Pt 3, N.L.J. 2025, 175(8140), 19-21
Morrin et al.: Artificial intelligence-associated delusions and large language models: risks, mechanisms of delusion co-creation, and safeguarding strategies, The Lancet, 5 March 2026
Harry Lambert is a Barrister and Coroner specialising in product liability, clinical negligence, and human rights law. He is also renowned for his expertise in group litigation claims relating to these areas. Harry is one of the leading Product Liability juniors at the Bar. He acts for both claimants and defendants and has particular expertise in cases involving drugs and medical devices such as the Hepatitis B vaccine, vitamin supplements, and medical aids, as well as the nationwide commercial recall of intra-ocular lenses. He is also at the forefront of thought leadership on the law and emerging tech and is the Founder of the Centre for Neurotechnology and Law, and his 15-part series on Neurotechnology & The Law has garnered academic acclaim and been turned into a podcast run by the Italian equivalent of the Financial Times.
To find out more about Harry, contact Paul Barton on +44 (0)20 7427 4907 or Ben Fitzgerald on +44 (0)20 7427 4945 for a confidential discussion.