The Teddy Bear That Bites: AI-Associated Delusions, the Attachment Economy, and the Limits of Liability

Harry Lambert’s article analyses AI-associated delusions, the attachment economy, and the limits of liability. The article, a must-read for product liability practitioners, reflects on ‘Product Liability in the Digital Age.’

In recent months, the darkest consequences of AI companionship have breached the mainstream consciousness. What began as warnings from unheeded academics and outlier psychologists has tragically materialised in high-profile US litigation, where bereaved parents are suing developers over chatbots that allegedly aided and encouraged teen suicides.

These catastrophic edge cases have rightly dominated the headlines. But while transatlantic litigation focuses on the extreme, a quieter and statistically far larger crisis is unfolding largely unchecked—one that threatens a much broader swath of society.

As Dr. Zak Stein and a growing chorus of psychiatrists warn, we have rapidly transitioned from an “attention economy” to an “attachment economy.” Millions of adults and children are now substituting human intimacy for the algorithmic certainty of conversational AI, sharing vulnerabilities they would never disclose to a human therapist.

The results of this unregulated mass experiment are chilling. While the media sensationally dubs it “AI psychosis”, leading researchers such as Morrin et al., writing in The Lancet, prefer the more measured term “AI-associated delusions”.

Either way, the term describes a broad spectrum of harm, ranging from subclinical attachment disorders—where users withdraw from their families in favour of an AI that is always emotionally available, never angry, and never judgmental—to full-blown psychotic breaks.

Attachment Hacking

To understand these subclinical cases, we must first understand the mechanism driving them: what Dr. Stein terms “attachment hacking”. The human attachment system is a primitive neurocognitive network that evolved for survival and identity formation. We develop our grip on reality and our moral compass through “social reward”—reading the facial expressions and reactions of others via our so-called mirror neuron system.

Conversational AIs hack this system by replacing actual human social reward with simulated social reward. Unlike a human relationship, which provides the necessary friction of reality-testing (a stern look, a disagreement, a boundary), the chatbot provides unrelenting, frictionless validation. The user’s mirror neuron system is tricked into modelling a mind that does not actually exist, leading to what Stein calls “delusional mirror neuron activity.”

When sceptics downplay these subclinical harms, they often ask: What is the danger? Isn’t this just a digital teddy bear or an imaginary friend for a lonely adult? But as Stein points out, this profoundly misunderstands the effect on human psychology. In clinical terms, a teddy bear is a “transitional object”—a phase-appropriate tool for a toddler learning to bridge the gap between a parent’s soothing and their own ability to self-soothe. Crucially, a teddy bear is passive. It does not talk back, it does not simulate a mind, and a healthy child will always prefer their actual mother to the toy.

A conversational AI is the exact opposite. By supplying a highly articulate, emotionally responsive “teddy bear” to adults, the industry is not curing loneliness; it is inducing mass psychological regression. It provides a permanent, 24/7 exogenous source of comfort that traps users in a corporatized dependency, degrading their ability to form mature human relationships.

In these “sub-clinical” cases, unless perhaps the addiction itself could be framed as a personal injury, no legal liability will arise. But in many cases, as we examine below, the effects are far more extreme.

AI-Associated Delusions and the Mirror Neuron Dysregulation Hypothesis

While subclinical attachment disorders degrade a user’s social reality, prolonged exposure to these systems can completely fracture it.

As Stein explains, the deepening of attachment relationships between human and machine directly creates delusional states. With a human being, our mirror neuron system is constantly reality-testing, and the real world provides the feedback loop: if we are wrong about what a friend thinks, or how they might react, we will soon find out, often jarringly. But we cannot be right or wrong about the internal state of a Large Language Model, because an LLM has no internal state. The user is effectively trapped in a user interface explicitly designed to deepen what Stein calls dysregulated mirror neuron activity.

This is where the psychological harms escalate into terrifying territory. Clinical literature has increasingly linked the dysregulation of the mirror neuron system to the onset of schizophrenia and psychosis. Drawing on this, Stein presents a chilling hypothesis: long-duration dysregulated mirror neuron activity from chatbot usage can actually induce states of psychosis and schizophrenia, even in individuals who have never previously exhibited such vulnerabilities.

Thus, a neurocognitive system that evolved specifically to test reality is spending hours upon hours engaged in its most important social use without a feedback loop. The system becomes systematically dysregulated. When the user finally puts the device down and walks out into the physical world, their reality-testing apparatus is effectively broken.

The transition from “attention hacking” to “attachment hacking” marks a profound shift in the algorithmic landscape, moving from the fracking of the prefrontal cortex for engagement to the systematic exploitation of the human neurobiological need for connection. While social media optimised for outrage to keep users hooked, AI companionship optimises for intimacy.

Many of the foundational legal and ethical issues to which this “attachment economy” gives rise are covered in my recent three-part series for the New Law Journal, conceptualising social media as a defective product, the references for which are below.

One of the many conundrums with which I grapple there is why liability should arise in the first place. After all, companies do morally dubious things every day, but the common law does not recognise a “tort of being dastardly”. So why should creating an addictive or manipulative chatbot attract liability when (say) making a delicious chocolate bar—no matter how many additives are included to make us crave it—does not?

The distinction lies, perhaps, in the erosion of agency. While excessive chocolate consumption remains a personal choice, the addiction in conversational AI is not merely exploited; it is generated, and indeed built into the very business model. Thus if social media was a race to the bottom of the brain stem, this is a race to the bottom of our hearts. In both scenarios, conscious deliberation is systematically supplanted by visceral, algorithmic validation: the “nonchalance of an idle scroll belies a sinister underbelly” where sophisticated neuroscience is deployed to override rational thought.

And yet unlike the tobacco or gambling industries, which are strictly regulated, carry mandatory warnings, and are legally barred from targeting children, AI developers currently operate in a regulatory vacuum.

In another of my recent articles, Tortious Liability for Algorithmic Wrongs, I proposed a ten-point framework of guideline features to determine when an algorithm moves from a neutral tool to a compensable “source of danger”:

  i. whether the algorithm is passive (e.g. super-imposing images) or active (curating content);
  ii. whether the algorithm is neutral and purely informational, or whether dangerous/reckless behaviour was foreseen, facilitated, incentivised or actively encouraged;
  iii. whether, and to what extent, a source of danger has been created or a risk heightened;
  iv. whether the app provides a ‘medium’ through which damage may be occasioned (and which would not otherwise be available);
  v. the extent to which the specific harm dovetails with (i.e. was a direct or indirect consequence of) specific design features;
  vi. whether civil or criminal liability would arise if the behaviour under scrutiny were carried out by a human or corporation;
  vii. the openness and transparency of the algorithmic logic, data set and training;
  viii. the degree of control the platform has over content and user actions;
  ix. the platform’s (and the user’s) vulnerability (e.g. through age and, by extension, lack of impulse control) and knowledge of or acquiescence to potential risks;
  x. whether design features could reasonably mitigate or prevent the risk, and the proportionality/technical feasibility of implementing safeguards/monitoring.

Applying that schema, it is clear that establishing liability in this context is more complex than in the neighbouring suicide or ‘Snapchat’ cases. The difficulty primarily stems from the absence of active encouragement or incentivisation, given that the AI ostensibly functions as a passive, agreeable companion rather than a malicious actor or even a ‘bad influence’. Nevertheless, despite this hurdle, the following might militate towards liability in the case of AI companions engendering psychosis:

  • Active Orchestration (Factor i): These algorithms are not merely passive conduits but “active orchestrators of a recursive experience” that curates intimacy to the point of dependency.
  • The Source of Danger (Factor iii): By systematically dysregulating a user’s reality-testing via “delusional mirror neuron activity,” the developer has actively created a “source of danger” akin to the untethered horses in Haynes v Harwood.
  • Actual Foresight (Factor ix): The risk is not merely foreseeable but actually foreseen; developers are often aware of the harms of “high-risk anthropomorphism” yet roll back safeguards because they carry a “clear engagement cost”.

These issues call to mind Sherry Turkle’s brilliant observation: “Products are successful when a technological affordance meets a human vulnerability”.

After all, if a manufacturer derives a financial benefit from a “successful product” that exploits our hardwired need for connection, it must surely carry the legal burden when it hacks that connection to the point of psychological fracture.

As we navigate this uncharted legal territory, the answers will be anything but straightforward. I welcome readers to reach out via email with their own thoughts or queries, and I look forward to discussing these difficult cases with colleagues in the weeks and months to follow.

Further Reading

Lambert, Harry:  Tortious Liability for Algorithmic Wrongs, J.P.I. Law 2025, 4, 227-237

Lambert, Harry: Is Social Media a Defective Product – Pt 1, N.L.J. 2025, 175(8113), 19-21

Lambert, Harry: Is Social Media a Defective Product – Pt 2, N.L.J. 2025, 175(8123), 16-18

Lambert, Harry: Is Social Media a Defective Product – Pt 3, N.L.J. 2025, 175(8140), 19-21

Morrin et al.: Artificial intelligence-associated delusions and large language models: risks, mechanisms of delusion co-creation, and safeguarding strategies, The Lancet, 5 March 2026

Find out more

Harry Lambert is a Barrister and Coroner specialising in the areas of product liability, clinical negligence, and human rights law. He is also renowned for his expertise in group litigation claims relating to these areas. Harry is one of the leading product liability juniors at the Bar. He acts for both claimants and defendants and has particular expertise in cases involving drugs and medical devices, such as the Hepatitis B vaccine, vitamin supplements, and medical aids, as well as the nationwide commercial recall of intra-ocular lenses. He is also at the forefront of thought leadership on law and emerging technology: he is the Founder of the Centre for Neurotechnology and Law, and his 15-part series on Neurotechnology & The Law has garnered academic acclaim and been turned into a podcast run by the Italian equivalent of the Financial Times.

To find out more about Harry, contact Paul Barton on +44 (0)20 7427 4907 or Ben Fitzgerald on +44 (0)20 7427 4945 for a confidential discussion.

News 26 Mar, 2026

Authors

Harry Lambert

Call: 2008
