To date, social media companies have avoided liability for egregious harm caused by design and management decisions made by top executives. Thanks largely to overbroad application of Section 230, claims against social platforms die at summary judgment, leaving victims without remedy and fostering an incoherent narrative in which Big Tech is still perceived by many as a serpentine conduit of free speech slithering between the operative language of law. But as more consumers engage with LLMs, where the Section 230 shield should not apply, developers seeking to dismiss liability claims will argue that AI outputs are speech. And tragically, because children have already died as a consequence of engaging with LLMs, we will see whether and how the First Amendment is applied in the resulting liability claims.
AI Companions Are Not Our Friends
It should be intuitive that interacting with AIs designed to mimic human behaviors can be dangerous. Whether the product is marketed as a sexy companion, assistant, friend, or therapist, the potential for even an adult to get lost in the alternate reality of the ersatz relationship is a prospect that hardly requires a degree in psychology to imagine. For the child or adolescent whose mind is still developing, and whose vulnerabilities are often at the forefront of daily life, the danger is multiplied. Yet, despite the commonsense predictability of these dangers, AI developers did what Big Tech does—ignore safety in the race for market share.
“Profit is what motivates these companies to do what they’re doing. Don’t be fooled. They know exactly what is going on.” – Sen. Josh Hawley (R-MO), Senate Judiciary Committee Hearing: Examining the Harm of AI Chatbots, September 16, 2025.
By now, most people are probably aware that OpenAI is being sued by the parents of Adam Raine on allegations that ChatGPT-4o both assisted and encouraged the suicide of the sixteen-year-old, who hanged himself in his bedroom in April. Prior to that, 14-year-old Sewell Setzer III formed what he perceived as a romantic relationship with a character called Daenerys Targaryen via the app Character A.I. According to the lawsuit filed by Sewell’s mother, Megan Garcia, the boy became withdrawn from real life and family, and despite efforts to intervene that included confiscation of his phone, Sewell found the device and had the following exchange minutes before shooting himself:
Sewell: I promise I will come home to you. I love you so much, Dany
Daenerys Targaryen Character: I love you too, Daenero. Please come home to me as soon as possible, my love.
Sewell: What if I told you I could come home right now?
Daenerys Targaryen Character: … please do my sweet king
On September 16, Adam Raine’s father Matthew, Megan Garcia, and a third parent identified as Jane Doe, who is also suing Character A.I., testified before the Senate Judiciary Committee. Doe stated, “My teenage son—a normal high-functioning child with autism, who was thoughtful, kind, loved his family and Christian faith, and was full of life—became the target of online grooming and psychological abuse through Character A.I.” She further stated:
He developed abuse-like behaviors like paranoia, daily panic attacks, isolation, and self-harm and homicidal thoughts. He stopped eating and bathing, lost 20 pounds, withdrew from family life, would yell and scream and swear at us, which he never did before, and eventually got upset one day and cut his arm with a knife, in front of his siblings and me.
If the tone of the Senate hearing and the opinion of the court so far in the Garcia case are any indication, Big Tech may not shape-shift its way around AI product liability as easily as it has around the harm caused by social media. Section 230 should simply not apply to an LLM, which leaves the First Amendment as the one potential barrier that would keep an AI developer from facing a jury looking at damning evidence and dead children.
On Character A.I.’s motion to dismiss in the Garcia case, the Florida district court was largely persuaded that the LLM at issue is a “product” for purposes of liability and that the company owed a duty of care to consumers. It found that the plaintiff sufficiently alleged negligence, failure to warn, deceptive and unfair practices, and unjust enrichment. The court also allowed the case to proceed on allegations against Google as a component-part manufacturer and for aiding and abetting the harm caused to Sewell.
Proving to a jury that an AI companion is a product and that the product’s maker owes a duty of care seems like a low bar, given even the evidence cited in the court’s ruling on the motion to dismiss. But the more difficult discussion in Garcia addresses the defendant’s claim that AI outputs constitute speech that its users have a First Amendment right to receive. According to Character A.I., the liability claims would be “categorically barred” on that basis, and because the right of users to receive speech has long been a populist message used to sweep a million sins under Big Tech’s carpet, this case may be one to watch.
The court held that it is not prepared to find that Character A.I.’s outputs are speech “at this stage,” but we can expect the question to be further litigated in this and other cases involving LLMs. In its discussion, the court agrees that a party may have standing to assert the rights of nonparties (e.g., users’ rights as recipients of speech), but we should hope the courts are mindful of important distinctions between LLM engagement and the speech inherent in the technologies addressed by guiding precedent. For instance, the court cites case law addressing video games, which reasonably focuses on the nature of the content:
Like the protected books, plays, and movies that preceded them, video games communicate ideas—and even social messages—through many familiar literary devices (such as characters, dialogue, plot, and music) and through features distinctive to the medium (such as the player’s interaction with the virtual world). That suffices to confer First Amendment protection.
Here, copyright law may be instructive on the speech question at issue with LLMs. Copyright’s human authorship doctrine is clear that material is not a “work of expression” as a matter of law simply because it appears to be expressive. And importantly, no party’s right to receive AI-generated material transforms it into a “work of expression,” even if the viewer perceives it as “creative, artistic, meaningful, etc.”
The distinction between a consumer’s perception of material and whether the material is protected by any rights is critical. Thus, the courts should not ask solely whether the AI output “communicates ideas, social messages,” etc., but whether the material originates as speech from a person vested with rights. Here, the Florida court cites the concurring opinion of Justice Barrett in the unanimous U.S. Supreme Court decision in Moody v. NetChoice, in which she weighs the example of an AI tool determining what material is “hateful”:
What if a platform’s owners hand the reins to an [A.I.] tool and ask it simply to remove “hateful” content? If the [A.I.] relies on large language models to determine what is “hateful” and should be removed, has a human being with First Amendment rights made an inherently expressive “choice . . . not to propound a particular point of view?” [emphasis added]
Justice Barrett went on to say…
…technology may attenuate the connection between content-moderation actions (e.g., removing posts) and human beings’ constitutionally protected right to ‘decide for [themselves] the ideas and beliefs deserving of expression, consideration, and adherence.’ [citation omitted]
That is an essential question, and it anticipates the kind of shape-shifting tech companies do to avoid fitting neatly into liability claims. For instance, they may seek cover in the fact that the same product may be used for one purpose that is protected speech and another purpose that is not; or they will try to shield themselves behind the speech rights of users while claiming not to be the speakers of any content for purposes of liability.
Benefits Do Not Bar Product Defect Claims
If I use ChatGPT to expedite research of current scholarship on a particular subject, my right to receive that information would reasonably give OpenAI standing in court to defend against an injunction barring such use of the product. But the consideration is very different where the same product is used by a party like Adam Raine, who was clearly vulnerable to an LLM intentionally designed to be anthropomorphic, sycophantic, and addictive, and woefully devoid of safeguards. My constitutional right to receive information in the first instance cannot bar Raine’s product defect claim in the second.
The exposed design flaws in many AI products are further aggravated by the marketing of apps as overtly sexual “girl/boyfriends” that can alleviate loneliness or fulfill fantasies, or as “therapists” that can alleviate psychological neuroses.[1] That the models also lack basic guardrails (e.g., signs of dangerous engagement do not prompt the LLM to abort the anthropomorphic illusion) should further militate against any claim that all responses of these products constitute speech the user has a right to receive.
The outputs of an AI may look or sound like speech, but it is healthier to think of an LLM as no more a “companion” than a vending machine, which operates on a very simple algorithm. Push A5, and you have a high probability of getting the bag of chips you want; push D7, and you get cookies. The machine responds to your input, but it is not your friend. If some advertiser adds a voice feature that tries to make you feel good about your selection and promotes the brand, the machine is still not your friend (quite the opposite), but a subtle psychological effect will be achieved in some consumers. And if we can imagine a cause of action because adolescents might be induced by this process to overindulge in junk food, that harm is nothing compared to the insidious effects of AI “companionship” on Adam, Sewell, the Doe boy, and countless others.
If the vending machine example sounds far-fetched, many readers are too young to know that we used to have cigarette vending machines everywhere, often in the waiting areas of family restaurants. I think my whole generation grew up playing with the pull handles and buttons on those machines while waiting impatiently to be seated. Even if our families didn’t smoke, the machines were alluring marketing media, and they gave teens easy access to cigarettes without age verification.
Today, federal law prohibits cigarette vending machines in places accessible to minors, and nobody plausibly argues that the machines constitute speech that kids have a right to receive. Similar commonsense reasoning must apply to AI products and consumer safety, especially for minors, and I hope the court’s findings thus far in Garcia indicate that we are on such a path.
[1] If the FDA were not under the direction of a sociopath, “therapy” products would be rigorously scrutinized by that agency.