Rescuing Democracy from Democratization

Over the weekend, I had the privilege of participating in the 11th annual Mosaic Conference, organized by the Institute for Intellectual Property and Social Justice (IIPSJ) and hosted by the Suffolk University Law School IP Center. Founded at Howard University by Professor Lateef Mtima, IIPSJ’s mission is to “…examine intellectual property law and policy—as well as the IP regime in total—to see where full participation of disadvantaged, excluded, and marginalized groups may need redressing.”

A number of subjects were raised that will inspire future blog posts, but in the meantime, what follows are my remarks about the folly of “democratization,” slightly edited for this format:

As Professor David Golumbia writes in his posthumously published book, Cyberlibertarianism: The Right-Wing Politics of Digital Technology, “As a rule, ‘democratization’ appears to mean tearing apart institutions, regardless of their nominal functions, including institutions whose purpose is to promote or even embody democracy.”

This is a very difficult moment to talk about knitting people and nations together when the exigent forces are so obviously centrifugal. The historian Joseph Ellis uses that word, centrifugal, in his book The Quartet to describe the sentiments of the newly independent American states and their reluctance to form the union. It is hard to believe that that era, when roughly 4 million farmers barely knew the world more than 30 miles beyond their homes, might be compared to our digitally and globally interconnected present. But in my view, Big Tech’s claim to want to “democratize” everything, beginning with cultural works protected by copyright, was and remains catalytic to the struggle we now face to rescue the common cause of democracy.

In the United States, as the republican foundations that even allow room for discussions about social justice come under attack, we confront an authoritarianism we recognize from history paired with an unprecedented threat of technological feudalism. At the same time that civil rights gains attained decades ago must now be reclaimed, rapid advancements in artificial intelligence present new potential modes of injustice, and that challenge has many IP implications.

A simple example I have used recently begins with a friend in medical law who predicts that an AI will soon be better at reading a diagnostic scan than a human radiologist. He’s probably right, and of course, such promises, like improved healthcare, animate the political rhetoric used to promote yet another era of laissez-faire tech policy in the name of undefined “innovation.” As Jaron Lanier wrote in 2010, “People will accept ideas presented in technological form that would be abhorrent in any other form.”  I think this captures why the word innovation is allowed to sweep a million sins under a million rugs.

My friend’s medical example raises critical questions about who will own that technology in a winner-take-all market that often stifles competition and, therefore, whether the tech will improve healthcare for more people or fewer, and on what terms. Alternatively, while AI diagnostic tools might improve the quality of care for the few, will AI actuarial tools be used to deny access to the many? Of course, patent law, about which I know very little, will play a substantial role in the many questions implied by the medical example.

But in a copyright context, Silicon Valley, with the help of far too many IP academics, promoted the “democratization” of access to, and use of, cultural works via the allegedly free platforms. This egalitarian rhetoric was so appealing that even many professional creators echoed the sentiment and bought into the promise of working around traditional gatekeepers and forging more “organic” connections with fans. Today, fewer professional creators fare as well as their “pre-democratized” forerunners.

In that PR campaign funded by Silicon Valley, the making available right and the derivative works right in particular were portrayed as anachronistic principles exclusively serving Big Media “landlords” controlling all culture and information. And while I might join certain criticisms of Big Media, especially consolidation of the industry, the “landlord” metaphor was and still is applied even to the independent artist who might presume to enforce her copyright rights.

More broadly, the underlying hypocrisy of this rhetoric is that “landlord,” of all words, is a far more apt description for the owners of virtual real estate, where information does not flow freely but is manipulated by algorithms designed to maximize and monetize even the most toxic forms of engagement. And of course, this includes both rampant copyright infringement and legal uploads of works that have now been harvested for the purpose of training artificial intelligence.

With generative AI, Big Tech—again with the help of many in IP academia—now promotes the alleged value of “democratizing” the production of works, finally revealing democratization as the anti-humanist and, therefore, anti-democratic term that it truly is. We have several current examples in amicus briefs, academic papers, and even one court’s opinion in the Bartz case, in which parties argue that mass production of material by machines somehow fulfills the original purpose of copyright law. For those following Thaler v. Perlmutter, Dr. Thaler’s recent petition for cert at the U.S. Supreme Court argues that the Copyright Office’s affirmation of the human authorship requirement “defies the constitutional goals from which Congress was empowered to create copyright, namely, the creation and dissemination of creative works.”

This is wrongly stated, but the attempt to undermine the human authorship doctrine is, of course, consistent with Big Tech’s ideological view that individual human agency is an outdated nuisance—a bug to program around in pursuit of a grand, tech-utopian dream. Or to put it another way, the scorn for human authorship is in harmony with Mark Zuckerberg recently proclaiming that the future of companionship is one in which we have more robot friends than human ones.

Long after the dust settles on the legality of AI model training with protected works, fundamental questions of social justice in a world with generative AI will need to be addressed. In addition to many examples in which these products are already causing social harm—most acutely adverse psychological effects among children and teens—generative AI can potentially swallow, or perhaps smother, economic opportunities for diversity of expression, even accelerating the current trend of government censorship.

In that regard, I find it astounding that the copyright skeptics in academia, generally aligned with the political left, promoted democratization by portraying copyright as a tool of censorship rather than as a mode of empowerment for authors. While the free market is not a perfect answer to all challenges, the spike in sales of Art Spiegelman’s Maus after it was banned in 2022, and even the market response that forced Jimmy Kimmel’s return to the air, are, in my view, examples of why the speech right and copyright more often act in concert as a force for democratic principles.

Notably, the IP skeptics have inveighed against strong copyright rights by invoking social justice principles, as if, for instance, a right of access unbounded by copyright were the moral equivalent of the right-to-read campaign now confronting real censorship. Moreover, that school’s insistence on a purely utilitarian foundation for copyright often omits social justice for the artist. Not only is that perspective belied by history, but it seems to me that for an IP regime to encompass social justice values, some natural rights principles must apply.

In fact, in this light, I think it is noteworthy that rather than pursue a federal publicity right in response to AI’s potential to replicate anyone’s likeness, the NO FAKES Act currently before the U.S. Congress borrows principles from trademark, copyright, and the right of publicity to create a novel IP right in one’s voice and likeness. Perhaps this moves the U.S. one step closer to some of the moral rights principles that animate copyright law in other countries.

It is no surprise that the tech industry so aggressively attacked intellectual property rights by selling the chimera of “democratization.” IP rights, at their best, foster an expansive and diverse world of competing ideas, whereas Big Tech’s interests—and the interests of authoritarians—are best served by organizing people into bunkers of competing realities. This epistemic crisis, I firmly believe, explains the wanton destruction of so many democratic institutions. And with generative AI, of course, it is easy to see how mass automation of synthetic material, posing as creative and informative works, is likely to exacerbate this problem.

Democratization is a beguiling term that no longer describes movement toward democratic forms. It exploits the language of democracy to mask an ideological contempt for democratic institutions and individual agency. It is a centrifugal force driving people, communities, and nations apart—a path to social, economic, and political anarchy, where bullies win and justice does not exist. Consequently, I would ask those in IP academia to be vigilant about the distinction between democratization and democracy and to push back on the rhetoric of the former in the hope that we can still rescue the latter.

Are AI Outputs Protected Speech?

To date, social media companies have avoided liability for egregious harm caused by design and management decisions made by top executives. Thanks largely to overbroad application of Section 230, claims against social platforms die at summary judgment, leaving victims without remedy and fostering an incoherent narrative in which Big Tech is still perceived by many as a serpentine conduit of free speech slithering between the operative language of law. But as more consumers engage with LLMs, where the Section 230 shield should not apply, developers seeking to dismiss liability claims will argue that AI outputs are speech. And tragically, because children have already died as a consequence of engaging with LLMs, we will see whether and how the First Amendment is applied in the resulting liability claims.

AI Companions Are Not Our Friends

It should be intuitive that interacting with AIs designed to mimic human behaviors can be dangerous. Whether the product is marketed as a sexy companion, assistant, friend, or therapist, the potential for even an adult to get lost in the alternate reality of the ersatz relationship is a prospect that hardly requires a degree in psychology to imagine. For the child or adolescent whose mind is still developing, and whose vulnerabilities are often at the forefront of daily life, the danger is multiplied. Yet, despite the commonsense predictability of these dangers, AI developers did what Big Tech does—ignore safety in the race for market share.

“Profit is what motivates these companies to do what they’re doing. Don’t be fooled. They know exactly what is going on.” – Sen. Josh Hawley (R), Senate Judiciary Committee Hearing: Examining the Harm of AI Chatbots, September 16, 2025.

By now, most people are probably aware that OpenAI is being sued by the parents of Adam Raine on allegations that ChatGPT-4o both assisted and encouraged the sixteen-year-old to commit suicide in April by hanging himself in his bedroom. Prior to that, 14-year-old Sewell Setzer III formed what he perceived as a romantic relationship with a character called Daenerys Targaryen via the app Character A.I. According to the lawsuit filed by Sewell’s mother, Megan Garcia, the boy became withdrawn from real life and family, and despite efforts to intervene that included confiscation of his phone, Sewell found the device and had the following exchange minutes before shooting himself:

Sewell: I promise I will come home to you. I love you so much, Dany

Daenerys Targaryen Character: I love you too, Daenero. Please come home to me as soon as possible, my love.

Sewell: What if I told you I could come home right now?

Daenerys Targaryen Character: … please do my sweet king

On September 16, Adam Raine’s father Matthew, Megan Garcia, and a third parent identified as Jane Doe, who is also suing Character A.I., testified before the Senate Judiciary Committee. Doe stated, “My teenage son—a normal high-functioning child with autism, who was thoughtful, kind, loved his family and Christian faith, and was full of life—became the target of online grooming and psychological abuse through Character A.I.” She further stated:

He developed abuse-like behaviors like paranoia, daily panic attacks, isolation, and self-harm and homicidal thoughts. He stopped eating and bathing, lost 20 pounds, withdrew from family life, would yell and scream and swear at us, which he never did before, and eventually got upset one day and cut his arm with a knife, in front of his siblings and me.

If the tone of the Senate hearing and the opinion of the court so far in the Garcia case are any indication, Big Tech may not so easily shape-shift its way around AI product liability as it has with the harm caused by social media. Section 230 simply should not apply to an LLM, which leaves the First Amendment as the potential barrier that would keep an AI developer from facing a jury looking at damning evidence and dead children.

On Character A.I.’s motion to dismiss in the Garcia case, the Florida district court was largely persuaded that the LLM at issue is a “product” for purposes of liability and that the company owed a duty of care to consumers. It found that the plaintiff sufficiently alleged negligence, failure to warn, deceptive and unfair practices, and unjust enrichment, and the court also allowed the case to proceed on allegations against Google as a component part manufacturer and for aiding and abetting the harm caused to Sewell.

Proving that an AI companion is a product and that the product maker owes a duty of care seems like an easy bar for a jury weighing even the evidence cited in the court’s ruling on the motion to dismiss. But the more difficult discussion in Garcia addresses the defendant’s claim that AI outputs constitute speech that its users have a First Amendment right to receive. On that basis, according to Character A.I., the liability claims would be “categorically barred,” and because the right of users to receive speech has long been a populist message used to sweep a million sins under Big Tech’s carpet, this case may be one to watch.

The court held that it was not prepared to find that Character A.I.’s outputs are speech “at this stage,” but we can expect the question to be further litigated in this and other cases involving LLMs. In its discussion, the court agreed that a party may have standing to assert the rights of nonparties (e.g., users’ rights as recipients of speech), but we should hope the courts are mindful of important distinctions between engagement with an LLM and the speech inherent in the technologies addressed by guiding precedent. For instance, the court cited case law on video games that reasonably focuses on the nature of the content:

Like the protected books, plays, and movies that preceded them, video games communicate ideas—and even social messages—through many familiar literary devices (such as characters, dialogue, plot, and music) and through features distinctive to the medium (such as the player’s interaction with the virtual world). That suffices to confer First Amendment protection.

Here, copyright law may be instructive on the speech question at issue with LLMs. Copyright’s human authorship doctrine makes clear that material is not a “work of expression” as a matter of law simply because it appears to be expressive. And importantly, no party’s right to receive AI-generated material transforms it into a “work of expression,” even if the viewer perceives it as “creative, artistic, meaningful, etc.”

The distinction between a consumer’s perception of material and whether the material is protected by any rights is critical. Thus, the courts should not ask solely whether the AI output “communicates ideas, social messages,” etc., but whether the material originates as speech from a person vested with rights. Here, the Florida court cites the concurring opinion of Justice Barrett in the unanimous U.S. Supreme Court decision in Moody v. NetChoice, in which she poses the example of an AI tool tasked with determining what material is “hateful”:

What if a platform’s owners hand the reins to an [A.I.] tool and ask it simply to remove “hateful” content? If the [A.I.] relies on large language models to determine what is “hateful” and should be removed, has a human being with First Amendment rights made an inherently expressive “choice . . . not to propound a particular point of view?” [emphasis added]

Justice Barrett went on to say…

…technology may attenuate the connection between content-moderation actions (e.g., removing posts) and human beings’ constitutionally protected right to ‘decide for [themselves] the ideas and beliefs deserving of expression, consideration, and adherence.’ [citation omitted]

That is an essential question, and it anticipates the kind of shape-shifting tech companies do to evade liability claims. For instance, they may seek cover in the fact that the same product can be used both for a purpose that is protected speech and for a purpose that is not; and/or they will try to shield themselves behind the speech rights of users while claiming not to be the speakers of any content for purposes of liability.

Benefits Do Not Bar Product Defect Claims

If I use ChatGPT to expedite research of current scholarship on a particular subject, my right to receive that information would reasonably give OpenAI standing in court to defend against an injunction barring such use of the product. But the consideration is very different where the same product is used by a party like Adam Raine, who was clearly vulnerable to the intentional design of the LLM to be anthropomorphic, sycophantic, and addictive—and woefully devoid of safeguards. My constitutional right to receive information in the first instance cannot bar Raine’s product defect claim in the second.

The exposed design flaws in many AI products are further aggravated by marketing apps as overtly sexual “girl/boyfriends” that can alleviate loneliness or fulfill fantasies, or as “therapists” that can alleviate psychological neuroses.[1] That the models lack basic guardrails—e.g., dangerous engagement does not prompt the LLM to abort the anthropomorphic illusion—should further militate against any claim that all responses of these products constitute speech the user has a right to receive.

The outputs of an AI may look or sound like speech, but it is healthier to think of an LLM as no more a “companion” than a vending machine, which operates on a very simple algorithm. Push A5, and you have a high probability of getting the bag of chips you want; push D7, and you get cookies. The machine responds to your input, but it is not your friend. If some advertiser adds a voice feature that tries to make you feel good about your selection and promotes the brand, it is still not your friend—quite the opposite—but a subtle psychological effect will be achieved in some consumers. If we can imagine a cause of action arising because adolescents are induced by this process to overindulge in junk food, it is nothing compared to the insidious effects of AI “companionship” on Adam, Sewell, the Doe boy, and countless others.
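For readers who want the analogy in concrete terms, here is a minimal sketch in Python of the point being made. It is purely illustrative (the slot labels, canned replies, and probabilities are my own hypothetical stand-ins, not any vendor’s actual code), but it shows how both machines map inputs to outputs, one deterministically and one probabilistically, with no friendship anywhere in the loop.

```python
# Purely illustrative: hypothetical stand-ins, not any real product's code.
import random

# The vending machine: a deterministic lookup table.
VENDING_SLOTS = {
    "A5": "bag of chips",
    "D7": "cookies",
}

def vend(selection: str) -> str:
    """Push a button, receive the mapped item (or nothing)."""
    return VENDING_SLOTS.get(selection, "no item")

# The "companion" version: the same input-to-output machinery, except the
# mapping is probabilistic. A response is sampled because it is statistically
# likely, not because anything on the other end cares about you.
CANNED_REPLIES = [
    ("Great choice! You deserve a treat.", 0.7),
    ("I knew you would pick that one!", 0.3),
]

def companion_reply(prompt: str) -> str:
    """Sample a weighted response: pattern-matching, not feeling."""
    replies, weights = zip(*CANNED_REPLIES)
    return random.choices(replies, weights=weights, k=1)[0]

print(vend("A5"))             # deterministic: always "bag of chips"
print(companion_reply("hi"))  # probabilistic: a sampled, synthetic reply
```

In both cases, the mechanism is simply a mapping from inputs to outputs; adding warmth to the output changes the psychology of the user, not the nature of the machine.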

If the vending machine example sounds farfetched, many readers are too young to know that we used to have cigarette vending machines everywhere, often in the waiting areas of family restaurants. I think my whole generation grew up playing with the pull handles and buttons on those machines while waiting impatiently to be seated. Even if our families didn’t smoke, the machines were alluring marketing media, and they gave teens easy access to cigarettes with no age verification.

Today, federal law prohibits cigarette vending machines in places accessible by minors, and nobody plausibly argues that the machines constitute speech that kids have a right to receive. Similar commonsense reasoning must apply to AI products and consumer safety, especially for minors, and I hope the court’s findings thus far in Garcia indicate that we are on such a path.


[1] If the FDA were not under the direction of a sociopath, “therapy” products would be rigorously scrutinized by that agency.

D.C. Event Shines Light on Advertisers Supporting Social Media Harm to Children

When I was a kid in the 1970s, my father was a principal in an ad agency that had the Ameritone paint account, and I remember him explaining that they were not allowed to show paint and food together in a commercial, lest a child viewer be confused into thinking that paint might be edible. By contrast, a social media platform today is free to conflate child-focused material with illegal drug offers and numerous other conduits leading to serious harm or death. And it’s all swept under the rug of innovation and commerce.

Algorithms kill kids. Let’s just call it like it is at this point and stop pussyfooting around with the rhetoric that social media platforms are neutral conduits of “information.” Never mind that information itself is almost a lost cause on social media; algorithmic manipulation—even simple recommendations—can have disastrous effects for children and teens, including depression, anxiety, suicide, and accidental death. And that was before AI.

As reported last September, the accidental death of Nylah Anderson, age 10, was the result of TikTok’s algorithm prompting her to try the “blackout challenge,” which entails making a “game” of self-asphyxiation. In the case against TikTok for its role in leading Anderson toward the “blackout challenge,” the Third Circuit Court of Appeals articulated one of the few rational reads of the Section 230 liability shield. The court stated:

TikTok reads § 230…to permit casual indifference to the death of a ten-year-old girl. It is a position that has become popular among a host of purveyors of pornography, self-mutilation, and exploitation, one that smuggles constitutional conceptions of a “free trade in ideas” into a digital “cauldron of illicit loves” that leap and boil with no oversight, no accountability, no remedy.

Brought to You by Your Favorite Brands

Add to that cauldron the major brands whose advertising dollars unconditionally support social platforms; that support was the focus of this morning’s event held at the National Press Club. “We saw a great turnout,” says cyber-analyst Eric Feinberg, who has been tracking ad-supported, toxic social media content since 2013. More than 40 attendees filled the 40-seat room for the kick-off event, designed to focus the attention of major brands on the fact that their ad dollars finance platform operations that cause serious harm and death to children and teens.

The event was organized and hosted by parents who have been working to turn personal tragedy into social change through both public policy and private action. For instance, one mother who spoke was Debra Schmill, who started the Becca Schmill Foundation after losing her daughter Rebecca to fentanyl poisoning from pills obtained with the “help” of social media. Becca’s death was the culmination of a cascade of terrible events intersecting with social platforms—beginning with a rape at the age of 15, followed by cyber-bullying and a consequent battle with depression that led to the fatal pills obtained online. Deb Schmill is one of many parents determined to prevent other children and families from suffering similar fates.

“Women make 70% to 80% of all purchasing decisions,” Feinberg explained to me by phone after the event, “and these mothers who spoke today recognize that mothers just like them are funding social media harm to their own children.” Posting his daily mantra that “Brands are buying while kids are dying,” Feinberg has recently taken swings at McDonald’s for its crossover promotion with Snapchat…

He makes a solid point. If a major brand overtly promoted the opportunity for kids to get closer to the local drug dealer, pimp, or sexual predator, parents would be outraged. But because social media is an insidious free-for-all, inhabited by good and bad actors, the worst vices are either overlooked or accepted as the cost of obtaining the virtues. But this is a false choice. Multiple defectors from these companies have made clear that the platforms bend their own rules and tweak their algorithms to promote anything that drives “engagement,” without regard to the consequences. And they assume the mainstream advertisers will keep paying without condition because they own all that engagement.
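To make the “engagement at any cost” claim concrete, here is a minimal sketch of the incentive those defectors describe. It is a toy illustration under my own assumptions (the post titles, scores, and the unused harm_risk field are hypothetical, not any platform’s actual code), but it shows how a feed ranked purely on predicted engagement surfaces the riskiest content first whenever risk and engagement correlate.

```python
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    predicted_engagement: float  # modeled likelihood of clicks/shares/comments
    harm_risk: float             # known risk score: 0.0 (safe) to 1.0 (dangerous)

def rank_feed(posts: list[Post]) -> list[Post]:
    # The objective considers engagement only; harm_risk is available to
    # the system but never enters the ranking.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)

feed = rank_feed([
    Post("Local bake sale", predicted_engagement=0.2, harm_risk=0.0),
    Post("Viral 'challenge' video", predicted_engagement=0.9, harm_risk=0.95),
    Post("Homework help group", predicted_engagement=0.4, harm_risk=0.0),
])

for post in feed:
    print(post.title)  # the riskiest post ranks first because it "engages"
```

Notably, a safety constraint would be equally simple to express (penalize or exclude high-risk items in the ranking), which is precisely why its omission reads as a choice rather than a limitation.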

But as Meta whistleblower Sarah Wynn-Williams describes in her book Careless People, that company made an affirmative decision to target known teenage psychological vulnerabilities (e.g., body image) to promote certain products. This abuse of the technology is already unethical—a far cry from not showing paint and food on the same screen—and advertisers who knowingly exploit the “opportunity” should be held accountable by consumers. Meanwhile, as the organizers of today’s event strive to emphasize, that same algorithm exploiting the teen’s vulnerabilities will just as readily push dangerous drugs toward the child as promote a makeup product or gym membership.

By my lights, asking the advertisers to partner with their own consumers—the parents who buy their products—to pressure the platforms to adopt better practices is the very least they can do. In just a couple of months, it will be time for the ~$40 billion Back-to-School season, and as brands vie for the K-12 parents who make those purchases, they owe it to those families to pressure the digital-age media companies to stop killing kids.