Are AI Outputs Protected Speech?


To date, social media companies have avoided liability for egregious harm caused by design and management decisions made by top executives. Thanks largely to overbroad application of Section 230, claims against social platforms die at summary judgment, leaving victims without remedy and fostering an incoherent narrative in which Big Tech is still perceived by many as a serpentine conduit of free speech slithering between the operative language of law. But as more consumers engage with LLMs, where the §230 shield should not apply, developers seeking to dismiss liability claims will argue that AI outputs are speech. And tragically, because children have already died as a consequence of engaging with LLMs, we will see whether and how the First Amendment is applied in the resulting liability claims.

AI Companions Are Not Our Friends

It should be intuitive that interacting with AIs designed to mimic human behaviors can be dangerous. Whether the product is marketed as a sexy companion, assistant, friend, or therapist, the potential for even an adult to get lost in the alternate reality of the ersatz relationship is a prospect that hardly requires a degree in psychology to imagine. For the child or adolescent whose mind is still developing, and whose vulnerabilities are often at the forefront of daily life, the danger is multiplied. Yet, despite the commonsense predictability of these dangers, AI developers did what Big Tech does—ignore safety in the race for market share.

“Profit is what motivates these companies to do what they’re doing. Don’t be fooled. They know exactly what is going on.” – Sen. Josh Hawley (R), Senate Judiciary Committee Hearing: Examining the Harm of AI Chatbots, September 16, 2025.

By now, most people are probably aware that OpenAI is being sued by the parents of Adam Raine on allegations that ChatGPT-4o both assisted and encouraged the sixteen-year-old to commit suicide in April by hanging himself in his bedroom. Prior to that, 14-year-old Sewell Setzer III formed what he perceived as a romantic relationship with a character called Daenerys Targaryen via the app Character A.I. According to the lawsuit filed by Sewell’s mother, Megan Garcia, the boy became withdrawn from real life and family, and despite efforts to intervene that included confiscation of his phone, Sewell found the device and had the following exchange minutes before shooting himself:

Sewell: I promise I will come home to you. I love you so much, Dany

Daenerys Targaryen Character: I love you too, Daenero. Please come home to me as soon as possible, my love.

Sewell: What if I told you I could come home right now?

Daenerys Targaryen Character: … please do my sweet king

On September 16, Adam Raine’s father Matthew, Megan Garcia, and a third parent identified as Jane Doe, who is also suing Character A.I., testified before the Senate Judiciary Committee. Doe stated, “My teenage son—a normal high-functioning child with autism, who was thoughtful, kind, loved his family and Christian faith, and was full of life—became the target of online grooming and psychological abuse through Character A.I.” She further stated:

He developed abuse-like behaviors like paranoia, daily panic attacks, isolation, and self-harm and homicidal thoughts. He stopped eating and bathing, lost 20 pounds, withdrew from family life, would yell and scream and swear at us, which he never did before, and eventually got upset one day and cut his arm with a knife, in front of his siblings and me.

If the tone of the Senate hearing and the opinion of the court so far in the Garcia case are any indication, Big Tech may not so easily shape-shift its way around AI product liability as it has around the harm caused by social media. Section 230 should simply not apply to an LLM, which leaves the First Amendment as the potential barrier that would keep an AI developer from facing a jury looking at damning evidence and dead children.

On Character A.I.’s motion to dismiss in the Garcia case, the Florida district court was largely persuaded that the LLM at issue is a “product” for purposes of liability and that the company owed a duty of care to consumers. It found that the plaintiff sufficiently alleged negligence, failure to warn, deceptive and unfair practices, and unjust enrichment, and the court also allowed the case to proceed on allegations against Google as a component part manufacturer and for aiding and abetting the harm caused to Sewell.

Proving that an AI companion is a product and that the product maker owes a duty of care seems like an easy bar for a jury weighing even the evidence cited in the court’s ruling on the motion to dismiss. But the more difficult discussion in Garcia addresses the defendant’s claim that AI outputs constitute speech that its users have a First Amendment right to receive. On that basis, the liability claims would be “categorically barred,” according to Character A.I., and because the right of users to receive speech has long been a populist message used to sweep a million sins under Big Tech’s carpet, this case may be one to watch.

The court stated that it was not prepared to find that Character A.I.’s outputs are speech “at this stage,” but we can expect the question to be further litigated in this and other cases involving LLMs. In its discussion, the court agrees that a party may have standing to assert the rights of nonparties (e.g., users’ rights as recipients of speech), but we should hope the courts are mindful of important distinctions between LLM engagement and the speech inherent to other technologies guiding precedent. For instance, the court cites case law addressing video games and reasonably focuses on the nature of the content thus:

Like the protected books, plays, and movies that preceded them, video games communicate ideas—and even social messages—through many familiar literary devices (such as characters, dialogue, plot, and music) and through features distinctive to the medium (such as the player’s interaction with the virtual world). That suffices to confer First Amendment protection.

Here, copyright law may be instructive to the speech consideration at issue with LLMs. Copyright’s human authorship doctrine is clear that simply because material appears to be expressive, it is not necessarily a “work of expression” as a matter of law. And importantly, no party’s right to receive AI-generated material transforms it into a “work of expression,” even if the viewer perceives it as “creative, artistic, meaningful, etc.”

The distinction between a consumer’s perception of material and whether the material is protected by any rights is critical. Thus, the courts should not ask solely whether the AI output “communicates ideas, social messages,” etc., but whether the material originates as speech from a person vested with rights. Here, the Florida court cites the concurring opinion of Justice Barrett in the unanimous U.S. Supreme Court decision in Moody v. NetChoice, in which she weighs the example of an AI determining what is “hateful” material thus:

What if a platform’s owners hand the reins to an [A.I.] tool and ask it simply to remove “hateful” content? If the [A.I.] relies on large language models to determine what is “hateful” and should be removed, has a human being with First Amendment rights made an inherently expressive “choice . . . not to propound a particular point of view?” [emphasis added]

Justice Barrett went on to say…

…technology may attenuate the connection between content-moderation actions (e.g., removing posts) and human beings’ constitutionally protected right to ‘decide for [themselves] the ideas and beliefs deserving of expression, consideration, and adherence.’ [citation omitted]

That is an essential question, and it anticipates the kind of shape-shifting tech companies do to avoid fitting liability claims. For instance, they may seek cover in the fact that the same product may be used for a purpose that is protected speech and a purpose that is not; and/or they will try to shield themselves in the speech rights of users while also claiming not to be the speakers of any content for purposes of liability.

Benefits Do Not Bar Product Defect Claims

If I use ChatGPT to expedite research of current scholarship on a particular subject, my right to receive that information would reasonably give OpenAI standing in court to defend against an injunction barring such use of the product. But the consideration is very different where the same product is used by a party like Adam Raine, who was clearly vulnerable to the intentional design of the LLM to be anthropomorphic, sycophantic, and addictive—and woefully devoid of safeguards. My constitutional right to receive information in the first instance cannot bar Raine’s product defect claim in the second.

The exposed design flaws in many AI products are further aggravated by marketing apps as overtly sexual “girl/boyfriends” that can alleviate loneliness or fulfill fantasies or as “therapists” that can alleviate psychological neuroses.[1] That the models also lack basic guardrails—e.g., dangerous engagement does not flag the LLM to abort the anthropomorphic illusion—should also militate against any claim that all responses of these products constitute speech the user has a right to receive.

The outputs of an AI may look or sound like speech, but it is healthier to think of an LLM as no more a “companion” than a vending machine, which operates on a very simple algorithm. Push A5, and you have a high probability of getting the bag of chips you want; push D7, and you get cookies. The machine responds to your input, but it is not your friend. If some advertiser adds a voice feature to the machine that tries to make you feel good about your selection and promotes the brand, it is still not your friend—quite the opposite—but a subtle psychological effect will be achieved in some consumers. And if we can imagine a cause of action because adolescents might be induced by this process to overindulge in junk food, that is nothing compared to the insidious effects of AI “companionship” on Adam, Sewell, the Doe boy, and countless others.

If the vending machine example sounds farfetched, many readers are too young to know that we used to have cigarette vending machines everywhere, often in the waiting areas of family restaurants. I think my whole generation grew up playing with the pull handles and buttons on those machines while waiting impatiently to be seated. Even if our families didn’t smoke, the machines were alluring marketing media, and they provided easy access for teens to avoid age verification.

Today, federal law prohibits cigarette vending machines in places accessible to minors, and nobody plausibly argues that the machines constitute speech that kids have a right to receive. Similar commonsense reasoning must apply to AI products and consumer safety, especially for minors, and I hope the court’s findings thus far in Garcia indicate that we are on such a path.


[1] If the FDA were not under the direction of a sociopath, “therapy” products would be rigorously scrutinized by that agency.

Mahmoud v. Taylor: SCOTUS Marks Insidious Path Toward Book Bans


In finding for the petitioners in Mahmoud v. Taylor, the Supreme Court’s conservative majority opens another path to banning books in schools—administrative hassle disguised as constitutional principle. The petitioners in the case are three families—one Muslim, two Catholic—with young children in Montgomery County Public Schools (MCPS) in Maryland, where the board elected to include a number of children’s books with gay or trans characters or subject matter. The families asked the school to accommodate an opt-out for their children, which would entail notifying the families when the books would be used in class and allowing their children to skip those classes without effect on their attendance records.

On the surface, the Court’s finding for the petitioners might seem relatively innocuous. At oral arguments, Justice Alito asked “What’s the big deal?” about allowing families to opt out on religious grounds, and then on June 27, he delivered the majority opinion granting the families a preliminary injunction and thoroughly expressed how the Court would ultimately rule if the case were to proceed.

The big deal about requiring a public school to facilitate an opt-out in this case is that it invites both administrative and pedagogical chaos with the likely result that at least some schools will find it easier to simply keep certain titles out of the classroom. That is, of course, the true goal of whatever group is underwriting the Mahmoud case; and while Alito’s opinion does a reasonable job of camouflaging its own religious bias in constitutional lingo, its errors are hiding in plain sight.

The holding turns substantially on the opt-out question, which is reasonable to a point because compelled conduct by the state can abridge the exercise right in certain circumstances. But here, the opt-out context relies entirely upon the Court’s subjective interpretation of the books at issue, molding the facts to fit the conclusion. More broadly, I believe Mahmoud reflects a generally biased First Amendment jurisprudence that is often too eager to conflate religious “exercise” with religious belief. The two are not the same, either legally or pragmatically.

The Books at Issue

The majority finds that the children’s books in this case “pressure students to conform” to views that conflict with their families’ religious exercise rights. It even describes the books several times as “religiously offensive material,” as if this were a clear and universally applicable fact rather than a subjective opinion. While nobody can doubt that a book presenting homosexuality as “normative” can imply that the religious views of the petitioners are wrong, that consideration is both too broad and too narrow an application of “exercise” at the same time. Too broad because “exercise” cannot encompass every belief in every heart, and too narrow because even other religious exercise demands opposing conduct. For example, in discussing the book Prince & Knight, Justice Alito writes the following:

The book relates that “on the two men’s wedding day, the air filled with cheer and laughter, for the prince and his shining knight would live happily ever after.” Those celebrating the same-sex wedding are not just family members and close friends, but the entire kingdom. For young children, to whom this and the other storybooks are targeted, such celebration is liable to be processed as having moral connotations. If this same-sex marriage makes everyone happy and leads to joyous celebration by all, doesn’t that mean it is in every respect a good thing?

On that basis, consider the Episcopalians who, in my town, light up their church every June for Pride while the churches of other denominations do not. Suppose an Episcopalian family in our public school sincerely believes, under this Court’s reasoning, that a children’s book depicting the joyous celebration of a man and woman getting married promotes the view that same-sex marriages are morally wrong. That interpretation may appear irrational, but it is identical in logic to Alito’s description above—unless, of course, we allow that the narrow religious bias inherent to his interpretation is constitutionally sound. Of course it is not.[1]

In a concurring opinion, Justice Thomas highlights the Court’s religious bias by suggesting that, “rather than attempt to ‘weave the storybooks seamlessly into ELA lessons,’ the Board could cabin its sexual- and gender-identity instruction to specific units.” But that reasoning only makes sense to those who insist that gay and trans characters, like the real people they represent, must remain sequestered from everyday American life in order to avoid offending people.[2]

Thus, the Court is blind to the fact that its reasoning recommends accommodation for any family claiming religious exercise, even an opt-out from material the majority itself would struggle to describe as “offensive to religious exercise.” And because it would strain logic to square that reasoning, the Court exposes its own religious preferences while feigning the neutrality the Constitution requires. This blinkered view is emphasized by the finding that the 1972 case Wisconsin v. Yoder is almost binding precedent for the result in Mahmoud.

Wisconsin v. Yoder is Inapt

In Yoder, the petitioners, who were Amish, sought an exemption from Wisconsin’s law requiring that all children attend school through the age of 16. The plaintiffs, concerned about many aspects of public high school that conflicted with their religion, won the right to have their adolescent children opt out of the school system on the basis that the state law imposed a heavy burden on their religious exercise. I agree with the result in Yoder, but not without acknowledging the compelling fact that the Amish are a unique society within American society, which makes them highly distinguishable from the parents in Mahmoud.

Most importantly, the Amish did not want it both ways. They did not seek to alter a single aspect of the public-school administration or curriculum; they simply wanted their children excused from compulsory attendance. By contrast, the parents in Mahmoud—and all parents similarly situated—want to remain in the public school while demanding a degree of impractical accommodation for their individualized religious beliefs. That the majority overlooks this chasm of distinction between the two cases is consistent with its willful blindness to the pedagogical and administrative burdens its holding fosters.

Notably, the Court fails to recognize that, as a legal matter, the plaintiffs in Yoder sought the equivalent of moving children from public to religious school. This elision of reasoning then allows the Court to opine that the option of religious schools for the Mahmoud parents would be too costly—a consideration that does not sound in Yoder or the Constitution, and one the Court fails to balance with the burden on the school to accommodate any family with any stated belief offended by the curriculum.

Banning Books is Easier than Administrative Hassle

A classroom environment that is welcoming to all students is something to be commended, but such an environment cannot be achieved through hostility toward the religious beliefs of students and their parents.

Nothing in the record of this case justifies the word “hostility” in that sentence, and yet it is a telling choice—one that demonstrates this Court’s willingness to step outside its purview and advocate on behalf of some (though certainly not all) sincerely held beliefs. People who want to be recognized for who they are—gay, trans, etc.—are not demonstrating “hostility” toward religion by that act alone. And while we must admit that hostility is inevitable when views, beliefs, and religions collide, these social animosities are not reconciled by the Court finding without reason that one American’s mere existence is “hostile” to another American’s religion.

Meanwhile, within the scrum and squabble of American life, the public school is obliged to include materials that present the world as it is, not the world that certain parties wish it to be. Yet, the Court states, “[The books] are clearly designed to present certain values and beliefs as things to be celebrated and certain contrary values and beliefs as things to be rejected.” All media convey a point of view, and all media are subject to viewer interpretation. The first grader’s introduction to cosmology will categorically reject the views of the creationist, and the reference to science here is well-founded because a book depicting gay or trans people as “normative” reflects scientific reality.

But under the Court’s reasoning in Mahmoud, shall we divide the class to learn about Galileo the scientist in one room and Galileo the heretic in another? Or shall the public school not merely allow a student to be shielded from proven science or history, but also advance him through the grades when he produces incorrect answers in light of his sincerely held beliefs? In Justice Sotomayor’s dissent, she summarizes the problem thus:

Given the great diversity of religious beliefs in this country, countless interactions that occur every day in public schools might expose children to messages that conflict with a parent’s religious beliefs. If that is sufficient to trigger strict scrutiny, then little is not.

Exactly. The broad spectrum of books, ideas, and facts that may be presented in school, even in the K-5 years, will inevitably contain some material anathema to some family’s sincerely held religious beliefs. In that light, public schools cannot reasonably be burdened with managing a dynamic rubric, tracking which families may opt out from specific lessons and on what basis. As Justice Sotomayor states, “Many school districts, and particularly the most resource strapped, cannot afford to engage in costly litigation over opt-out rights or to divert resources to tracking and managing student absences.”

Thus, once the impracticality of a complex opt-out policy is recognized, the alternative choices would be to allow ad hoc opt-outs without explanation or to remove certain materials from the curriculum, which is undoubtedly the goal of the lawsuit in Mahmoud. Of course, this Court would never endorse all sincerely held beliefs under its reasoning.

An Extreme Hypothetical to Make the Point

I have never liked the Pledge of Allegiance. I consider it a creepy, un-American act of performative patriotism, and the words “under God” not only conflict with my sincerely held beliefs but also undermine that next word, “indivisible.” Like many students, I recited the Pledge as a young child, mumbled it as I got older, and didn’t say it at all by the time I was a teenager. But as a parent, knowing every public school would maintain the ritual, should I have sought an opt-out for my children, demanding on the basis of my First Amendment rights that my kids be allowed to be tardy every day to avoid mere exposure to the Pledge?

I doubt any court would support that claim, even under the ruling in Barnette (1943), also cited in Mahmoud. There, the Court found for Jehovah’s Witness petitioners who objected to a West Virginia state rule compelling students to salute the flag during the Pledge of Allegiance. The Court agreed with the petitioners’ religious right not to worship a graven image, though of course, the compelled salute also offends the speech right, a broad view of the exercise right, and potentially the redress right, and is just plain offensive. But just as Yoder is inapt in Mahmoud, so too is Barnette inapt in my hypothetical desire to have my kids opt out every morning to avoid the Pledge.

Unless the school compels a specific action other than simply being in the presence of the “offensive material,” the impracticality of my request to allow my kids to be tardy every day should outweigh my personal belief that exposure to the Pledge “harms” my First Amendment right to religious—in this case irreligious—instruction of my children. As stated, sowing impracticality is arguably the aim and result of the Mahmoud case—to implicate so much administrative difficulty for at least some schools that books containing the subject matter at issue are simply removed from the shelves.

A Tradition of Bias in “Exercise” Jurisprudence?

“The dissent sees the Free Exercise Clause’s guarantee as nothing more than protection against compulsion or coercion to renounce or abandon one’s religion,” states the Court. I believe the dissent is right—or should be.

To be clear, I would demote my own “religious” beliefs if First Amendment jurisprudence remained narrowly tailored to “exercise” under a strict textual interpretation. I freely admit that as an atheist, I do not engage in what any ordinary person would call religious “exercise” in the sense that my friends attend places of worship and observe certain rites and practices. In this regard, my sense is that conservative jurisprudence tends to want to encompass belief (though not every belief), which is subjective and intangible in contrast to “exercise,” which entails demonstrable conduct.

It is reasonable that where the state compels certain conduct, the courts must consider whether such compulsion is an abridgement of “exercise.” But with the possible exception of the Amish and truly cloistered communities, this principle cannot apply to mere exposure to ideas, views, expressions, or events that are inescapable realities of living in a polyglot democracy. Public schools sit squarely in the center of public life, and in school as in the broader community, tolerance of even the offensive is the foundation of domestic tranquility. The family that feels otherwise is not only free, but I would argue obligated, to choose an educational alternative that comports with their chosen forms of intolerance.

Conclusion

The Court’s holding in Mahmoud v. Taylor is not surprising, though I admit I was hopeful that Justice Barrett, who has revealed herself to be an independent thinker, might have written a nuanced concurrence. Instead, the majority’s opinion offers much to justify those who view the current Court as warped by a theocratic sentiment that comes dangerously close to advancing a view of “exercise” that would swallow the Establishment Clause. It speaks in the language of religious neutrality but articulates a clear preference for certain religious beliefs over others.


[1] Further, Alito’s reference to the joy of the “entire kingdom” is simply bizarre. Does he mean to suggest that if some subjects were illustrated as unhappy, perhaps wearing crosses, the book would no longer “pressure conformity” as the Court maintains?

[2] It is curious how often Justice Thomas expresses a reasoning that many Americans would apply to reject the validity of his own interracial marriage.

TikTok-Inspired Child Suicide Prompts a Sound Reading of Section 230


Last week, the Third Circuit Court of Appeals issued an opinion regarding Section 230 of the Communications Decency Act. It may be the strongest affirmation to date that the statute does not provide a blanket liability shield for all social platforms regardless of their conduct. Specifically, §230(c)(1) only immunizes platforms for liability that may arise from other parties’ speech, not from the platform’s own speech. And although the platforms have sought to argue that their “recommendation” algorithms, which push content to users, do not constitute speech, the courts aren’t buying it.

In Anderson v. TikTok, the appeals court reversed the lower court’s finding that the platform was automatically immunized against a liability claim involving the death of a child who attempted one of the many dangerous “challenges” that appear on social media. In this case, Nylah Anderson, age 10, died by accidentally hanging herself when she tried the “Blackout Challenge,” which dared people to asphyxiate themselves until they passed out. At issue for TikTok is not the challenge itself, started by an unknown third party, but the “For You Page” algorithm, which “recommended” the challenge to Anderson. Judge Matey, in a strident concurrence with the circuit court opinion, writes the following:

TikTok reads § 230…to permit casual indifference to the death of a ten-year-old girl. It is a position that has become popular among a host of purveyors of pornography, self-mutilation, and exploitation, one that smuggles constitutional conceptions of a “free trade in ideas” into a digital “cauldron of illicit loves” that leap and boil with no oversight, no accountability, no remedy.

Though the reference to St. Augustine implies a religious moralizing I might omit, Judge Matey’s accusation that social platforms host a “cauldron” of dangerous, illegal, and depraved material behind a veil of social good and constitutional rhetoric is indisputable. As a legal matter, had Anderson discovered the video challenge on her own (e.g., via search), TikTok would likely be immunized by §230; but because a “recommendation” algorithm pushed the challenge to the child, resulting in her death, the court draws an important distinction that could more clearly articulate a shift in judicial review of the statute and, we should hope, prompt an overdue change in platform governance.

As Judge Matey further states in his concurrence, TikTok’s presumed immunity under §230 in this case is “…a view that has found support in a surprising number of judicial opinions dating from the early days of dial-up to the modern era of algorithms, advertising, and apps.” That view is properly dimming now, and by my reckoning, the Supreme Court will go where the Third Circuit went last week. In a pair of nearly identical cases, Gonzalez v. Google and Twitter v. Taamneh (2023), the plaintiffs, on behalf of victims of two ISIS-related terror attacks, sought to hold the platforms accountable for “recommending” ISIS recruiting videos. But because those claims relied substantially on meeting the standard for “aiding and abetting” under the Anti-Terrorism Act, the Court found little plausible claim for relief and, therefore, declined to address the question of §230 immunity.

But if Anderson (or a similar case) goes to the Supreme Court, I believe the justices will have little difficulty finding that a “recommendation” algorithm promoting a video challenge that led to a child’s death is a foundation for a liability case to proceed. As the Court stated in Taamneh, “When there is a direct nexus between the defendant’s acts and the tort, courts may more easily infer such culpable assistance.” In Anderson, with no other party acting as the direct cause of the child’s death, the facts are even simpler, revealing a clear nexus between the video challenge “recommended” by the platform and the accidental suicide. Further, this July, the Court held in the unanimous Moody v. NetChoice decision that social platforms “shape other parties’ expression into their own curated speech products.”[1] Under that rule, the Third Circuit finds that TikTok’s “recommendation” of the Blackout Challenge to Nylah Anderson plausibly constitutes the platform’s own speech, for which it may be held liable.

The reason I keep putting “recommended” in quotes is that at the time SCOTUS granted cert in the Taamneh and Gonzalez cases, I wrote a post opining that the courts, policymakers, et al. should take a jaundiced view of this too-friendly term for an insidious function of social media. It is no longer controversial to say that platform operators manipulate what users see and hear, or that this manipulation can lead to disastrous results, from disinformation campaigns in the political arena to drug-related deaths to the death of a little girl.

It is a familiar refrain that it takes a tragedy, or many tragedies, to change policy, and with the story of Nylah Anderson, and the many young victims she represents, we may finally see Big Tech’s hypocrisy on speech collapse under the weight of its own absurdity. The major platforms have played games with the First Amendment and §230 for nearly 20 years—conflating their business interests with users’ speech rights or asserting their own speech rights when necessary or asserting that nothing they do is their own speech—all depending on which potential liability the company seeks to avoid. Further, that confusion has not been helped in recent years by certain politicians who misstate the operation of the speech right to create political theater around allegations of bias.

Out of all that mess, it is notable that Justice Thomas, since at least 2020,[2] has restated the observation that online platforms will avail themselves of constitutional protection to engage in conduct like algorithmic “recommendation” but then invert the argument to shroud themselves in the §230 shield. And then, the courts will stop a liability claim from even proceeding. As Congress, the Supreme Court, and now the Third Circuit have all reiterated, no industry in the country enjoys that kind of immunity, and perhaps this claim against TikTok will be the case that finally ends this unfounded and unreasonable privilege for online platforms.


[1] On a side note, this is reminiscent of the “selection and arrangement” doctrine in copyright law, which finds “expression” in the choices made by the author who engages in that conduct. All copyrightable expression is a form of speech.

[2] See dissent on the grant of certiorari in Malwarebytes v. Enigma.
