In Gonzalez v. Google, SCOTUS Should Look Beyond the Term “Recommendations”

In October, the Supreme Court granted cert in two cases that may limit the immunity afforded to internet platforms under Section 230 of the Communications Decency Act. Both Gonzalez v. Google and Twitter v. Taamneh arise from plaintiffs seeking to hold platforms accountable for “targeted recommendations” of material associated with acts of international terrorism, but in this post, I will focus only on the former case. Here’s a slightly truncated background as stated in the Gonzalez petition:

In November 2015 Nohemi Gonzalez, a 23-year-old U.S. citizen studying in Paris, France, was murdered when three ISIS terrorists fired into a crowd of diners at La Belle Equipe bistro. . . . Several of Ms. Gonzalez’s relatives, as well as her estate, subsequently brought this action against Google, . . . The plaintiffs alleged that Google, through YouTube, had provided material assistance to, and had aided and abetted, ISIS, conduct forbidden and made actionable by the Anti-Terrorism Act.

Doubtless, the particulars of these cases raise complex questions of liability that even many critics of 230’s too-broadly applied immunity may have difficulty defending on the merits. Google’s response, for instance, states that the “ATA claims in this case have produced a procedural morass.” Nevertheless, the Court agreed to review, despite having denied every prior Section 230 petition, leading some to note that Justice Thomas signaled a strong interest in Section 230 immunity in a statement respecting the denial of certiorari in the 2020 case Malwarebytes v. Enigma Software Group. There, Thomas wrote:

Adopting the too-common practice of reading extra immunity into statutes where it does not belong, courts have relied on policy and purpose arguments to grant sweeping protection to Internet platforms. . . . Without the benefit of briefing on the merits, we need not decide today the correct interpretation of §230. But in an appropriate case, it behooves us to do so.

I will leave it to others to discuss whether Gonzalez is the right vehicle to address the most chronic harms caused by overbroad readings of 230—or to speculate exactly what this Supreme Court is looking to achieve in light of the politicized narratives and misstatements that have attached to public discussion about the statute.

Until the Trump administration turned the White House into the Ministry of Misinformation, Section 230 was not mainstream news, and one consequence of those events is that the provision has been misrepresented as a content (i.e., political) neutrality law, which it is not. Though, as discussed in the post linked above, the neutrality rhetoric is a misconception Big Tech itself promoted years before Members of Congress started alleging “anti-conservative” bias and conflating that talking point with threats to abolish Section 230.

But I want to focus on the narrow question presented in the Gonzalez petition, which is whether “targeted recommendations” made by interactive computer services are properly immunized. Because whatever the outcome of this case—and if there is any chance that Congress might effectively amend Section 230—both the Court and lawmakers should reject the too-friendly term “recommendation” to describe how algorithms on major platforms are designed to attract and retain user attention.

It is now a matter of record that algorithms trained to adapt to user behavior and feed what may be our worst instincts produce an often-toxic phenomenon that is not adequately described by the word “recommendation.” Interaction between the social platform and the human user is not comparable to reading a book review, hearing a friend’s suggestion to see a show, or even having Netflix indicate that if you liked movie A, you might like movie B. Big Tech invokes these positive social transactions to describe its systems and models in the same way the industry invokes other socially constructive words like “share,” “connect,” and “democratize” while papering over hazards like IP theft, harassment, and the wildfire spread of misinformation.

Google’s Response Begs for Scrutiny

Notably, in Google’s response asking the Court to deny cert in Gonzalez, it practically admits to the insidious nature of algorithmic “recommendation” when it emphasizes that courts have held that search engines are protected by 230—and that search is comparable to “recommendation.” Here, Google inadvertently highlights the reason search sucks now—because rather than return results based on a reasonably objective definition of “relevance,” the Google search algorithm has been tweaked to return results “of likely interest” to the user based on what Google has learned about them.

I doubt I am alone in finding that search results are consistently less useful than they were just a few years ago—even to the extent that the most logical result (e.g., an entity’s website) appears on page two or three, where it used to at least be the first or second item below the top three paid placements. But on a darker note, Google’s brief practically acknowledges that if the user is an anti-vaxxer or an election denier or believer in some other conspiracy nonsense, they will be served search results likely to reinforce those false narratives. Whatever we want to call this phenomenon and its psychological influence, it is too quaint by some margin to call it “recommendation.”

But even if Google Search still functions in a way that is properly immunized by Section 230 (and I would question that as the technology changes), we confront a whole other level of insidious power to influence with the combination of Google or Facebook’s algorithms and the capacity of video to tap into emotions—especially strong emotions like anger and fear. The notion that the fundamental design of YouTube does not foster a symbiotic relationship between the potential terrorist and the recruiting video is barely plausible. But for sure, it is a phenomenon Congress did not consider in 1996 when it adopted Section 230.

Argus Is Allegedly Blind

When it comes to marketing, Google et al. boast the capacity to know what a user is going to buy, how she’ll vote, or what she’ll order for dinner—even minutes before she knows these things herself. But when the conversation turns to liability, these same companies suddenly cannot know much of anything. While Google is probably correct that there are several complicating aspects in the Gonzalez complaint, it also downplays the capacity of a platform like YouTube to convert latent emotions into dangerous action.

Whether that action is joining ISIS and murdering tourists or joining a mob and attacking the U.S. Capitol or breaking into the home of the Speaker and attacking her spouse, I think we have sufficient evidence to conclude that insane narratives are running amok and driving people to extreme behavior with deadly consequences. Google et al. may not bear direct responsibility for these events—surely, terrorism existed long before the internet—but neither are these platforms mere hapless conduits incidentally fueling the fire. And again, Google almost acknowledges this in its reply brief.

“…since the 2015 Paris attack, YouTube has overhauled its terrorism policies, as one of petitioners’ sources recognizes,” the brief states. Oddly, Google cites a WSJ story reporting that, despite changes by the platform, YouTube still “Drives People to the Internet’s Darkest Corners.” More to the point, if YouTube attempted to change its algorithm and/or its policies in response to the Paris attacks, this suggests that a nexus does exist between platform “recommendation” and videos that are likely to motivate violent action. This level of interaction between user and machine, which serves the platform’s interest more than it does the public interest, was neither envisioned nor discussed at the time 230 was adopted.

Circa 1996, the analogies were limited to human publishers who make decisions about what to disseminate, cut, or edit. But those points of reference are woefully incomplete for understanding contemporary data mining and the manner in which algorithms precipitate real-world events. Back then, we were talking about this stuff with the expectation that the network might recognize that you’re in the market for a toaster and show you some ads for toasters. But when toaster shopping feeds an advanced algorithm capable of intuiting that you might be interested in all the videos that will “prove” how the Jews are running the world or that Yoga is Satan worship, that is a very different creature than a “recommendation” machine.

So, as the Court considers whether “targeted recommendations” are properly immunized by Section 230, we should hope that it recognizes how tepid that term is for describing the state of the technology, which bears no resemblance to what Congress understood nearly thirty years ago. Whatever the proper term should be, it is implausible that Congress intended to provide blanket immunity for a business model that, even occasionally, fuels riots, terrorism, harassment, nonconsensual pornography, rampant misinformation, and even genocide. Surely, these cannot be acceptable byproducts of the most ambitious or prosaic uses of the internet.

Professor Citron Proposes Civil Remedies for Violations of Intimate Privacy

At a panel hosted by The Reykjavik Dialogue,[1] during a discussion about law enforcement, justice, and sex discrimination, Mary Anne Franks, co-founder of the Cyber Civil Rights Initiative, noted that when her organization asked perpetrators who had engaged in revenge porn what would have stopped them from doing it, the answer was almost universally, “If I thought I could go to jail for it.”

The act of distributing intimate, private images via the internet without permission of the persons depicted is a crime—one that causes ongoing harm to victims, including harassment and violence, destruction of interpersonal relationships, loss of employment opportunities, psychological trauma, and suicide. And thanks substantially to the efforts of Franks and her colleague at CCRI, Danielle Keats Citron, nearly every state has criminalized the act of nonconsensual distribution of intimate images; and a federal bill to do likewise, the SHIELD Act, passed the House in March as part of H.R. 1620.

But while these laws pave the way for prosecution of the individuals who engage in this conduct, they do nothing about removing these violations of intimate privacy from the websites hosting the material. And to make matters more complicated, “deepfake” technologies make it relatively easy to depict just about anyone in intimate or sexually explicit material for which they were never actually photographed.

Citron Proposes Privacy Injunctions

In a new paper that addresses the nonconsensual distribution of both real and manufactured images, Citron proposes two interdependent legal mechanisms to overcome the hurdles to removing this kind of content from the internet, and she also discusses the First Amendment questions raised as both a constitutional and a cultural matter.

First, Citron argues that courts must be empowered with “clear legislative permission” to provide plaintiffs with injunctive relief by ordering sites “to remove, delete, or otherwise make unavailable intimate images, real or fake, hosted without written permission.” One might think this is common sense, or simply a matter of basic decency, but court orders to remove material of any kind have been assiduously opposed by internet platforms large and small, and with considerable legal and PR support from “digital rights” activists like the Electronic Frontier Foundation. (See posts here and here about Google v. Equustek and Hassell v. Bird.)

The rationale usually argued in the blogosphere and the courts for refusal to remove any content is the First Amendment—a fallacy that now roils the public debate—but the legal foundation that has given the platforms the swagger to distort the speech and press rights has been the courts’ over-broad interpretation of Section 230 of the Communications Decency Act as a blanket immunity. Not only have platforms been shielded against being named parties to civil litigation, but 230 has been invoked as the reason to shield them even from injunctions that do nothing more than order the removal of harmful material. Naturally, when a web company cannot be held liable for anything, it’s very easy for its operators to call all content “speech” and tell the public that all platforms are inherently engines of free expression.

Thus, in order for the above-mentioned legislative permission to be effective, Citron argues, as she and Franks have in earlier papers, that “Congress should amend Section 230 to make clear that platforms and search engines can be sued in cases seeking injunctive relief and attorney’s fees related to the removal of intimate images hosted without written consent.”

Citron acknowledges that the solution is not perfect, particularly because litigation directed at one incident on one platform does not address the likelihood that intimate images will be distributed across multiple sites; but she writes, “Victims need to know that society recognizes the damage to the dignity and intimate privacy of victims, that law can help mitigate the damage, that sites are not law-free zones, and that lawyers will represent them.”

If that sounds like Citron’s proposed remedies are more symbolic than remedial, I will echo her comparison to civil rights legislation and argue that we should not underestimate the capacity of even symbolic law to effect widespread remedies by fostering cultural and behavioral change. Presumably, most people do believe the act of distributing intimate images without permission is wrong, whether for revenge or any other motive. So, it helps when the law says it’s wrong, too. But at the same time, Citron addresses a broader cultural phenomenon in which Americans in particular struggle with our brand of the speech right and the distinction between access to information and prurient curiosity.

As a constitutional question, when a law intersects rights like those enumerated in the First Amendment, it must be held to the standard known as strict scrutiny. This means that a statute must serve a compelling government interest and must be narrowly tailored to achieve a purpose that cannot be achieved through less restrictive means. Here, Citron notes that the state laws criminalizing the nonconsensual distribution of intimate images have already withstood constitutional challenges in Vermont, Illinois, and Minnesota, but she also discusses that gray area where the public’s right to know is often too easily conflated with general interest. “By my lights, there can be a vast difference between learning about a public official’s intimate information and seeing photographs or videos documenting it. That distinction is worth careful consideration,” Citron writes.

Agreed. Specifically, did the American public have a right to know that Rep. Katie Hill was intimately involved with a member of her staff and, allegedly, using marijuana? Yes. Even though I personally do not care much what an elected official does in her private life unless it directly intersects with the official role, those allegations are certainly news that voters have a right to know. But I agree with Citron that there is a moral line—I would say a chasm—between a news report about Hill’s conduct and the publication of her intimate images (albeit semi-redacted) on the site RedState.

Hill sued RedState owner Salem Media,[2] and the publisher was granted a motion to dismiss the complaint under California’s anti-SLAPP law,[3] with the court finding, in Citron’s words, that “the photos shed light on Hill’s fitness for office.” The hell they did. How the information about Hill’s conduct sheds light on her fitness for office is up to the voters, but the leaked photos were nothing more than RedState’s opportunity to earn revenue by pandering to the worst impulses of the electorate, which increasingly cannot distinguish between political discourse and tribal brutality. RedState’s publication of the photos is barely distinguishable from revenge porn disguised as political reportage.[4] And to add insult to injury, Hill had to pay $200,000 for Salem’s legal fees.

As Citron notes, “Most cases involving the nonconsensual disclosure of intimate images will not present close calls about the boundaries of the public’s legitimate interest.” And, of course, this is correct. Most individuals who engage in this kind of behavior are not even propaganda mongers, let alone journalists. But I do suspect that in the techbro culture of the internet, the blurry lines we see at a RedState regarding Hill, or at a Gawker regarding Hulk Hogan, imply to the other bros who violate intimate privacy that what they are doing is not criminal. It is. And it is time for the laws to catch up to that reality.


[1] Renewing Activism to End Violence Against Women www.rekjavikdialogue.is

[2] Hill’s counsel is Carrie Goldberg, a leading specialist in this area.

[3] Strategic Lawsuit Against Public Participation.

[4] To be clear, I would say the same thing about the publication of similar photos of Reps. Boebert or Greene for whom I have nothing but contempt.

Can We Hope to Sensibly Reform Section 230?

In a paper published in 2020,[1] scholars Danielle Keats Citron and Mary Anne Franks advocate a relatively modest and elegant approach to amending Section 230 of the Communications Decency Act of 1996—changes that would directly help the statute’s unintended victims—but it is difficult to imagine how any nuanced consideration of the 230 issue will make headway in the current political climate.

At one extreme, the Former Republican Party (FRP) has amped up “Repeal 230” into a buzzy talking point with no practical or legal merit whatsoever; while shouting from the other side of the vortex is the internet industry and its network of supposedly progressive groups, who insist that the status quo of 230 is the keystone in the entire internet ecosystem. One behavior these seemingly opposite forces have in common is that both have exploited the misconception that Section 230 has something to do with viewpoint neutrality. It does not. Neither by the letter nor the spirit of the law.

To recap, there are two main parts to Section 230 under the “Good Samaritan” clause. The first states that online service providers will not be considered “publishers” of material provided by other parties. So, whether you or I or the NYT posts something on Facebook that is potentially harmful, and also unprotected speech (e.g. defamation), Facebook is shielded from potential liability resulting from that material. The second part states that when a platform engages in moderation and removes “objectionable material,” this does not render the platform a potentially liable “publisher” either. And it does not matter whether “objectionable material” comprises illegal content (e.g. child porn) or simply material the platform proscribes according to its own terms of service.

Nothing in the 230 statute states, or even implies, that service providers are limited by the speech right—indeed, as private entities, it is their First Amendment right to moderate as they wish—or that they are obligated to maintain viewpoint neutrality as a condition of the liability shield. That said, it was the platform operators themselves who promoted the false narrative that social media sites are the shiny new “engines of speech” right up until 2016, when “objectionable material” (mostly in the form of dangerous misinformation) steadily became the largest plank in the platform of what used to be the Republican party. Meanwhile, the real victims of Section 230’s unintended consequences may continue to be ignored amid the storm of insanity encircling this one fragment of cyber law.

Simply put, Section 230 is the reason why online platforms may not be held liable when their operators host, or even encourage and monetize, any of the following: nonconsensual pornography, child sexual abuse material (CSAM), libel and defamation, hazardous misinformation, organized hate groups, harassment, or incitements of violence. And while vested interests play rhetorical games with the allegedly blurry lines between speech and any of that material, Citron and Franks first advocate clarifying that ambiguity by striking the word information from part one of the statute and replacing it with the word speech. “The revision would put all parties in a Section 230 case on notice that the classification of content as speech is not a given, but a fact to be demonstrated,” states their paper.

Unlike “information,” protected “speech” has a legal definition rooted in case law, and at least some of the aforementioned categories of material would never qualify as speech under legal scrutiny, while others (e.g. hate speech) would be subject to review on a case-by-case basis. Perhaps most importantly, what this single word change likely accomplishes for, say, victims of harassment, is that it would more frequently induce a platform to remove harmful material, either voluntarily or by court order, rather than choose to litigate to try to prove that the harmful content is protected speech. As things stand, almost everything online is presumed to be speech. So, if a party uses any intermediary, from Twitter to a dating app, to cause even severe harm to another party, the intermediary is under no obligation to provide relief by removing the content. And most courts have held that 230 supports this position.

Under this one-word revision, if a platform knowingly continues to host allegedly actionable material, the platform voids its presumption of immunity, which does not mean it is necessarily liable for any harm. A complainant still bears the burden to prove the merits of a complaint just as in any other case, but the platform would not automatically be immunized at the summary judgment phase of a case. Meanwhile, the only form of relief many complainants ever want is removal of the harmful content, and not necessarily a damage award from a platform that otherwise does the right thing.

In that regard, if a platform unknowingly hosts potentially actionable content, as almost any platform inevitably does, Citron and Franks advocate another modification to 230, requiring that a platform demonstrate that it maintains a “reasonable,” ongoing practice of removing objectionable material upon notice or independent discovery of the problematic content.[2] “If adopted,” their paper states, “the question before the courts in a motion to dismiss on Section 230 grounds would be whether a defendant employed reasonable content moderation practices in the face of unlawful activity that manifestly causes harm to individuals.”

This reasonableness standard would presumably accomplish two things: first, it would provide the many platforms operating in good faith with the kind of liability protection intended by Section 230; and second, it would immediately void the liability shield for those platforms that intentionally operate as Bad Samaritans. Sites that purposely trade in libel and defamation, nonconsensual pornography, harassment (and quite possibly hate speech and incitements to violence) would no longer be able to duck behind the Vibranium shield they have been wielding to avoid being named parties in litigation. In many cases, this requirement to demonstrate a “reasonable” moderation policy would probably obliterate the business models of sites that intentionally profit from the misery of others, and I fail to see a downside in that outcome.

Of course, amending 230 requires an act of Congress, and there’s the rub. Not only will Silicon Valley throw its considerable resources at campaigns to leave the statute untouched until doomsday, but step one proposed by Citron and Franks—replacing information with speech—runs head-first into the existential crisis we currently face as a nation. Political speech is paradigmatically protected speech, arguably the most sacred of all forms of protected speech. But at present, one party has decided that its political speech shall embrace an insurrection of lies, outlandish conspiracy theory, and even violence against the very foundation on which the speech right itself is written. Whether we survive that paradox is a much bigger question than internet governance, but for the everyday victims of Section 230, it would be grand if we could address what is legitimately wrong with this law.

[1] “The Internet as Speech Machine and Other Myths Confounding Section 230 Reform,” University of Chicago Legal Forum (12/01/2021). https://legal-forum.uchicago.edu/publication/internet-speech-machine-and-other-myths-confounding-section-230-reform

[2] As the paper states, this proposal originates with Citron and colleague Benjamin Wittes.
