AI, Search, & Section 230

On May 18, the Supreme Court delivered opinions in Gonzalez v. Google and Twitter v. Taamneh, a pair of interrelated cases in which the plaintiffs sought to hold online platforms liable for hosting material meant to inspire acts of terrorism. Because the Court unanimously found in Taamneh that anti-terrorism law provided no basis for liability (and therefore no claim for relief), it declined to address the Section 230 question presented in Gonzalez: whether Google’s “recommendation” algorithm is sufficient grounds to find contributory liability for the inciting material being recommended.

Properly read, Section 230 shields online service providers (OSPs) from “publisher liability” but not from “distributor liability.” A distributor of allegedly harmful material may be liable when it knows, or has reason to know, the nature of the material and either affirmatively chooses to distribute it or willfully turns a blind eye to the potential harm and does nothing to stop it. Unfortunately, ever since Section 230 became law in 1996, the courts have generally read it as a blanket shield for any OSP distributing any kind of material, as long as the material was uploaded by a user of the site and not by the site operators.

The Gonzalez plaintiffs alleged that Google’s “recommendation” algorithm, designed to promote content based on the system’s interpretation of user behavior, played a crucial role in pushing ISIS propaganda toward the parties who eventually committed a mass shooting in Paris that resulted in the death of Nohemi Gonzalez. They argued that “targeted recommendations” are not properly shielded by Section 230, and to the extent one can read the tea leaves in oral arguments, justices as ideologically far apart as Thomas and Jackson may be sympathetic to this view.
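For readers who want the mechanism made concrete, here is a minimal sketch of the engagement-driven logic at the heart of that allegation. It is an illustrative toy written in Python; the function names, weights, and data are my own assumptions and bear no relation to Google’s actual system. The point it illustrates is simply that the platform, not the user, selects what surfaces next, based on inferred interest and with no judgment about the nature of the content:

# A toy "recommendation" loop: promote whatever topics a user has engaged
# with most. Purely illustrative; all names, weights, and data are
# hypothetical assumptions, not Google's system.
from collections import Counter

def build_interest_profile(watch_history):
    # Infer "interest" from behavior: weight each topic tag by watch time.
    profile = Counter()
    for video in watch_history:
        for tag in video["tags"]:
            profile[tag] += video["watch_seconds"]
    return profile

def recommend(candidates, profile, k=1):
    # Rank candidates by overlap with the inferred profile; surface the top k.
    def score(video):
        return sum(profile[tag] for tag in video["tags"])
    return sorted(candidates, key=score, reverse=True)[:k]

history = [{"tags": ["borderline-extremist"], "watch_seconds": 900}]
candidates = [
    {"title": "cooking tips", "tags": ["food"]},
    {"title": "recruiting propaganda", "tags": ["borderline-extremist"]},
]
profile = build_interest_profile(history)
print([v["title"] for v in recommend(candidates, profile)])
# -> ['recruiting propaganda']  (the system amplifies prior engagement)

Nothing in a loop like this distinguishes a recipe from recruitment material; that indifference, combined with amplification, is what the plaintiffs characterized as the platform’s own conduct rather than mere hosting.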

In the “strange bedfellows” department, the amicus brief filed in Gonzalez by Senator Hawley echoes many of the same legal arguments as the brief filed by the Cyber Civil Rights Initiative. Also, Senators Hawley and Blumenthal are, at least publicly, in sync on the need to correct the errors in Section 230. “Reform is coming,” Sen. Blumenthal declared in March. All of which is to say that there appears to be both bipartisan and multi-stakeholder consensus building around the idea that platforms can and should be held accountable for promoting harmful material.

Does AI-Enhanced Search Imply Liability?

Notably, one prong of Google’s defense in Gonzalez was that “recommendation” is analogous to search and that delivering search results cannot rise to the level of contributory liability. Whether the Court would agree with this comparison under full examination in a viable case remains an open question. But supposing the Court would not have sided with Google, what might it make of Google’s new Search Generative Experience (SGE)? Still in a trial phase for users who choose to enable it, the AI-driven SGE could become the new mode of search, or (if it totally sucks) could tank Google’s core business. As James Vincent writes for The Verge:

… it’s the dynamics of AI — producing cheap content based on others’ work — that is underwriting this change, and if Google goes ahead with its current AI search experience, the effects would be difficult to predict. Potentially, it would damage whole swathes of the web that most of us find useful — from product reviews to recipe blogs, hobbyist homepages, news outlets, and wikis. Sites could protect themselves by locking down entry and charging for access, but this would also be a huge reordering of the web’s economy. In the end, Google might kill the ecosystem that created its value, or change it so irrevocably that its own existence is threatened. 

Hard to predict for sure, and I will not make the attempt. There are, of course, many potential hazards with AI-enhanced search, not least of which are more virulent mutations of garbage results (as if misinformation needs any help). But in a Section 230 context, would the deployment of SGE as Google’s new search model increase the likelihood of its liability under the same legal arguments presented in Gonzalez? The “recommendation” algorithm is a form of AI, and if that level of platform influence could be sufficient to find liability, then presumably a more robust use of AI could support a stronger allegation of liability.
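To see why the degree of AI involvement might matter legally, consider a hedged sketch (again in Python, with invented names; neither function describes Google’s real implementation) of the difference in kind between the two modes: conventional search ranks and points to third-party pages, while generative search composes new, platform-authored text from them:

# Hypothetical contrast between ranked retrieval and generative synthesis.
# Both functions are illustrative assumptions, not Google's implementation.

def relevance(query, page):
    # Toy relevance signal: count words shared between query and page.
    return len(set(query.split()) & set(page.split()))

def classic_search(query, index):
    # Returns pointers to third-party content; the platform arguably only
    # organizes others' speech (the conduct at issue in Gonzalez).
    return sorted(index, key=lambda page: relevance(query, page), reverse=True)[:3]

class StubModel:
    # Stand-in for a generative model; a real one would synthesize prose.
    def answer(self, query, sources):
        return f"About '{query}': " + " / ".join(sources)

def generative_search(query, index, model):
    # Retrieves like classic search, then composes a new answer; the
    # platform is now arguably the speaker of the synthesized text.
    sources = classic_search(query, index)
    return model.answer(query, sources)

index = ["how to bake bread", "bread proofing tips", "history of rome"]
print(classic_search("bake bread", index))                  # ranked list of links
print(generative_search("bake bread", index, StubModel()))  # new, first-party text

If courts or Congress treat the second mode as first-party speech, Section 230’s third-party-content shield arguably never attaches, which is one way to read the Hawley/Blumenthal language discussed below.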

On June 14, Senators Hawley and Blumenthal introduced a two-page bill that would make Section 230 immunity unavailable to service providers “if the conduct underlying the claim or charge involves the use or provision of generative artificial intelligence by the interactive computer service.” This bill can presumably be seen as performative, of a piece with other announcements from Congress that AI has its attention, with various Members promising not to be fooled again into allowing Big Tech to regulate itself. There is a lot of “We’re on it” messaging coming from the Hill about AI, and we’ll see what comes of it.

In the meantime, perhaps there is something to the Hawley bill in light of the considerations in Gonzalez and the imminent release of SGE. At first, I sneered at the amendment because generative AI is primarily a tool of production, and Section 230 immunity has little or nothing to do with production. It doesn’t matter whether the harmful material at issue is produced with Midjourney or a box of crayons. But if a generative AI serves as the engine for a new mode of search (i.e., recommendation), then the language in the Hawley/Blumenthal amendment would seem to obviate the need to litigate the question presented in Gonzalez. Congress would be declaring that Google is not automatically shielded from liability.

Considering that we are far from resolving the damage done by the “democratization of information,” it’s tough to feel sanguine about the prospect of AI making search better rather than making it suck faster. On the other hand, if the adoption of AI in certain core functions of online platforms is a basis for Congress resetting the terms of liability, then perhaps service providers will discover a renewed interest in the original intent of Section 230—an incentive to remove harmful material, not to keep it online and monetize it.



Sen. Cruz Brief Wrongly Portrays Section 230 as a Neutrality Law

Among the briefs filed in Gonzalez v. Google asking the Supreme Court to properly read Section 230 of the Communications Decency Act is one filed by Sen. Ted Cruz, Rep. Mike Johnson, and fifteen other Republican Members of Congress. Presenting textual arguments similar to those in the brief filed by the Cyber Civil Rights Initiative (CCRI), highlighted here in a recent post, Sen. Cruz et al. petition the Court to address a matter that has nothing to do with Section 230—a politically motivated complaint summarized as follows:

Confident in their ability to dodge liability, platforms have not been shy about restricting access and removing content based on the politics of the speaker, an issue that has persistently arisen as Big Tech companies censor and remove content espousing conservative political views, despite the lack of immunity for such actions in the text of §230.

Allegations of viewpoint bias are inaptly raised in Gonzalez, or indeed in any case addressing Section 230. Even if it could be shown that a social platform actively engages in true political bias (i.e., moderating ideas and speakers rather than extremism), this is not a Section 230 issue, because neither political nor any other form of bias necessarily implicates civil liability for an online platform any more than it does for a newspaper or TV network.

The First Amendment protects bias, and Section 230 does not alter this fact. Hence, when it alleges that the overbroad interpretation of Section 230 immunity causes or sustains politically motivated censorship, the Cruz brief strays far from the purpose of the Court’s review in Gonzalez by erroneously implying that bias is inherently grounds for litigation. But Section 230 is not and never has been a viewpoint-neutrality law. Cruz et al. are asking the Court for a misreading that has lived in the PR of the platforms and the rhetoric of tech-utopianism but is nowhere in the statute.

Specifically, the Cruz brief alleges that the platforms have been shielded in censorious conduct by a poor statutory reading of their right to “good faith” removal of material that is “otherwise objectionable.” The amici argue that those words must be read in balance with the preceding words of the statute, which provide immunity where platforms remove or restrict access to third-party content that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” The brief then asserts (a bit wryly) that “…conservative viewpoints on social and political matters do not rise to the level of being ‘obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.’”

Notably, the brief both elides a definition of “conservative” in the context of its argument and asks the Court to read Section 230 as a mandate that platforms leave online all material that does not meet a very narrow, statutory definition of “objectionable.” This reading is false. Section 230 was written to encourage platforms to adopt and enforce their own community standards (i.e., to decide what is objectionable), which does not disturb the general right to host a platform that may be politically biased in any direction. A proper reading of 230 simply means that platforms shall not be unconditionally immunized against potential liability for hosting content that results in some form of harm which may be remedied through civil litigation.

The Cruz brief does not distinguish between amici’s political bona fides and the broad spectrum of hate speech and violence-inciting material that some Americans now call “conservative,” and which is indeed problematic for platforms. For instance, as the Alex Jones verdicts or the January 6th convictions make clear, material that certain people are willing to label “conservative” may, as a matter of law, be libel, defamation, or disinformation that results in individuals being harassed and threatened, or that leads to violence—or even an insurrection. And it is precisely this blurring of the line between political views and actionable conduct that makes the complaint in the Cruz brief so inaptly raised in Gonzalez.

Petitioner Gonzalez alleges that Google’s “recommendation” algorithms contributed to fostering terrorist activity by promoting ISIS recruiting videos in a manner that predictably roused a latent terrorist, who then acted on those emotional triggers. Regardless of whether that complaint prevails under the anti-terrorism statutes at issue, the general allegation about the platform’s role in Gonzalez is indistinguishable from an algorithm detecting that a user likes InfoWars and therefore “recommending” QAnon videos or some other tinfoil-hat material, with the foreseeable result that some domestic terrorist will assault a family in Sandy Hook, ransack the Capitol, conspire to kidnap a sitting governor, etc.

Thus, if the Court agrees that Google is not shielded from litigation in Gonzalez, the allegations of liability presented, even if they do not prevail in that instance, are directly analogous to the Alex Jones or election-denier issues a platform moderation team confronts. Even if Google is ultimately not found liable for the ISIS-related killing of Nohemi Gonzalez, allowing the case to proceed past the Section 230 veil would demonstrate that there is a plausible, common-sense nexus between amplification of certain material and harmful conduct.

Under a correct reading of 230 (i.e., no unconditional immunity for platforms), the platforms may become more diligent in addressing inciting material—a goal that should have bipartisan support from lawmakers interested in both a proper reading of the statute and the general welfare of the nation. Unfortunately, this political monkey wrench in the Section 230 issue is part of a broader narrative in which social platforms have allegedly tilted the scales—but in favor of extreme right-wing material calling itself “conservative.” For instance, in February 2021, BuzzFeed reported:

Internal documents obtained by BuzzFeed News and interviews with 14 current and former employees show how the company’s policy team — guided by [Republican lobbyist and conspiracy promoter] Joel Kaplan, the vice president of global public policy, and Zuckerberg’s whims — has exerted outsize influence while obstructing content moderation decisions, stymieing product rollouts, and intervening on behalf of popular conservative figures who have violated Facebook’s rules.

This and other reports, including testimony before Congress, reveal a pattern of (if anything) pro-right-wing bias at Facebook and other platforms, including evidence that the “anti-conservative” story itself is a fiction promoted by individuals like Kaplan. More specifically, Zuckerberg’s apparent resistance to removing Alex Jones from the platform demonstrates how the chronic misreading of Section 230 would only benefit a Trumpianized GOP that embraces every extremist willing to wear a red hat.

A correct reading of 230 opens the possibility that Facebook could be liable for hosting or “recommending” InfoWars, while an incorrect reading forecloses that possibility at summary judgment. Only one of these interpretations benefits those elements of the GOP who choose to align themselves more closely with that brand of “conservatism.” Thus, consistent with the Trumpian tactic of weaponizing alleged victimhood, the comparatively mild complaint of viewpoint bias in the Cruz brief is political theater—blaming social media platforms for actions that a) they have not taken; b) they have a constitutional right to take if they want to; c) are unrelated to Section 230 immunity; and d) distract from an important legal question for real victims barred from pursuing relief by the misreading of the statute.

Cruz and his fellow amici have heard or read testimony from witnesses like whistleblower Frances Haugen, who explained to the Senate Commerce Committee how Facebook consistently put profits ahead of safety, adding, “The result has been a system that amplifies division, extremism, and polarization — and undermining societies around the world. In some cases, this dangerous online talk has led to actual violence that harms and even kills people.” Specifically, Haugen and other former insiders have repeated the theme that extremism has been good for social platforms—that angry users are active users, and active users translate to profits for these companies.

A proper reading of Section 230 will not solve every problem fostered by social platforms, but it can have the effect of forcing platform operators to identify when speech is reasonably linked to harmful conduct and to acknowledge a nexus between addictive algorithm design and illegal activity—from terrorism to “revenge porn.” Very real harms have been shielded and exacerbated by the misreading of Section 230, and it is this error of law which the Court should resolve. In the process, it should decline to address the subject of viewpoint neutrality as the inappropriate political sideshow it is.

Cyber Civil Rights Initiative Files Common Sense Brief in Major Section 230 Case

In my recent post about Gonzalez v. Google—the Section 230 case granted cert by the Supreme Court—I expressed the view that “recommendation” is too charming a word to describe the interaction between social media algorithms and many users’ experiences. Systems capable of reinforcing suicidal ideation in a teenager or stoking violent instincts in a potential terrorist cannot sensibly be described as “recommending” the kind of content associated with these and other dangerous outcomes. And although petitioner Gonzalez specifically asks the Court to decide whether “algorithmic recommendation” is shielded from liability under Section 230 of the Communications Decency Act, the amicus brief filed by the Cyber Civil Rights Initiative (CCRI) and Legal Scholars asks the Court for a more nuanced reading of the question. From the brief…

Amici emphasize that this case cannot be correctly decided by focusing on “traditional editorial functions” or by trying to craft a general rule about whether “targeted algorithms” fall within Section 230’s immunity provision…. To categorically deny immunity to an ICSP for using targeted algorithms would directly contradict Section 230(c)(2) and finds no support in Section 230(c)(1). Such an interpretation would also have a devastating impact on the victims of online abuse by dissuading Good Samaritan ICSPs from using targeted algorithms to remove, restrict, or otherwise reduce the accessibility of harmful material, including nonconsensual pornography.

CCRI, which works to address and remedy various forms of harassment and civil rights abuses committed via interactive computer service providers (ICSPs), asks the Court to restore the textually coherent and common-sense meaning of Section 230, which was written to encourage service providers to mitigate harmful material—not to unconditionally immunize them from liability for hosting it. For almost twenty years, lower courts have consistently misinterpreted the purpose of 230, providing automatic immunity so long as the material at issue is posted by someone other than the platform’s owners or managers.

This chronic misreading of Section 230 results in two significant problems: 1) dismissal at the summary judgment stage of any claim in which an ICSP may be liable; and 2) failure to provide injunctive relief where the ICSP is not liable but could be ordered to remove material that the court agrees is causing harm to a complainant. As things stand, a site that intentionally trades in harmful material is immunized, and so is a site that unintentionally hosts harmful material but elects not to remove it for its own reasons. The rationales vary as to why “neutral” platform operators often refuse to remove material alleged, or even proven, to be harmful, but for too long the industry has echoed the absurd premise that removing anything from a social platform is incompatible with “a free and open internet.”

Section 230 Is (Was) Not Novel Legislative Territory

The CCRI brief is so firmly grounded in the legislative history of Section 230 that it is difficult to fathom how any court—let alone many courts—strayed so far, and for so long, from a plain-text reading of the statute. In describing the common-law (i.e., not groundbreaking) underpinnings of Section 230, for instance, CCRI cites the distinction between a “publisher” and a “distributor” of defamatory material thus:

… “[d]efamation at common law distinguished between publisher and distributor liability.” While a publisher was strictly liable for carrying defamatory matter, a distributor who only “delivers or transmits defamatory matter published by a third person is subject to liability if, but only if, he knows or has reason to know of its defamatory character.” [Emphasis added.]

This is common sense well founded in law. If an individual or business knows that it is facilitating harm caused by a separate, directly liable party, that facilitation may rise to secondary civil or criminal liability. The newsstand operator is not liable for inadvertently selling adult magazines containing underage models, but if he knows about it, he is probably—and deservedly—in big trouble.

This basic principle of secondary liability applies everywhere except for internet platforms—and only because the courts have so thoroughly misconstrued Section 230 by conflating two sub-sections of the statute, which are meant to be read independently. As the CCRI brief explains, 230(c)(1) states that merely providing access to third-party content (e.g., YouTube hosting a video uploaded by a user) does not make the ICSP a “publisher” or “speaker.” Then, 230(c)(2) states that voluntarily making a good-faith effort to remove objectionable material does not make the ICSP generally liable as a “publisher” of everything it hosts.

“Cases reading Section 230 to have a broader preemptive effect than provided for in (c)(1) and (c)(2) have departed from the statutory text,” states the CCRI brief. It emphasizes that “distributor liability” is envisioned by Section 230(c)(1) where the ICSP has knowledge of the harmful material, and it argues that the function of Section 230(c)(2) is legislatively “parallel” to state Good Samaritan laws written to immunize ordinary citizens against unreasonable liability when they make good-faith efforts to help someone in need of assistance. Prior to these laws, an individual intending to render aid to a stranger could be held liable for inadvertently causing harm, but as the CCRI brief states:

… like state Good Samaritan statutes, Section 230(c)(2) includes important limits to the immunity it provides. First, it does not apply when an ICSP is already under an existing duty to act—i.e., where its action to restrict access to objectionable third-party content is not “voluntary.” Nor does it immunize ICSPs that do nothing to address harm or that contribute to or profit from harm.

Again, this is just common sense grounded in common law that applies everywhere except the internet. If one does not initiate illegal activity but seeks to benefit from that activity, one may be liable for the harm caused. It is inconceivable that Congress ever intended to exempt the multi-billion-dollar internet industry from this longstanding principle. And that’s because it intended no such thing.

It will be interesting to see what the amici filing on behalf of Google argue in this case. Other than the usual panegyrics to the internet, I am curious to see whether, for instance, the EFF will have anything coherent to say in defense of two decades’ worth of textual misreading. Typically, defenders of the status quo reading of Section 230 write about threats to “the internet” as if a lack of immunity automatically results in a finding of liability and damages. On the contrary, a proper reading of the law simply means that an ICSP cannot so easily dismiss every claim and that the injured party is allowed her day in court to prove whether a platform had, or has, a duty to act. Litigating against tech giants is hardly a fair fight in the first place, and ICSPs neither need nor deserve an unconditional immunity that exists nowhere else in the justice system.