AI, Search, & Section 230

On May 18, the Supreme Court delivered opinions in Gonzalez v. Google and Twitter v. Taamneh, a pair of interrelated cases in which both plaintiffs sought to hold online platforms liable for hosting material meant to inspire acts of terrorism. Because the Court unanimously found in Taamneh that there was no basis in anti-terrorism law for liability (and therefore no claim for relief), it declined to address the Section 230 question in Gonzalez, which was whether Google’s use of a “recommendation algorithm” is sufficient to support contributory liability for the inciteful material being recommended.

Properly read, Section 230 shields online service providers (OSPs) from “publisher liability” but not from “distributor liability.” A distributor of allegedly harmful material may be liable when it knows, or has reason to know, the nature of the material and either affirmatively chooses to distribute it or willfully turns a blind eye to the potential harm and does nothing to stop it. Unfortunately, ever since Section 230 became law in 1996, the courts have generally read it as a blanket shield for any OSP distributing any kind of material, as long as the material was uploaded by a user of the site and not by the site operators.

The Gonzalez plaintiffs alleged that Google’s “recommendation” algorithm, designed to promote content based on the system’s interpretations of user behavior, played a crucial role in pushing ISIS propaganda toward the parties who eventually committed the mass shooting in Paris that killed Nohemi Gonzalez. The plaintiffs argued that “targeted recommendations” are not properly shielded by Section 230, and to the extent one can read the tea leaves in oral arguments, justices as opposite as Thomas and Jackson may be sympathetic to this view.

For further reading in the “strange bedfellows” department, the amicus brief filed in Gonzalez by Senator Hawley echoes many of the same legal arguments made in the brief filed by the Cyber Civil Rights Initiative. Also, Senators Hawley and Blumenthal are at least publicly in sync on the need to correct the errors in Section 230. “Reform is coming,” Sen. Blumenthal declared in March. All of which is to say that there appears to be both bipartisan and multi-stakeholder consensus building around the idea that platforms can and should be held accountable for promoting harmful material.

Does AI-Enhanced Search Imply Liability?

Notably, one prong of Google’s defense in Gonzalez was that “recommendation” is analogous to search and that delivering search results cannot rise to the level of contributory liability. Whether the Court would agree with this comparison under full examination in a viable case remains an open question. But assuming the Court would not have sided with Google, what might it make of Google’s new Search Generative Experience (SGE)? Still in a trial phase for users who choose to enable it, the AI-driven SGE could become the new mode of search, or (if it totally sucks) could tank Google’s core business. As James Vincent writes for The Verge:

… it’s the dynamics of AI — producing cheap content based on others’ work — that is underwriting this change, and if Google goes ahead with its current AI search experience, the effects would be difficult to predict. Potentially, it would damage whole swathes of the web that most of us find useful — from product reviews to recipe blogs, hobbyist homepages, news outlets, and wikis. Sites could protect themselves by locking down entry and charging for access, but this would also be a huge reordering of the web’s economy. In the end, Google might kill the ecosystem that created its value, or change it so irrevocably that its own existence is threatened. 

Hard to predict for sure, and I will not make the attempt. There are, of course, many potential hazards with AI-enhanced search, not the least of which is more virulent mutations of garbage results (as if misinformation needs any help). But in a Section 230 context, would the deployment of SGE as Google’s new search model increase the likelihood of its liability under the same legal arguments presented in Gonzalez? The “recommendation” algorithm is a form of AI, and if that level of platform influence could be sufficient to support a finding of liability, then presumably a more robust use of AI could support an even stronger allegation of liability.

On June 14, Senators Hawley and Blumenthal introduced a two-page bill that would make Section 230 immunity unavailable for service providers “if the conduct underlying the claim or charge involves the use or provision of generative artificial intelligence by the interactive computer service.” Presumably, the bill is partly performative, of a piece with other announcements from Congress that AI has its attention, with various Members promising not to be fooled again into allowing Big Tech to regulate itself. There’s a lot of “We’re on it” messaging coming from the Hill about AI, and we’ll see what comes of it.

In the meantime, perhaps there is something to the Hawley bill in light of the considerations in Gonzalez and the imminent release of SGE. At first, I sneered at the amendment because generative AI is primarily a tool of production, and Section 230 immunity has little or nothing to do with production. It doesn’t matter whether the harmful material at issue is produced with Midjourney or a box of crayons. But if a generative AI serves as the engine for a new mode of search (i.e., recommendation), then the language in the Hawley/Blumenthal amendment would seem to obviate the need to litigate the question presented in Gonzalez. Congress would be declaring that Google is not automatically shielded from liability.

Considering that we are far from resolving the damage done by the “democratization of information,” it’s tough to feel sanguine about the prospect of AI making search better rather than making it suck faster. On the other hand, if the adoption of AI in certain core functions of online platforms becomes a basis for Congress resetting the terms of liability, then perhaps service providers will discover a renewed interest in the original intent of Section 230—an incentive to remove harmful material, not to keep it online and monetize it.

