In October, the Supreme Court granted cert in two cases that may limit the immunity granted to internet platforms under Section 230 of the Communications Decency Act. Both Gonzalez v. Google and Twitter v. Taamneh arise from plaintiffs seeking to hold platforms accountable for “targeted recommendations” of material associated with acts of international terrorism, but in this post, I will focus only on the former case. Here’s a slightly truncated background as stated in the Gonzalez petition:
In November 2015 Nohemi Gonzalez, a 23-year-old U.S. citizen studying in Paris, France, was murdered when three ISIS terrorists fired into a crowd of diners at La Belle Equipe bistro. . . . Several of Ms. Gonzalez’s relatives, as well as her estate, subsequently brought this action against Google, . . . The plaintiffs alleged that Google, through YouTube, had provided material assistance to, and had aided and abetted, ISIS, conduct forbidden and made actionable by the Anti-Terrorism Act.
Doubtless, the particulars of this case raise complex questions of liability that even many critics of 230’s too-broadly applied immunity may have difficulty defending on all the merits. Google’s response, for instance, states that the “ATA claims in this case have produced a procedural morass.” Nevertheless, the Court agreed to review after denying every prior Section 230 petition, leading some to note that Justice Thomas signaled a strong interest in Section 230 immunity in a statement respecting denial of certiorari in the 2020 case Malwarebytes v. Enigma Software Group. There, Thomas wrote:
Adopting the too-common practice of reading extra immunity into statutes where it does not belong, courts have relied on policy and purpose arguments to grant sweeping protection to Internet platforms. . . . Without the benefit of briefing on the merits, we need not decide today the correct interpretation of §230. But in an appropriate case, it behooves us to do so.
I will leave it to others to discuss whether Gonzalez is the right vehicle to address the most chronic harms caused by overbroad readings of 230—or to speculate exactly what this Supreme Court is looking to achieve in light of the politicized narratives and misstatements that have attached to public discussion about the statute.
Until the Trump administration turned the White House into the Ministry of Misinformation, Section 230 was not mainstream news, and one consequence of those events is that the provision has been misrepresented as a content (i.e., political) neutrality law, which it is not. Though, as discussed in the post linked above, the neutrality rhetoric is a misconception Big Tech itself promoted years before Members of Congress started alleging “anti-conservative” bias and conflating that talking point with threats to abolish Section 230.
But I wanted to focus on the narrow question presented in the Gonzalez petition, which is whether “targeted recommendations” made by interactive computer services are properly immunized. Whatever the outcome of this case—and if there is any chance that Congress might effectively amend Section 230—both the Court and lawmakers should reject the too-friendly term “recommendation” to describe how algorithms on major platforms are designed to attract and retain user attention.
It is now a matter of record that algorithms trained to adapt to user behavior and feed what may be our worst instincts produce an often-toxic phenomenon that is not adequately described by the word “recommendation.” Interaction between the social platform and the human user is not comparable to reading a book review or hearing a friend’s suggestion to see a show or even having Netflix indicate that if you liked movie A, you might like movie B. Big Tech invokes these positive social transactions to describe its systems and models in the same way the industry invokes other socially constructive words like “share,” “connect,” and “democratize” while papering over hazards like IP theft, harassment, and the wildfire spread of misinformation.
Google’s Response Begs for Scrutiny
Notably, in Google’s response asking the Court to deny cert in Gonzalez, it practically admits to the insidious nature of algorithmic “recommendation” when it emphasizes the fact that courts have held that search engines are protected by 230—and that search is comparable to “recommendation.” Here, Google inadvertently highlights the reason search sucks now: rather than return results based on a reasonably objective definition of “relevance,” the Google search algorithm has been tweaked to return results “of likely interest” to the user based on what Google has learned about them.
I doubt I am alone in finding that search results are consistently less useful than they were just a few years ago—even to the extent that the most logical result (e.g., an entity’s website) appears on page two or three, where it used to be at least the first or second item below the top three paid placements. But on a darker note, Google’s brief practically acknowledges that if the user is an anti-vaxxer, an election denier, or a believer in some other conspiracy nonsense, they will be served search results likely to reinforce those false narratives. Whatever we want to call this phenomenon and its psychological influence, “recommendation” is too quaint a word by some margin.
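To make that distinction concrete, here is a minimal sketch of the difference between ranking results by relevance alone and ranking them with an inferred user-interest score blended in. It is purely illustrative: the function, the example results, the scores, and the weight are hypothetical stand-ins, not a description of how Google’s actual system works.

```python
# Purely illustrative sketch: a toy ranking function that blends query relevance
# with an inferred "user interest" score. Every name, score, and weight here is
# a hypothetical stand-in, not a description of any real search engine.

def rank(results, relevance, interest, personalization_weight=0.0):
    """Order results by relevance, optionally blended with a per-user interest score."""
    def score(doc):
        return ((1 - personalization_weight) * relevance[doc]
                + personalization_weight * interest[doc])
    return sorted(results, key=score, reverse=True)

results = ["official-site", "news-article", "conspiracy-video"]
relevance = {"official-site": 0.9, "news-article": 0.7, "conspiracy-video": 0.3}
# A profile inferred from past behavior might "prefer" the least relevant result.
interest = {"official-site": 0.2, "news-article": 0.4, "conspiracy-video": 0.95}

print(rank(results, relevance, interest))                              # relevance only
print(rank(results, relevance, interest, personalization_weight=0.7))  # "of likely interest"
```

The toy example shows only the mechanism at issue: once the blend tilts toward the inferred profile, the most relevant result can sink below whatever the profile predicts the user wants to see.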
But even if Google Search still functions in a way that is properly immunized by Section 230 (and I would question that as the technology changes), we confront a whole other level of insidious influence in the combination of Google’s or Facebook’s algorithms and the capacity of video to tap into emotions—especially strong emotions like anger and fear. The notion that the fundamental design of YouTube does not foster a symbiotic relationship between the potential terrorist and the recruiting video is barely plausible. But for sure, it is a phenomenon Congress did not consider in 1996 when it adopted Section 230.
Argus Is Allegedly Blind
When it comes to marketing, Google et al. boast the capacity to know what a user is going to buy, how she’ll vote, or what she’ll order for dinner—even minutes before she knows these things herself. But when the conversation turns to liability, these same companies suddenly cannot know much of anything. While Google is probably correct that there are several complicating aspects in the Gonzalez complaint, it also downplays the power of a platform like YouTube to convert latent emotions into dangerous action.
Whether that action is joining ISIS and murdering tourists or joining a mob and attacking the U.S. Capitol or breaking into the home of the Speaker and attacking her spouse, I think we have sufficient evidence to conclude that insane narratives are running amok and driving people to extreme behavior with deadly consequences. Google et al. may not bear direct responsibility for these events—surely, terrorism existed long before the internet—but neither are these platforms mere hapless conduits incidentally fueling the fire. And again, Google almost acknowledges this in its reply brief.
“…since the 2015 Paris attack, YouTube has overhauled its terrorism policies, as one of petitioners’ sources recognizes,” the brief states. Oddly, Google cites a WSJ story reporting that, despite changes by the platform, YouTube still “Drives People to the Internet’s Darkest Corners.” More to the point, if YouTube attempted to change its algorithm and/or its policies in response to the Paris attacks, this suggests that a nexus does exist between platform “recommendation” and videos that are likely to motivate violent action. This level of interaction between user and machine, which serves the platform’s interest more than it does the public interest, was neither envisioned nor discussed at the time 230 was adopted.
Circa 1996, the analogies were limited to human publishers who make decisions about what to disseminate, cut, or edit. But those points of reference are woefully incomplete for understanding contemporary data mining and the manner in which algorithms produce real-world events. Back then, we were talking about this stuff with the expectation that the network might recognize that you’re in the market for a toaster and show you some ads for toasters. But when toaster shopping feeds an advanced algorithm capable of intuiting that you might be interested in all the videos that will “prove” how the Jews are running the world or that yoga is Satan worship, that is a very different creature from a “recommendation” machine.
So, as the Court considers whether “targeted recommendations” are properly immunized by Section 230, we should hope that it recognizes how tepid that term is for describing the state of the technology, which bears little resemblance to what Congress understood it to be nearly thirty years ago. Whatever the proper term should be, it is implausible that Congress intended to provide blanket immunity for a business model that, even occasionally, fuels riots, terrorism, harassment, nonconsensual pornography, rampant misinformation, and even genocide. Surely, these cannot be acceptable byproducts of the most ambitious or prosaic uses of the internet.