In Gonzalez v. Google, SCOTUS Should Look Beyond the Term “Recommendations”

In October, the Supreme Court granted cert in two cases that may limit the immunity granted to internet platforms under Section 230 of the Communications Decency Act. Both Gonzalez v. Google and Twitter v. Taamneh arise from plaintiffs seeking to hold platforms accountable for “targeted recommendations” of material associated with acts of international terrorism, but in this post, I will focus only on the former case. Here’s a slightly truncated background as stated in the Gonzalez petition:

In November 2015 Nohemi Gonzalez, a 23-year-old U.S. citizen studying in Paris, France, was murdered when three ISIS terrorists fired into a crowd of diners at La Belle Equipe bistro. . . . Several of Ms. Gonzalez’s relatives, as well as her estate, subsequently brought this action against Google, . . . The plaintiffs alleged that Google, through YouTube, had provided material assistance to, and had aided and abetted, ISIS, conduct forbidden and made actionable by the Anti-Terrorism Act.

Doubtless, the particulars of these cases raise complex questions of liability that even many critics of 230’s too-broadly applied immunity may find difficult to defend on every point. Google’s response, for instance, states that the “ATA claims in this case have produced a procedural morass.” Nevertheless, the Court agreed to review, having denied every prior Section 230 petition, leading some to note that Justice Thomas signaled a strong interest in Section 230 immunity in a statement respecting the denial of certiorari in the 2020 case Malwarebytes v. Enigma Software Group. There, Thomas wrote:

Adopting the too-common practice of reading extra immunity into statutes where it does not belong, courts have relied on policy and purpose arguments to grant sweeping protection to Internet platforms. . . . Without the benefit of briefing on the merits, we need not decide today the correct interpretation of §230. But in an appropriate case, it behooves us to do so.

I will leave it to others to discuss whether Gonzalez is the right vehicle to address the most chronic harms caused by overbroad readings of 230—or to speculate exactly what this Supreme Court is looking to achieve in light of the politicized narratives and misstatements that have attached to public discussion about the statute.

Until the Trump administration turned the White House into the Ministry of Misinformation, Section 230 was not mainstream news, and one consequence of those events is that the provision has been misrepresented as a content (i.e., political) neutrality law, which it is not. As discussed in the post linked above, though, the neutrality rhetoric is a misconception Big Tech itself promoted years before Members of Congress started alleging “anti-conservative” bias and conflating that talking point with threats to abolish Section 230.

But I want to focus on the narrow question presented in the Gonzalez petition, which is whether “targeted recommendations” made by interactive computer services are properly immunized. Whatever the outcome of this case—and whatever the chance that Congress might effectively amend Section 230—both the Court and lawmakers should reject the too-friendly term “recommendation” to describe how algorithms on major platforms are designed to attract and retain user attention.

It is now a matter of record that algorithms trained to adapt to user behavior and feed what may be our worst instincts produce an often-toxic phenomenon that is not adequately described by the word “recommendation.” Interaction between the social platform and the human user is not comparable to reading a book review, hearing a friend’s suggestion to see a show, or even having Netflix indicate that if you liked movie A, you might like movie B. Big Tech invokes these positive social transactions to describe its systems and models in the same way the industry invokes other socially constructive words like “share,” “connect,” and “democratize” while papering over hazards like IP theft, harassment, and the wildfire spread of misinformation.

Google’s Response Begs for Scrutiny

Notably, in Google’s response asking the Court to deny cert in Gonzalez, it practically admits to the insidious nature of algorithmic “recommendation” when it emphasizes that the courts have held that search engines are protected by 230—and that search is comparable to “recommendation.” Here, Google inadvertently highlights the reason search sucks now: rather than return results based on a reasonably objective definition of “relevance,” the Google search algorithm has been tweaked to return results “of likely interest” to the user, based on what Google has learned about that user.
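To make that distinction concrete, here is a deliberately simplified sketch in Python (my own illustration, with invented names and scores, standing in for nothing in Google’s systems or the record of this case) of how blending a learned user profile into a ranking function changes which results surface first:

```python
# Toy illustration, not any real search engine's code: contrast ranking
# purely on query relevance with blending in a learned user profile.
from dataclasses import dataclass

@dataclass
class Result:
    title: str
    relevance: float         # hypothetical objective match to the query, 0..1
    profile_affinity: float  # hypothetical predicted appeal to this user, 0..1

def rank_objective(results):
    """Order results by relevance to the query alone."""
    return sorted(results, key=lambda r: r.relevance, reverse=True)

def rank_personalized(results, personalization_weight=0.7):
    """Order results by a blend of query relevance and per-user affinity."""
    def score(r):
        return ((1 - personalization_weight) * r.relevance
                + personalization_weight * r.profile_affinity)
    return sorted(results, key=score, reverse=True)

results = [
    Result("Official public-health guidance", relevance=0.9, profile_affinity=0.2),
    Result("Conspiracy video similar users watched", relevance=0.4, profile_affinity=0.95),
]

print([r.title for r in rank_objective(results)])     # guidance ranks first
print([r.title for r in rank_personalized(results)])  # conspiracy ranks first
```

The point of the sketch is only this: the same index can produce very different “results” depending on how heavily the system weights what it has learned about the user.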

I doubt I am alone in finding that search results are consistently less useful than they were just a few years ago—even to the extent that the most logical result (e.g., an entity’s website) appears on page two or three, where it used to be at least the first or second item below the top three paid placements. But on a darker note, Google’s brief practically acknowledges that if the user is an anti-vaxxer, an election denier, or a believer in some other conspiracy nonsense, they will be served search results likely to reinforce those false narratives. Whatever we want to call this phenomenon and its psychological influence, it is too quaint by some margin to call it “recommendation.”

But even if Google Search still functions in a way that is properly immunized by Section 230 (and I would question that as the technology changes), we confront a whole other level of insidious power to influence when Google’s or Facebook’s algorithms are combined with the capacity of video to tap into emotions—especially strong emotions like anger and fear. The notion that the fundamental design of YouTube does not foster a symbiotic relationship between the potential terrorist and the recruiting video is barely plausible. But it is surely a phenomenon Congress did not consider in 1996 when it adopted Section 230.

Argus is Allegedly Blind

When it comes to marketing, Google et al. boast the capacity to know what a user is going to buy, how she’ll vote, or what she’ll order for dinner—even minutes before she knows these things herself. But when the conversation turns to liability, these same companies suddenly cannot know much of anything. While Google is probably correct that there are several complicating aspects in the Gonzalez complaint, it also downplays the capacity of a platform like YouTube to convert latent emotions into dangerous action.

Whether that action is joining ISIS and murdering tourists or joining a mob and attacking the U.S. Capitol or breaking into the home of the Speaker and attacking her spouse, I think we have sufficient evidence to conclude that insane narratives are running amok and driving people to extreme behavior with deadly consequences. Google et al. may not bear direct responsibility for these events—surely, terrorism existed long before the internet—but neither are these platforms mere hapless conduits incidentally fueling the fire. And again, Google almost acknowledges this in its reply brief.

“…since the 2015 Paris attack, YouTube has overhauled its terrorism policies, as one of petitioners’ sources recognizes,” the brief states. Oddly, Google cites a WSJ story which reports that, despite changes by the platform, YouTube still “Drives People to the Internet’s Darkest Corners.” More to the point, if YouTube changed its algorithm and/or its policies in response to the Paris attacks, this suggests that a nexus does exist between platform “recommendation” and videos that are likely to motivate violent action. This level of interaction between user and machine, which serves the platform’s interest more than it does the public interest, was neither envisioned nor discussed at the time 230 was adopted.

Circa 1996, the analogies were limited to human publishers who make decisions about what to disseminate, cut, or edit. But those points of reference are woefully incomplete for understanding contemporary data mining and the manner in which algorithms precipitate real-world events. Back then, we were talking about this stuff with the expectation that the network might recognize that you’re in the market for a toaster and will show you some ads for toasters. But when toaster shopping feeds an advanced algorithm capable of intuiting that you might be interested in all the videos that will “prove” how the Jews are running the world or that yoga is Satan worship, that is a very different creature from a “recommendation” machine.
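To show that escalation mechanically, here is a second toy sketch (again with invented categories, weights, and watch times, modeling no platform’s actual system) of a naive “users who watched X also watched Y” recommender that optimizes for predicted watch time rather than accuracy or safety:

```python
# Hypothetical co-watch data: how strongly viewers of one topic
# cluster with viewers of another (all values invented).
co_watch = {
    "toaster reviews": {"kitchen gadgets": 0.6, "off-grid living": 0.3},
    "off-grid living": {"prepper gear": 0.7, "collapse conspiracies": 0.5},
    "collapse conspiracies": {"extremist channel": 0.8},
}

# Hypothetical average minutes watched per topic; provocative material
# tends to hold attention longer, so it scores higher.
predicted_watch_time = {
    "kitchen gadgets": 2.0,
    "off-grid living": 5.0,
    "prepper gear": 6.0,
    "collapse conspiracies": 9.0,
    "extremist channel": 14.0,
}

def next_video(current):
    """Pick the co-watched topic with the highest engagement-weighted score."""
    candidates = co_watch.get(current, {})
    if not candidates:
        return None
    return max(candidates, key=lambda v: candidates[v] * predicted_watch_time.get(v, 0.0))

video = "toaster reviews"
while video:
    print(video)
    video = next_video(video)
# Prints: toaster reviews, off-grid living, collapse conspiracies, extremist channel
```

Nothing in that loop ever asks whether a suggestion is true or safe; maximizing the engagement score alone is enough to walk a toaster shopper to the extreme end of the catalog in three hops.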

So, as the Court considers whether “targeted recommendations” are properly immunized by Section 230, we should hope that it recognizes how tepid that term is for describing the state of the technology, which bears no resemblance to what Congress understood nearly thirty years ago. Whatever the proper term may be, it is implausible that Congress intended to provide blanket immunity for a business model that, even occasionally, fuels riots, terrorism, harassment, nonconsensual pornography, rampant misinformation, and even genocide. Surely, these cannot be acceptable byproducts of the most ambitious or prosaic uses of the internet.

Tedious Anti-Copyright Stance of EFF is Not About Protecting Anyone

Welp (as the kids say), it looks like Katherine Trendacosta of the Electronic Frontier Foundation (EFF) found an old PowerPoint deck from 2012 and used it to write a new post ominously titled Hollywood’s Insistence on New Draconian Copyright Rules Is Not About Protecting Artists.

Typical of the EFF playbook, Trendacosta devotes an entire post to maligning the motion picture industry rather than addressing the “rule” (the SMART Act), which she does not even mention until the final paragraph. At that point, the reader is meant to take her word for it that the proposed legislation is bad because—believe it or not—there is too much diversity and choice in the streaming market, and because film producers want to make money.

Ms. Trendacosta calls streaming a “hellscape” where consumers cannot find what they want and/or where shows and films are canceled or moved to different platforms. She writes, “It’s disingenuous for Hollywood’s lobbyists to claim that they need harsher copyright laws to protect artists when it’s the studios that are busy disappearing the creations of these artists.”

“Hellscape” is a bit dramatic as critiques go, given that market research indicates that 74% of consumers report being satisfied with streaming and that those numbers are currently trending upward. Of course, the anti-copyright playbook Trendacosta is using tells her to imply that when producers make market decisions to stop producing a given work, or to move a work from one channel to another, this is “disappearing” material that should be available in perpetuity. In fact, she inscrutably cites the “disappearance” of a film that is temporarily being made available in a new 4K cinema format and will return to streaming in a matter of months. Hellish, no?

Perhaps Trendacosta is unaware that we are enjoying a new golden age of filmed entertainment available on—or produced especially for—the private screen market. Streaming models have fostered a diverse range of projects that would never have been made, let alone been sustainable, in the narrower distribution paradigms that preceded Netflix. But a reality of all this bounty is that more experimentation and risk-taking means a higher volume of material will be canceled or redistributed more frequently as audiences respond to what gets made. That’s just the business of making entertainment media, and the EFF always acts as if the business itself makes efforts to mitigate piracy somehow dishonest or sinister.

Here, Trendacosta digs a little deeper into the big box of EFF’s toys and argues that ordinary tensions between studios and talent—including strikes and financial disagreements—are evidence that the parties seeking remedies to piracy “don’t care about artists.” True to form, the folks at EFF pretend to care about artists by erecting a false dichotomy between the creators who work on projects and “Hollywood,” a generic term for a monolith that does not exist.

It’s a very strange argument because the artists to whom Trendacosta refers in those strikes, etc., want money, too. In fact, money is often exactly why they have disagreements with certain producers or studios. Yet, Trendacosta elides the fact that piracy hurts everyone in the ecosystem, regardless of their internal disputes and negotiations with one another. That’s why unions like the DGA, WGA, and IATSE are members of the Copyright Alliance and work closely with the studios to fight piracy. It is categorically false to suggest that large studios are the only parties with an interest in this issue. As independent filmmakers and other artists have explained repeatedly, it’s the smaller, independent projects that are most vulnerable to the negative effects of piracy.

And let’s be honest. EFF opposes all copyright enforcement measures in the same style as this post—no substance, just uninformed, ad hominem attacks—and it behaves no differently when smaller groups or indie artists seek copyright remedies in Congress.

So, what is the supposedly “harsh” new piracy remedy that EFF is opposing this time?

The Strengthening Measures to Advance Rights Technologies (SMART) Act is a legislative response to the fact that for more than 25 years, Big Tech has refused to fulfill its side of the bargain struck with the adoption of Section 512 of the DMCA. Simply put, Section 512(i) requires online service providers to collaborate with copyright owners to develop standard technical measures (STMs) to identify and expeditiously remove infringing content from internet platforms.

But not only did the development of STMs never quite happen; the Googles and Facebooks of the world, which came after the OSPs that negotiated the DMCA, benefited from mass infringement on their platforms because the DMCA shielded them from liability.

SMART seeks to address more than two decades of stonewalling by adding a new Section 514 to the DMCA that would create new remedies to confront Big Tech’s refusal to adopt appropriate and affordable technical measures to reduce online piracy. At the same time, its proposals would protect smaller and less well-resourced service providers by calling for a variety of tailored and practical technical measures to be developed under a multi-stakeholder process overseen by the Librarian of Congress.

This is what the EFF is calling “draconian”—a proposal to restore the intent of the DMCA as it was enacted in 1998. SMART is the first substantive response to Big Tech’s two big lies: 1) We can’t do it; and 2) We shouldn’t do it because it will chill speech. Those arguments have worn paper thin in recent years given the role these same companies have played in fostering the most toxic, Republic-shaking nonsense ever to be “freely spoken.” But credit where it’s due. At least Ms. Trendacosta didn’t say SOPA.

Climate Disaster: A Rough Decade

This month is the tenth anniversary of The Illusion of More. Specifically, I believe the site launched on August 12, but I did not know what, if anything, I wanted to say to mark the occasion other than to thank readers for following and supporting the blog for a decade. And I am very grateful for that. But in light of the editorial focus of this blog and the state of the world, ouch. It’s been a rough ten years.

I asked in the intro to the first podcast in 2012 (an interview with journalist Christopher Dickey) whether digital technology was making things “suck faster”—whether the illusion of more access, engagement, and information would simply make otherwise reasonable people more rapidly and more virulently misinformed. For one contemporary answer to that question, read Francesca Tripodi’s recent article in Wired describing how Google’s changes to its “neutral” search engine can prioritize false information and reinforce a psychological vulnerability she calls the “IKEA effect” (i.e., taking pride in something one has assembled). Tripodi writes:

Conspiracy theorists and propagandists are drawing on the same strategy, providing a tangible, do-it-yourself quality to the information they provide. Independently conducting a search on a given topic makes audiences feel like they are engaging in an act of self-discovery when they are actually participating in a scavenger-hunt engineered by those spreading the lies.

Or for a lighthearted version of the same principle, Craig Ferguson says in his Netflix special, “Tweet it, retweet it, retweet it again—fuckin’ true.”

As explained and reiterated in many posts on this blog, what began as a response to the lies and flood-the-zone tactics deployed in the anti-SOPA campaign of 2011/12 quickly grew to encompass a much broader concern that the major internet platforms (Big Tech) are a dangerous force that just might swallow democracy itself. This was not a popular view in 2012. Both official policy and public sentiment were predicated on a blind faith that more speech without restraint (i.e., direct democracy) had to be a good thing. That fallacy was central to rejecting the anti-piracy legislation just over a decade ago, and it persists today in, for instance, the Internet Archive’s rationales for its brand of book piracy.

Big Tech and its network of mostly left-leaning organizations said that harmful speech—from personal harassment to raving conspiracy—would be mitigated and safely marginalized by a fresh, invigorated dialogue enlightened by open access to information. Standing in the way of that utopian vision, they insisted, was “the government” in cahoots with corporate “gatekeepers” like the press, publishers, and Hollywood—all wielding the cudgel of copyright law to control what we are allowed to learn or experience. Meanwhile, the words of the prophets were written on the social media walls.

To suggest that we seemed to be entering an age when information would be indistinguishable from bullshit was to earn the title “Luddite.” Even now, despite the overwhelming and terrifying events of the last ten years—all of them fueled by free access to deep wells of bullshit—the tech-utopians still believe in the illusion of more as surely as climate-change deniers refuse to see the science in the global havoc unfolding daily.

But lest anyone think that conspiratorial delusions are exclusively the opiate of the Trump cult, I would ask readers to remember the climate changes in our politics that were taking place before the Tiki Torch parade began. When the Ed Snowden story broke, and my friends on the left went nuts about those revelations, I wrote a post recommending calm, in which I opined, “While oversight is an essential, and believe it or not still extant, component of the American system, a universal and unwavering distrust in ‘the government’ is tantamount to distrust in one another, and this is the cancer that grows into a malignant threat to civil liberty.”

And here we are, witnessing real threats to the constitutional order of the United States, as the Former Republican Party (FRP) is consumed by a cult of personality, surfing waves of bullshit about the most basic mechanisms of government and law enforcement. On the other side, we share memes lampooning the “law and order” party for shrugging off credible threats to attack the FBI, the Attorney General, and a federal judge, but perhaps we choose to forget that this same conspiratorial rhetoric, comparing the American justice system to the KGB et al., was more universal before the election of 2016.

As when watching glaciers melt and rivers evaporate, it is easy to think that the erosion of trust in core institutions is beyond repair—that the only real question is who is doing the distrusting. And to believe that social platforms are not an underlying cause of this harm is as willfully ignorant as believing that easy access to firearms is not the key ingredient in mass shootings.

Social media offers some nice features, but on balance, it has made everything suck faster. It is a hallucinogen that produces twin chimeras named Information and Engagement, who gnaw on Common Sense and Humility until Narcissism and Arrogance prevail. Take, for instance, this little collage made from responses to photographer Jeff Sedlik’s copyright lawsuit against tattoo artist Kat Von D:

I draw your attention to both the ignorance and the style in this hatecloud—not because it is rare, but because it is common to the point of predictability. This is how we talk now about almost everything. Those comments were made by ordinary individuals, probably decent people most of the time, who would nonetheless be unlikely to admit that they know less than nothing about the law or about Sedlik and his motives. And all that rancor directed at one individual, empowered by technology designed to “connect people,” is just a response to a little copyright case. So, can we really be surprised that, by means of the same tech exploiting the same psychological frailties, tens of millions of people are easily duped into believing that an election was stolen, or that the U.S. Justice Department is indistinguishable from the Stasi?

In 2012, in that same intro to the first podcast, I quoted Mark Twain, who said, “It’s not what you don’t know that gets you into trouble. It’s what you know for sure that just ain’t so.” A keen observer of human nature, Twain foretold the Big Tech Lie that is still flooding the zone with millions of other lies, which, like too much carbon in the atmosphere, may yet make the world uninhabitable.


Photo by: ole999