A New Unfortunate Twist in the Section 230 Saga: Politics

This blog contains several posts questioning the premise that the ISP liability shield known as Section 230 of the Communications Decency Act of 1996 is a “sacred” law, without which the internet would cease to do all the wonderful things it does. Like foster a more rational, diplomatic, and thoughtful political climate around the world. (Because that’s going so well.)

What began as an incentive for platform responsibility became the legal basis for a lot of platform irresponsibility over the past two decades. In this regard, I and others have pointed to specific instances of tangible harm done via the web and the general lack of cooperation by platform owners to remove, demote, or delist harmful content—even when ordered by a court to do so.

Most recently, the Hassell v. Bird case (see posts here and here) illustrates how far the judicial application of the Section 230 liability shield has strayed from its intent. In fact, the application is so extreme that one part of Yelp’s defense in this case boils down to the following irony: although Section 230 was created to spare platforms undue litigation, Yelp argued that, because it was not a named party in Hassell’s complaint, it was denied due process by being deprived of the chance to litigate. Underscoring the irony, the action a California court had ordered—deleting content held to be defamatory—would have cost the company nothing, which is entirely consistent with the original purpose of Section 230.

In July 2017, I summarized an academic paper describing specific ways in which Section 230 has had the unintended consequence of shielding some very bad actors online. The authors of that paper even recommended subtle tweaks to the statute that might mitigate the kind of harm being done, including language that prefigures FOSTA, signed into law in April of this year.

As that amendment exemplifies, Section 230 was never intended to foreclose all possibility of civil or criminal remedy merely because harmful conduct occurs in cyberspace. At the same time, while criticizing the absolutism of 230, I also recognize the difficulty inherent in the unprecedented paradigm of social media—that these are privately owned public spaces whose sole purpose is to host users’ speech.

Unfortunately, any such nuanced discussion has historically been overwhelmed by the industry and its well-funded “activists,” who claim that the status quo of the 230 statute is “sacred,” that even the slightest adjustment will undermine the core functioning of the internet and threaten speech online. And “sacred” is exactly what Issie Lapowsky called the statute in her recent article in Wired, which claims that lawmakers don’t understand the nature of the statute they’re threatening to “gut.” While many of us clearly do not agree that 230 is quite so inviolate, Lapowsky’s article points to a new twist in this tale that can only add another layer of confusion to an already complex issue: partisan politics.

Back in November of 2017, hearings at the House Judiciary Committee were generally bipartisan in tone, investigating the manner in which Russian agents bought American political ads on social platforms. Both Democrats and Republicans specifically recommended that the representatives of Google, Facebook, and Twitter drop the longstanding rhetoric that they operate “neutral platforms” on which they bear no responsibility for the content posted by users. All three representatives had little choice other than to concede, in testimony anyway, that their laissez-faire policy had gone too far—and this was before evidence emerged linking Cambridge Analytica, Russian troll farms, and Facebook user data.

Section 230 was naturally a running theme during those hearings because most lawmakers do understand that the statute is the primary legal foundation on which platforms assert their neutrality. But in more recent hearings held last week, Republicans on the House Judiciary Committee amped up accusations that the major sites engage in partisan bias—asserting that they remove or demote “conservative” content while leaving up “liberal” content.

In this context, Lapowsky quotes Rep. Matt Gaetz (R-FL), who stated, “When you avail yourself to the protection of Section 230, do you necessarily surrender your rights to be a publisher or speaker? The way I read that statute now, it’s pretty binary. It says you have to be one or the other.” In other words, Republicans on the Committee acknowledge that the platforms have a First Amendment right to advance or demote any content they want, but they contend that doing so makes them “publishers” and theoretically nullifies the Section 230 immunity.

In response, Lapowsky cites attorney Eric Goldman, whose explication of Section 230 is more accurate than the congressman’s, even if it is somewhat Pollyannaish about the manner in which the statute has been applied in practice. Goldman correctly points out that Section 230 cannot logically vitiate a platform’s First Amendment right to control content when the very purpose of the statute is to encourage sites to control content. Unfortunately, that principle has too often been argued in reverse—as a right to leave content online even when it is harmful or held unlawful by a court.

Gaetz’s line of inquiry caught my attention, though, because I said almost the same thing in my first post about Hassell, albeit in a very narrow and apolitical context, believing that 230 should not immunize Yelp against complying with a court order to remove unprotected speech. Nevertheless, if Gaetz and his colleagues are indeed threatening the platforms with “gutting” 230 in response to alleged political bias, the complexity of this discussion just went to eleven.

Legally, socially, and politically, the whole subject of platform responsibility becomes disturbingly muddied amid accusations of “partisanship,” especially in a climate in which too many mainstream Republicans have lately embraced content that any reasonable person of any political party should find objectionable. For instance, if Facebook were to drop the Infowars page, would House GOP members consider this anti-conservative bias? I ask because it was not that long ago that people seemed to know the difference between a conservative like George Will and a tinfoil-hat-wearing sociopath like Alex Jones.*

This, of course, has been one of the “benefits” of democratization through internet technology: it has coalesced, legitimized, and, most importantly, monetized crazy people. What we used to call the “lunatic fringe” of both the right and the left has now moved into the center. We’ve entered a new reality in which American citizens not only want to thank Vladimir Putin, if indeed he meddled with the election in Trump’s favor, but also have the means to declare this insanity in public and to build solidarity with other citizens who are likewise deluded. Or are these even American citizens at all? Are they Russian trolls being paid to make more mischief? Or bots? We have no idea.

What a bipartisan Congress ought to be able to recognize at this point is that a completely unfettered internet (i.e. one without platform responsibility) has not yielded a stronger, more stable, more rational body politic. To the contrary, even as platforms claim that they’re taking more responsibility, there may be no ameliorating the kind of factionalism and mob mentality that the internet fosters so perfectly, and which the American Framers feared so presciently.

We’re living in a reality where reasoned debate on almost any issue is consumed by the circus—by forces that are visibly hammering at the foundations of the Republic—and the last thing we need to inject into a policy discussion about platform responsibility is the rhetoric of partisanship. The fundamental purpose of Section 230 remains sound; its flaws are fairly nuanced.

And while I would personally love to see Facebook remove Infowars and Antifa** for the sake of sanity, a dubious narrative accusing social platforms of political bias cannot be the proper framework for reasoned discussion about the flaws in Section 230. It should instead be the aim of representatives of both parties to address the specific mechanisms by which a law written to motivate “Good Samaritans” has too often shielded bad ones.


*UPDATE: As of July 27, 2018, CNN reports that Facebook has suspended Jones’s personal profile page.

**At the time of writing, this referred to the militant, violent factions worldwide that identify themselves as Antifa. It has since become a much more muddled identification.

