Can We Hope to Sensibly Reform Section 230?

In a paper published in 2020, [1] scholars Danielle Keats Citron and Mary Anne Franks advocate a relatively modest and elegant approach to amending Section 230 of the Communications Decency Act of 1996—changes that would directly help the statute’s unintended victims—but it is difficult to imagine how any nuanced consideration of the 230 issue will make headway in the current political climate.

At one extreme, the Former Republican Party (FRP) has amped up “Repeal 230” into a buzzy talking point with no practical or legal merit whatsoever, while shouting from the other side of the vortex are the internet industry and its network of supposedly progressive groups, who insist that the status quo of 230 is the keystone of the entire internet ecosystem. What these seemingly opposite forces have in common is that both have exploited the misconception that Section 230 has something to do with viewpoint neutrality. It does not. Neither by the letter nor the spirit of the law.

To recap, there are two main parts to Section 230. The first states that online service providers will not be considered “publishers” of material provided by other parties. So, if you or I or the NYT posts something on Facebook that is both potentially harmful and unprotected speech (e.g. defamation), Facebook is shielded from potential liability resulting from that material. The second part, known as the “Good Samaritan” clause, states that when a platform engages in moderation and removes “objectionable material,” this does not render the platform a potentially liable “publisher” either. And it does not matter whether the “objectionable material” is illegal content (e.g. child sexual abuse material) or simply material the platform proscribes according to its own terms of service.

Nothing in the 230 statute states, or even implies, that service providers are bound by the First Amendment—indeed, as private entities, it is their First Amendment right to moderate as they wish—or that they are obligated to maintain viewpoint neutrality as a condition of the liability shield. That said, it was the platform operators themselves who promoted the false narrative that social media sites are the shiny new “engines of speech” right up until 2016, when “objectionable material” (mostly in the form of dangerous misinformation) steadily became the largest plank in the platform of what used to be the Republican Party. Meanwhile, the real victims of Section 230’s unintended consequences may continue to be ignored amid the storm of insanity encircling this one fragment of cyber law.

Simply put, Section 230 is the reason why online platforms may not be held liable when their operators host, or even encourage and monetize, any of the following: nonconsensual pornography, child sexual abuse material (CSAM), libel and defamation, hazardous misinformation, organized hate groups, harassment, or incitements to violence. And while vested interests play rhetorical games with the allegedly blurry lines between speech and any of that material, Citron and Franks first advocate clarifying that ambiguity by striking the word information from part one of the statute and replacing it with the word speech. “The revision would put all parties in a Section 230 case on notice that the classification of content as speech is not a given, but a fact to be demonstrated,” their paper states.

Unlike “information,” protected “speech” has a legal definition rooted in case law, and at least some of the aforementioned categories of material would never qualify as speech under legal scrutiny, while others (e.g. hate speech) would be subject to review on a case-by-case basis. Perhaps most importantly, what this single-word change would likely accomplish for, say, victims of harassment is that it would more frequently induce a platform to remove harmful material, either voluntarily or by court order, rather than litigate to prove that the harmful content is protected speech. As things stand, almost everything online is presumed to be speech. So, if a party uses any intermediary, from Twitter to a dating app, to cause even severe harm to another party, the intermediary is under no obligation to provide relief by removing the content. And most courts have held that 230 supports this position.

Under this one-word revision, if a platform knowingly continues to host allegedly actionable material, the platform voids its presumption of immunity, which does not mean it is necessarily liable for any harm. A complainant still bears the burden of proving the merits of a complaint, just as in any other case, but the platform would not automatically be granted immunity at the summary judgment phase. Meanwhile, the only form of relief many complainants ever want is removal of the harmful content, not necessarily a damage award from a platform that otherwise does the right thing.

In that regard, if a platform unknowingly hosts potentially actionable content, as almost any platform inevitably does, Citron and Franks advocate another modification to 230, requiring that a platform demonstrate that it maintains a “reasonable,” ongoing practice of removing objectionable material upon notice or independent discovery of the problematic content. [2] “If adopted,” their paper states, “the question before the courts in a motion to dismiss on Section 230 grounds would be whether a defendant employed reasonable content moderation practices in the face of unlawful activity that manifestly causes harm to individuals.”

This reasonableness standard would presumably accomplish two things: first, it would provide the many platforms operating in good faith with the kind of liability protection intended by Section 230; and second, it would immediately void the liability shield for those platforms that intentionally operate as Bad Samaritans. Sites that purposely trade in libel and defamation, nonconsensual pornography, harassment (and quite possibly hate speech and incitements to violence) would no longer be able to duck behind the Vibranium shield they have been wielding to avoid being named as parties in litigation. In many cases, this requirement to demonstrate a “reasonable” moderation policy would probably obliterate the business models of sites that intentionally profit from the misery of others, and I fail to see a downside in that outcome.

Of course, amending 230 requires an act of Congress, and there’s the rub. Not only will Silicon Valley throw its considerable resources at campaigns to leave the statute untouched until doomsday, but step one proposed by Citron and Franks—replacing information with speech—runs head-first into the existential crisis we currently face as a nation. Political speech is paradigmatically protected speech, arguably the most sacred of all forms of protected speech. But at present, one party has decided that its political speech shall embrace an insurrection of lies, outlandish conspiracy theory, and even violence against the very foundation on which the speech right itself is written. Whether we survive that paradox is a much bigger question than internet governance, but for the everyday victims of Section 230, it would be grand if we could address what is legitimately wrong with this law.

[1] Danielle Keats Citron and Mary Anne Franks, “The Internet as Speech Machine and Other Myths Confounding Section 230 Reform,” University of Chicago Legal Forum (2020). https://legal-forum.uchicago.edu/publication/internet-speech-machine-and-other-myths-confounding-section-230-reform

[2] As the paper states, this proposal originates with Citron and colleague Benjamin Wittes.


