Cyber Civil Rights Initiative Files Common-Sense Brief in Major Section 230 Case

In my recent post about Gonzalez v. Google—the Section 230 case granted cert by the Supreme Court—I expressed the view that the word “recommendation” is too benign to describe the interaction between social media algorithms and many users’ experiences. Systems capable of reinforcing suicidal ideation in a teenager or stoking violent instincts in a potential terrorist cannot sensibly be described as “recommending” the kind of content associated with these and other dangerous outcomes. And although petitioner Gonzalez specifically asks the Court to decide whether “algorithmic recommendation” is shielded from liability under Section 230 of the Communications Decency Act, the amicus brief filed by the Cyber Civil Rights Initiative (CCRI) and Legal Scholars asks the Court for a more nuanced reading of the question. From the brief…

Amici emphasize that this case cannot be correctly decided by focusing on “traditional editorial functions” or by trying to craft a general rule about whether “targeted algorithms” fall within Section 230’s immunity provision…. To categorically deny immunity to an ICSP for using targeted algorithms would directly contradict Section 230(c)(2) and finds no support in Section 230(c)(1). Such an interpretation would also have a devastating impact on the victims of online abuse by dissuading Good Samaritan ICSPs from using targeted algorithms to remove, restrict, or otherwise reduce the accessibility of harmful material, including nonconsensual pornography.

CCRI, which works to address and remedy various forms of harassment and civil rights abuses committed via interactive computer service providers (ICSPs), asks the Court to restore the textually coherent and common-sense meaning of Section 230, which was written to encourage service providers to mitigate harmful material—not to unconditionally immunize them from liability for hosting it. For almost twenty years, lower courts have consistently misread Section 230 as providing automatic immunity so long as the material at issue is posted by someone other than the platform’s owners or managers.

This chronic misreading of Section 230 creates two significant problems: 1) dismissal, typically at the earliest stage of litigation, of any claim in which an ICSP may in fact be liable; and 2) denial of injunctive relief in cases where the ICSP is not itself liable but could otherwise be ordered to remove material that the court agrees is causing harm to a complainant. As things stand, a site that intentionally trades in harmful material is immunized, and so is a site that unintentionally hosts harmful material but elects not to remove it for its own reasons. The rationales vary as to why “neutral” platform operators often refuse to remove material alleged, or even proven, to be harmful, but for too long, the industry has echoed the absurd premise that removing anything from a social platform is incompatible with “a free and open internet.”

Section 230 Is (Was) Not Novel Legislative Territory

The CCRI brief is so firmly grounded in the legislative history of Section 230 that it is difficult to fathom how any court—let alone many courts—strayed so far, and for so long, from a plain-text reading of the statute. In describing the common-law (i.e., not groundbreaking) underpinnings of Section 230, for instance, CCRI cites the distinction between a “publisher” and a “distributor” of defamatory material thus:

… “[d]efamation at common law distinguished between publisher and distributor liability.” While a publisher was strictly liable for carrying defamatory matter, a distributor who only “delivers or transmits defamatory matter published by a third person is subject to liability if, but only if, he knows or has reason to know of its defamatory character.” [Emphasis added.]

This is common sense, well founded in law. If an individual or a business knowingly facilitates harm caused by a separate, directly liable party, that facilitation may give rise to secondary civil or criminal liability. The newsstand operator is not liable for inadvertently selling adult magazines containing underage models, but if he knows about it, he is probably—and deservedly—in big trouble.

This basic principle of secondary liability applies everywhere except to internet platforms—and only because the courts have so thoroughly misconstrued Section 230 by conflating two subsections of the statute that are meant to be read independently. As the CCRI brief explains, Section 230(c)(1) states that merely providing access to third-party content (e.g., YouTube hosting a video uploaded by a user) does not make the ICSP the “publisher” or “speaker” of that content. Section 230(c)(2) then states that an ICSP’s voluntary, good-faith effort to remove or restrict objectionable material does not expose it to liability for that moderation; in other words, policing some content does not make the ICSP liable as the “publisher” of everything it hosts.

“Cases reading Section 230 to have a broader preemptive effect than provided for in (c)(1) and (c)(2) have departed from the statutory text,” states the CCRI brief. It emphasizes that “distributor liability” survives Section 230(c)(1) where the ICSP has knowledge of the harmful material, and it argues that the function of Section 230(c)(2) is legislatively “parallel” to state Good Samaritan laws written to immunize ordinary citizens against unreasonable liability when they make good-faith efforts to help someone in need of assistance. Prior to these laws, an individual intending to render aid to a stranger could be held liable for inadvertently causing harm, but as the CCRI brief states:

… like state Good Samaritan statutes, Section 230(c)(2) includes important limits to the immunity it provides. First, it does not apply when an ICSP is already under an existing duty to act—i.e., where its action to restrict access to objectionable third-party content is not “voluntary.” Nor does it immunize ICSPs that do nothing to address harm or that contribute to or profit from harm.

Again, this is just common sense grounded in common law that applies everywhere except the internet. If one does not initiate illegal activity but seeks to benefit from that activity, one may be liable for the harm caused. It is inconceivable that Congress ever intended to exempt the multi-billion-dollar internet industry from this longstanding principle. And that’s because it intended no such thing.

It will be interesting to see what the amici who file on behalf of Google argue in this case. Other than the usual panegyrics to the internet, I am curious to see whether, for instance, the EFF will have anything coherent to say in defense of two decades’ worth of textual misreading. Typically, defenders of the status-quo reading of Section 230 write about threats to “the internet” as if a lack of immunity automatically results in a finding of liability and damages. On the contrary, a proper reading of the law simply means that an ICSP cannot so easily dismiss every claim and that the injured party is allowed her day in court to prove whether a platform had, or has, a duty to act. Litigating against tech giants is hardly a fair fight in the first place, and ICSPs neither need nor deserve an unconditional immunity that exists nowhere else in the justice system.

David Newhoff
David is an author, communications professional, and copyright advocate. After more than 20 years providing creative services and consulting in corporate communications, he shifted his attention to law and policy, beginning with advocacy of copyright and the value of creative professionals to America’s economy, core principles, and culture.