Sen. Cruz Brief Wrongly Portrays Section 230 as a Neutrality Law

Among the briefs filed in Gonzalez v. Google asking the Supreme Court to properly read Section 230 of the Communications Decency Act is one filed by Sen. Ted Cruz, Rep. Mike Johnson, and fifteen other Republican Members of Congress. Presenting textual arguments similar to those in the brief filed by the Cyber Civil Rights Initiative (CCRI), highlighted here in a recent post, Sen. Cruz et al. petition the Court to address a matter that has nothing to do with Section 230—a politically motivated complaint summarized as follows:

Confident in their ability to dodge liability, platforms have not been shy about restricting access and removing content based on the politics of the speaker, an issue that has persistently arisen as Big Tech companies censor and remove content espousing conservative political views, despite the lack of immunity for such actions in the text of §230.

Allegations of viewpoint bias are inaptly raised in Gonzalez, or indeed any case addressing Section 230. Even if it could be shown that a social platform actively engages in true political bias (i.e., moderating ideas and speakers rather than extremism), this is not a Section 230 issue because neither political nor any other form of bias necessarily implicates civil liability for an online platform any more than it does for a newspaper or TV network.

The First Amendment protects bias, and Section 230 does not alter this fact. Hence, the Cruz brief strays far from the purpose of the Court’s review in Gonzalez by erroneously implying that bias is inherently grounds for litigation when it alleges that the overbroad interpretation of Section 230 immunity causes or sustains politically motivated censorship. But Section 230 is not and never has been a viewpoint neutrality law. Cruz et al. are asking the Court for a misreading that has lived in the PR of the platforms and the rhetoric of tech-utopianism, but is nowhere in the statute.

Specifically, the Cruz brief alleges that the platforms have been shielded in censorious conduct by a poor statutory reading of their right to “good faith” removal of material that is “otherwise objectionable.” The amici argue that those words must be read in balance with the preceding words in the statute providing immunity where platforms remove or restrict access to third-party content that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” The brief then asserts (a bit wryly) that “…conservative viewpoints on social and political matters do not rise to the level of being ‘obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.’”

Notably, the brief both elides any definition of “conservative” in the context of its argument and asks the Court to read Section 230 as a mandate that platforms leave online all material that does not meet a very narrow, statutory definition of “objectionable.” That reading is false. Section 230 was written to encourage platforms to adopt and enforce their own community standards (i.e., to decide what is objectionable), which does not disturb the general right to host a platform that may be politically biased in any direction. A proper reading of 230 simply means that platforms shall not be unconditionally immunized against potential liability for hosting content that results in some form of harm which may be remedied through civil litigation.

The Cruz brief does not distinguish between amici’s political bona fides and the broad spectrum of hate-speech and violence-inciting material that some Americans now call “conservative,” and which is indeed problematic for platforms. For instance, as the Alex Jones verdicts or the January 6th convictions make clear, material that certain people are willing to label “conservative” may, as a matter of law, be libel, defamation, or disinformation that results in individuals being harassed and threatened, or which leads to violence—or even an insurrection. And it is precisely in this context (i.e., blurring the line between political views and actionable conduct) that the complaint in the Cruz brief is so inaptly raised in Gonzalez.

Petitioner Gonzalez alleges that Google’s “recommendation” algorithms contributed to fostering terrorist activity by promoting ISIS recruiting videos in a manner that predictably roused a latent terrorist, who then acted on those emotional triggers. Regardless of whether that complaint prevails in the context of the anti-terrorism statutes at issue, the general allegation about the platform’s role in Gonzalez is indistinguishable from an algorithm detecting that a user likes InfoWars and, therefore, “recommends” QAnon videos or some other tinfoil-hat material with the foreseeable result that some domestic terrorist will assault a family in Sandy Hook, ransack the Capitol, conspire to kidnap a sitting governor, etc.

Thus, if the Court agrees that Google is not shielded from litigation in Gonzalez, the allegations of liability presented, even if they do not prevail in that instance, are directly analogous to an Alex Jones or an election-denier issue for a platform moderation team. Even if Google is ultimately not found to be liable for the ISIS-related killing of Nohemi Gonzalez, allowing the case to proceed past the Section 230 veil will demonstrate that there is a plausible, common-sense nexus between amplification of certain material and harmful conduct.

Under a correct reading of 230 (i.e., no unconditional immunity for platforms), the platforms may be more effective in addressing inciting material—a goal that should have bipartisan support from lawmakers interested in both a proper reading of the statute and the general welfare of the nation. Unfortunately, this political monkey wrench in the Section 230 issue is part of a broader narrative in which social platforms have allegedly tilted the scales—but in favor of extreme right-wing material calling itself “conservative.” For instance, in February 2021, BuzzFeed reported:

Internal documents obtained by BuzzFeed News and interviews with 14 current and former employees show how the company’s policy team — guided by [Republican lobbyist and conspiracy promoter] Joel Kaplan, the vice president of global public policy, and Zuckerberg’s whims — has exerted outsize influence while obstructing content moderation decisions, stymieing product rollouts, and intervening on behalf of popular conservative figures who have violated Facebook’s rules.

This and other reports, including testimony before Congress, reveal a pattern of (if anything) pro-right-wing bias at Facebook and other platforms, including evidence that the “anti-conservative” story itself is a fiction promoted by individuals like Kaplan. More specifically, Zuckerberg’s apparent resistance to removing Alex Jones from the platform demonstrates how the chronic misreading of Section 230 would only benefit a Trumpianized GOP that embraces every extremist willing to wear a red hat.

A correct reading of 230 opens the possibility that Facebook could be liable for hosting or “recommending” InfoWars, while an incorrect reading forecloses that possibility at summary judgment. Only one of these interpretations benefits those elements of the GOP who choose to align themselves more closely with that brand of “conservatism.” Thus, consistent with the Trumpian tactic of weaponizing alleged victimhood, the comparatively mild complaint of viewpoint bias in the Cruz brief is political theater—blaming social media platforms for actions that a) they have not taken; b) they have a constitutional right to take, if they want to; c) are unrelated to Section 230 immunity; and d) detract from an important legal question for real victims barred from pursuing relief by misreading the statute.

Cruz and his fellow amici have heard or read testimony from witnesses like whistleblower Frances Haugen, who explained to the Senate Commerce Committee how Facebook consistently put profits ahead of safety, adding, “The result has been a system that amplifies division, extremism, and polarization — and undermining societies around the world. In some cases, this dangerous online talk has led to actual violence that harms and even kills people.” Specifically, Haugen and other former insiders have repeated the theme that extremism has been good for social platforms—that angry users are active users, and active users translate to profits for these companies.

A proper reading of Section 230 will not solve every problem fostered by social platforms, but it can have the effect of forcing platform operators to identify when speech is reasonably linked to harmful conduct and to acknowledge a nexus between addictive algorithm design and illegal activity—from terrorism to “revenge porn.” Very real harms have been shielded and exacerbated by misreading Section 230, and it is this error of law which the Court should resolve. In the process, it should decline to address the subject of viewpoint neutrality as the inappropriate, political side show it is.

What I’ll Be Watching in 2023

’Tis the week for year-in-review and/or looking-ahead articles. In that spirit, I recommend posts by Devlin Hartline, Hugh Stephens, and Aaron Moss. And here’s my list with commentary for your consideration:

AWF v. Goldsmith

Everyone in copyright world will be waiting, like Ralphie expecting his decoder ring, for the decision in this case. The highly anticipated question is whether the Supreme Court will provide clear guidance on the meaning of “transformativeness” in the factor one analysis of the fair use test. By invoking this highly subjective concept, follow-on creators have, at times, pushed lower court decisions toward problematic findings—first by finding “transformativeness” in secondary works that encroach on the derivative works right and/or in classic instances where licensing is required; and second by compounding these errors with undue weight given to factor one in the overall analysis.

AWF has argued that any “new meaning or message” that may be subjectively perceived in a follow-on work meets the definition of “transformative” and is, therefore, outcome determinative for a finding of fair use. Although I have opined that this case poses certain difficulties (i.e., that Warhol may have defenses under other principles), I agree that AWF’s argument should be rejected and believe the Court should state that factor one must turn on whether the follow-on work contains at least some modicum of commentary on the original work. Absent such commentary, factor one should favor the plaintiff copyright owner. We shall see what the Court says in the coming weeks.

Hachette v. Internet Archive

Although this is a very big case that could go all the way to the Supreme Court, it is almost impossible to fathom how Internet Archive is not destined to be rebuffed on the merits at every turn. What began as a lawsuit in response to IA’s unlicensed distribution of over one million titles (using the fog of early COVID shutdowns as a rationale) is now a detailed complaint in which the facts imply more than just founder Brewster Kahle’s anti-copyright crusade.

In 2017, I asked whether the good aspects of IA require all the anti-copyright rhetoric in order to exist, and that was presuming Kahle’s well-known opposition to copyright was purely ideological. But some of the details in the publishers’ complaint imply financial interests that belie any pretense that IA is a principled, though misguided, Robin Hood. Expect to see the organization continue to allege that it is “just a library doing what libraries do,” but if this were true, the publishers’ suit would have been dismissed at the summary judgment stage. It should be clear by now that just because you say something on social media, that doesn’t make it true—least of all in a court of law.

The Copyright Claims Board (CCB)

The CCB launched this past June, and 2023 may be the year we really start to test the efficacy of the copyright small-claim alternative. For starters, the big question is how many respondents will opt out of the tribunal. In order for the small-claim option to be constitutional, a defendant (respondent) must voluntarily agree to the proceeding, which led some to wonder, reasonably, whether the CCB will work at all if every defendant can simply opt out. But that question partly depends on how many plaintiffs are willing to file federal lawsuits if the respondents are unwilling to resolve the matter at the CCB.

Further, to really understand how things are going at the CCB, we need a volume of cases and more time to allow the process to unfold. The plaintiff has 90 days once her case is active to show proof of service on the respondent, and the respondent has 60 days to opt out of the proceeding. Thus, with fewer than 300 cases filed between June and December this year, we simply do not have a lot of data yet. That said, Rachel Kim at Copyright Alliance posted a blog summarizing what we do know so far, and it’s worth a read.

Artificial Intelligence

I will not attempt to predict where this story goes in 2023, other than to expect that AI will continue to make headlines in the art world and beyond. As stated many times, I personally think AI-generated “art” is a useless waste of computing power, but even if every artist and art consumer in the world agrees with that view, it seems unlikely that market failure of the companies behind generative AIs will predate one of these entities getting sued for copyright infringement. Perhaps not this coming year, but before long, expect to see litigation over the question of whether inputting large volumes of protected creative works into these databases amounts to mass copyright infringement or is exempted under the doctrine of fair use. And in anticipation of this battle, both sides of the argument may be scrutinizing the opinion(s) in AWF v. Goldsmith.

Gonzalez v. Google

Not a copyright case, but on the subject of platform accountability, the Supreme Court will finally have something to say about Section 230 of the Communications Decency Act. The decision likely won’t come until 2024, but we will soon see briefs filed on behalf of Google, and oral arguments will be heard in 2023. I recently posted about this case here and here, but suffice it to say, it is hard to imagine that the majority will not generally agree that the statute neither states—nor ever intended to state—that online platforms are entitled to the kind of unconditional, broad shield against civil liability the lower courts have granted them for nearly 20 years.

Although 230 is not copyright law, it shares a kinship with the contemporaneous DMCA. Both laws were predicated on immunizing platforms from liability for material posted by users, and although neither law grants these immunities unconditionally, many online service providers—especially the big ones—have wielded these liability shields beyond the limits of reason or anything Congress intended in the late 1990s. Thus, if the Court reins in the free-for-all applied to date under Section 230, it is conceivable that the opinion in Gonzalez will inform congressional review of the DMCA, which began in 2020.

That’s what I got for this December 30, 2022. See you in the new year!


Photo by: MediaFuzeBox

Cyber Civil Rights Initiative Files Common Sense Brief in Major Section 230 Case

In my recent post about Gonzalez v. Google—the Section 230 case granted cert by the Supreme Court—I expressed the view that the word “recommendation” is too charming to describe the interaction between social media algorithms and many users’ experiences. Systems capable of reinforcing suicidal ideations in a teenager or stoking violent instincts in a potential terrorist cannot sensibly be described as “recommending” the kind of content associated with these and other dangerous outcomes. And although petitioner Gonzalez specifically asks the Court to decide whether “algorithmic recommendation” is shielded from liability under Section 230 of the Communications Decency Act, the amicus brief filed by the Cyber Civil Rights Initiative (CCRI) and Legal Scholars asks the Court for a more nuanced reading of the question. From the brief…

Amici emphasize that this case cannot be correctly decided by focusing on “traditional editorial functions” or by trying to craft a general rule about whether “targeted algorithms” fall within Section 230’s immunity provision…. To categorically deny immunity to an ICSP for using targeted algorithms would directly contradict Section 230(c)(2) and finds no support in Section 230(c)(1). Such an interpretation would also have a devastating impact on the victims of online abuse by dissuading Good Samaritan ICSPs from using targeted algorithms to remove, restrict, or otherwise reduce the accessibility of harmful material, including nonconsensual pornography.

CCRI, which works to address and remedy various forms of harassment and civil rights abuses committed via interactive computer service providers (ICSPs), asks the Court to restore the textually coherent and common-sense meaning of Section 230, which was written to encourage service providers to mitigate harmful material—not to unconditionally immunize them from liability for hosting it. For almost twenty years, lower courts have consistently misinterpreted Section 230 as providing automatic immunity just so long as the material at issue is posted by someone other than the platform owner/managers.

This chronic misreading of Section 230 results in two significant problems: 1) dismissal at the summary judgment stage of any claim in which an ICSP may be liable; and 2) failures to provide injunctive relief where the ICSP is not liable but may be ordered to remove material which the court agrees is causing harm to a complainant. As things stand, a site that intentionally trades in harmful material is immunized, and so is a site that unintentionally hosts harmful material but elects not to remove the material for its own reasons. The rationales vary as to why “neutral” platform operators often refuse to remove material alleged, or even proven, to be harmful, but for too long, the industry has echoed the absurd premise that removing anything from a social platform is incompatible with “a free and open internet.”

Section 230 Is (Was) Not Novel Legislative Territory

The CCRI brief is so firmly grounded in the legislative history of Section 230 that it is difficult to fathom how any court—let alone many courts—strayed so far, and for so long, from a plain-text reading of the statute. In describing the common-law (i.e., not groundbreaking) underpinnings of Section 230, for instance, CCRI cites the distinction between a “publisher” and a “distributor” of defamatory material thus:

… “[d]efamation at common law distinguished between publisher and distributor liability.” While a publisher was strictly liable for carrying defamatory matter, a distributor who only “delivers or transmits defamatory matter published by a third person is subject to liability if, but only if, he knows or has reason to know of its defamatory character.” [Emphasis added.]

This is common sense well founded in law. If an individual or a business has knowledge that he/it is facilitating harm caused by a separate, directly liable party, that facilitation may rise to a secondary civil or criminal liability. The newsstand operator is not liable for inadvertently selling adult magazines containing underage models, but if he knows about it, he is probably—and deservedly—in big trouble.

This basic principle of secondary liability applies everywhere except for internet platforms—and only because the courts have so thoroughly misconstrued Section 230 by conflating two sub-sections of the statute, which are meant to be read independently. As the CCRI brief explains, 230(c)(1) states that merely providing access to third-party content (e.g., YouTube hosting a video uploaded by a user) does not make the ICSP a “publisher” or “speaker.” Then, 230(c)(2) states that voluntarily making a good-faith effort to remove objectionable material does not make the ICSP generally liable as a “publisher” of everything it hosts.

“Cases reading Section 230 to have a broader preemptive effect than provided for in (c)(1) and (c)(2) have departed from the statutory text,” states the CCRI brief. It emphasizes the fact that “distributor liability” is envisioned by Section 230(c)(1) where the ICSP has knowledge of the harmful material, and it argues that the function of Section 230(c)(2) is legislatively “parallel” to state Good Samaritan laws written to immunize ordinary citizens against unreasonable liability when we make good-faith efforts to help someone in need of assistance. Prior to these laws, an individual intending to render aid to a stranger could be held liable for inadvertently causing harm, but as the CCRI brief states:

… like state Good Samaritan statutes, Section 230(c)(2) includes important limits to the immunity it provides. First, it does not apply when an ICSP is already under an existing duty to act—i.e., where its action to restrict access to objectionable third-party content is not “voluntary.” Nor does it immunize ICSPs that do nothing to address harm or that contribute to or profit from harm.

Again, this is just common sense grounded in common law that applies everywhere except the internet. If one does not initiate illegal activity but seeks to benefit from that activity, one may be liable for the harm caused. It is inconceivable that Congress ever intended to exempt the multi-billion-dollar internet industry from this longstanding principle. And that’s because it intended no such thing.

It will be interesting to see what amici who file on behalf of Google will argue in this case. Other than the usual panegyrics to the internet, I am curious to see whether, for instance, the EFF will have anything coherent to say in defense of two decades’ worth of textual misreading. Typically, defenders of the status quo reading of Section 230 write about threats to “the internet” as if a lack of immunity automatically results in a finding of liability and damages. But on the contrary, a proper reading of the law simply means that an ICSP cannot so easily dismiss every claim and that the injured party is allowed her day in court to prove whether a platform had, or has, a duty to act. Litigating against tech giants is hardly a fair fight in the first place, and ICSPs neither need nor deserve an unconditional immunity that exists nowhere else in the justice system.