Among the briefs filed in Gonzalez v. Google asking the Supreme Court to properly read Section 230 of the Communications Decency Act is one filed by Sen. Ted Cruz, Rep. Mike Johnson, and fifteen other Republican Members of Congress. Presenting textual arguments similar to those in the brief filed by the Cyber Civil Rights Initiative (CCRI), highlighted here in a recent post, Sen. Cruz et al. petition the Court to address a matter that has nothing to do with Section 230, namely a politically motivated complaint summarized as follows:
Confident in their ability to dodge liability, platforms have not been shy about restricting access and removing content based on the politics of the speaker, an issue that has persistently arisen as Big Tech companies censor and remove content espousing conservative political views, despite the lack of immunity for such actions in the text of §230.
Allegations of viewpoint bias are inaptly raised in Gonzalez, or indeed any case addressing Section 230. Even if it could be shown that a social platform actively engages in true political bias (i.e., moderating ideas and speakers rather than extremism), this is not a Section 230 issue because neither political nor any other form of bias necessarily implicates civil liability for an online platform any more than it does for a newspaper or TV network.
The First Amendment protects bias, and Section 230 does not alter this fact. Hence, the Cruz brief strays far from the purpose of the Court’s review in Gonzalez by erroneously implying that bias is inherently grounds for litigation when it alleges that the overbroad interpretation of Section 230 immunity causes or sustains politically motivated censorship. But Section 230 is not and never has been a viewpoint neutrality law. Cruz et al. are asking the Court for a misreading that has lived in the PR of the platforms and the rhetoric of tech-utopianism, but is nowhere in the statute.
Specifically, the Cruz brief alleges that the platforms have been shielded in censorious conduct by a poor statutory reading of their right to “good faith” removal of material that is “otherwise objectionable.” The amici argue that those words must be read in balance with the preceding words in the statute providing immunity where platforms remove or restrict access to third-party content that is “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.” The brief then asserts (a bit wryly) that “…conservative viewpoints on social and political matters do not rise to the level of being ‘obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable.’”
Notably, the brief both omits any definition of “conservative” in the context of its argument and asks the Court to read Section 230 as a mandate that platforms leave online all material that does not meet a very narrow, statutory definition of “objectionable.” That reading is wrong. Section 230 was written to encourage platforms to adopt and enforce their own community standards (i.e., decide what is objectionable), which does not disturb the general right to host a platform that may be politically biased in any direction. A proper reading of 230 simply means that platforms shall not be unconditionally immunized against potential liability for hosting content that results in some form of harm that may be remedied through civil litigation.
The Cruz brief does not distinguish between amici’s political bona fides and the broad spectrum of hate-speech and violence-inciting material that some Americans now call “conservative,” and which is indeed problematic for platforms. For instance, as the Alex Jones verdicts or the January 6th convictions make clear, material that certain people are willing to label “conservative” may, as a matter of law, be libel, defamation, or disinformation that results in individuals being harassed and threatened, or which leads to violence—or even an insurrection. And it is precisely in this context (i.e., blurring the line between political views and actionable conduct) that the complaint in the Cruz brief is so inaptly raised in Gonzalez.
Petitioner Gonzalez alleges that Google’s “recommendation” algorithms contributed to fostering terrorist activity by promoting ISIS recruiting videos in a manner that predictably roused a latent terrorist, who then acted on those emotional triggers. Regardless of whether that complaint prevails under the anti-terrorism statutes at issue, the general allegation about the platform’s role in Gonzalez is indistinguishable from an algorithm detecting that a user likes InfoWars and, therefore, “recommending” QAnon videos or some other tinfoil-hat material with the foreseeable result that some domestic terrorist will assault a family in Sandy Hook, ransack the Capitol, conspire to kidnap a sitting governor, etc.
Thus, if the Court agrees that Google is not shielded from litigation in Gonzalez, the allegations of liability presented are directly analogous to an Alex Jones or election-denier issue for a platform moderation team. Even if Google is ultimately not found liable for the ISIS-related killing of Nohemi Gonzalez, allowing the case to proceed past the Section 230 veil will demonstrate that there is a plausible, common-sense nexus between amplification of certain material and harmful conduct.
Under a correct reading of 230 (i.e., no unconditional immunity for platforms), the platforms may be more effective in addressing inciting material, a goal that should have bipartisan support from lawmakers interested in both a proper reading of the statute and the general welfare of the nation. Unfortunately, this political monkey wrench thrown into the Section 230 issue is part of a broader narrative in which social platforms have allegedly tilted the scales, but in favor of extreme right-wing material calling itself “conservative.” For instance, in February 2021, BuzzFeed reported:
Internal documents obtained by BuzzFeed News and interviews with 14 current and former employees show how the company’s policy team — guided by [Republican lobbyist and conspiracy promoter] Joel Kaplan, the vice president of global public policy, and Zuckerberg’s whims — has exerted outsize influence while obstructing content moderation decisions, stymieing product rollouts, and intervening on behalf of popular conservative figures who have violated Facebook’s rules.
This and other reports, including testimony before Congress, reveal a pattern of (if anything) pro-right-wing bias at Facebook and other platforms, including evidence that the “anti-conservative” story itself is a fiction promoted by individuals like Kaplan. More specifically, Zuckerberg’s apparent resistance to removing Alex Jones from the platform demonstrates how the chronic misreading of Section 230 would only benefit a Trumpianized GOP that embraces every extremist willing to wear a red hat.
A correct reading of 230 opens the possibility that Facebook could be liable for hosting or “recommending” InfoWars, while an incorrect reading forecloses that possibility at summary judgment. Only one of these interpretations benefits those elements of the GOP who choose to align themselves more closely with that brand of “conservatism.” Thus, consistent with the Trumpian tactic of weaponizing alleged victimhood, the comparatively mild complaint of viewpoint bias in the Cruz brief is political theater: it blames social media platforms for actions that a) they have not taken; b) they have a constitutional right to take, if they want to; and c) are unrelated to Section 230 immunity; and d) it detracts from an important legal question for real victims barred from pursuing relief by a misreading of the statute.
Cruz and his fellow amici have heard or read testimony from witnesses like whistleblower Frances Haugen, who explained to the Senate Commerce Committee how Facebook consistently put profits ahead of safety, adding, “The result has been a system that amplifies division, extremism, and polarization — and undermining societies around the world. In some cases, this dangerous online talk has led to actual violence that harms and even kills people.” Specifically, Haugen and other former insiders have repeated the theme that extremism has been good for social platforms: angry users are active users, and active users translate into profit for these companies.
A proper reading of Section 230 will not solve every problem fostered by social platforms, but it can have the effect of forcing platform operators to identify when speech is reasonably linked to harmful conduct and to acknowledge a nexus between addictive algorithm design and illegal activity, from terrorism to “revenge porn.” Very real harms have been shielded and exacerbated by misreading Section 230, and it is this error of law that the Court should resolve. In the process, it should decline to address the subject of viewpoint neutrality as the inappropriate, political sideshow it is.
I agree completely with your argument. However, I want to pass along an “alternate reality” I caught wind of last week during a presentation by a noxious gasbag with more hair plugs than Elon Musk himself: one Judge Andrew Napolitano.
I don’t see anything in your argument that addresses their First Amendment loophole. Is it feasible? I don’t know, but I’m certain it’s the type of Machiavellian scheme dreamt up by those whose hands are greased with lobbying lard. The Judge’s argument was something like this:
When it can be proven that a private platform is in partnership with the government (he did that stupid gesture of locking his hands together by interlacing his fingers), it ceases to be immune from First Amendment standards and must allow for unfettered free speech.
I’ve heard Musk argue the same thing over at Twitter, while everyone knows the real motivation is that it’s more profitable to run a platform into the ground by doing away with content moderation altogether, treating all users as collateral damage. Libertarian 101.
It seems so bald-faced and blatant a kluge that I can’t believe they would really think it’s legit, but it may be that the irrationality of the idea is perfectly logical if the real political motivation is to bring the gears of government to a grinding halt.
Thanks, Michelle. I am not familiar with Napolitano’s statement or the context in which he said it, but as you have described it, it sounds like nonsense. How do we define “in partnership with the government”? How many private entities with government contracts lose their 1A rights? I would imagine the answer to that is none, notwithstanding keeping mum about classified information. As we know, conflating the speech right with the business models of the platforms began with the platforms themselves, and to the extent they may be hoist by their own petard, so be it. But I would rather not see regular folks, real victims of cybercrime, and the Republic itself hoist with them. 🙂