Just a few years ago, it would have been damn hard to find a random citizen who had even heard of Section 230 of the Communications Decency Act of 1996. Now, this bit of wonky, statutory arcana is a topic buzzing on mainstream news, chirping in the Twitterverse, opining in the blogosphere, and echoing through all those extra dimensions where people gather in cyberspace. 230 will probably be Thanksgiving talk this year—I hope in small gatherings observing safe protocols—but all this attention does not mean that public understanding of the law, or of its real problems, will be much improved.
For starters, Section 230 is only on the national radar because it has been politicized in a way that is both preposterous and tragic. The preposterous begins with Donald Trump and several vocal members of the GOP accusing the major platforms of partisan bias and censorship. Consequently, certain Republican Members of Congress have dangled the threat of repealing or amending the immunity from civil litigation that Section 230 currently provides to web platforms.
The political bias allegation is absurd and dangerous because it rests on the presumption that “conservative” now encompasses blatant disinformation, conspiracy theory, and organized hate groups that the major platforms have finally felt obliged to remove or mute. Trump and his most ardent fans endorse these negative forces, which is one reason why so many real conservatives, for the first time in their lives, are voting for the Democratic ticket this year.
As I’ve said before, rescuing intelligent and informed conservatism from the Trump wrecking ball is going to be a hell of a challenge for the GOP. But as part of that unenviable task, the putative leaders of the party’s renaissance could demonstrate some leadership in the §230 dustup by articulating clear distinctions as to what has truly gone awry with the law, and by acknowledging that addressing the legitimate concerns requires bipartisan cooperation. And that brings us to the tragic part.
Congress Should Focus on the Real Harm Being Done
Some of the very real victims of §230 (or, more accurately, of the courts’ overbroad interpretation of the statute) are individual citizens—usually women and girls—who have their lives, careers, and relationships threatened or destroyed by the relatively novel and insidious forms of harassment conducted via online intermediaries.
The most obvious example is commonly referred to as revenge porn, whereby somebody with a gripe (e.g., an ex-boyfriend) is in possession of nude or sexually explicit material that he posts online, including on websites specifically designed to host revenge content so that users can engage in an exchange of ideas like, “Yeah, somebody rape that bitch!” This is the kind of depravity §230 was written to prevent, not protect. But more on that below.
Revenge porn is more properly called nonconsensual pornography—first, because revenge is not always the motive, and second, because motive does not actually matter. It’s the nonconsensual part that makes the act criminal, and the consequences for many of the victims of this crime do not end at embarrassment. As with all aspects of life in the digital age, what happens in cyberspace has real-world results, and this type of harassment leads to death and rape threats, attempted and actual assaults, job loss and forced relocation, and damaged relationships with friends and family.
It is no exaggeration to say that the psychological effects of one or all of these events can be so traumatic that people have been hounded to suicide by remote control. And with the addition of the technology known as deepfakes, an assailant no longer needs to possess explicit material. With just a photograph of a face, anyone’s sister, daughter, wife, or girlfriend can be seamlessly featured in a pornographic scene, or any other compromising event for which she was never present.
What Section 230 Actually Says …
Too often, Section 230 is described as blanket immunity from civil liability for online service providers, full stop. This is incorrect. Occasionally, it is summarized as immunity from liability for potentially harmful material posted by users. This is correct but only part of the statute. What Section 230 also says is that when a platform exercises editorial control in order to remove or mitigate material that “the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected,” this act of moderation does not inherently render the platform a “publisher” such that it becomes subject to liability in civil litigation for potentially harmful material posted by users.
It’s a mouthful and it’s nuanced, which is why §230 is so often misrepresented. But simply put, the statute known as the “Good Samaritan” clause was written in 1996 to encourage platform operators to moderate the aforementioned objectionable material. But for nearly 20 years, the internet industry, with the help of judicial error, has promoted a misreading of the statute to assert that online service providers bear no obligation to moderate anything, ever.
Also, this must be stressed again and again: the absence of the §230 shield does not automatically make a platform liable for harm in civil litigation. A plaintiff still has to make her case like any other claim. Removal of the shield simply means that the platform cannot have a claim instantly dismissed at summary judgment, without further consideration by the court. Such instant dismissals have happened numerous times in circumstances where most reasonable people would find that the victim had the right to pursue justice.
Section 230 Overreach
For years, the internet industry inverted the narrative about Section 230, often citing the liability shield as a reason to continue hosting any material—even material that would be illegal in other contexts—and the courts have almost unanimously agreed that this is the correct interpretation of the statute. Consequently, the job before Congress, or the Supreme Court, is not necessarily to repeal §230, or even to drastically amend it, but to clearly articulate that it was not written to shield harmful conduct.
In an October 13 opinion accompanying the Supreme Court’s denial of cert in a recent §230 case, Justice Thomas explained why he believes the Court should, when the right case is presented, take up the issue of textually incoherent interpretations of the statute. For instance, citing a case from 2003, Thomas writes:
Under this interpretation, a company can solicit thousands of potentially defamatory statements, “selec[t] and edi[t] . . . for publication” several of those statements, add commentary, and then feature the final product prominently over other submissions—all while enjoying immunity. (Citations omitted)
One could, for political purposes, apply this opinion to criticize Twitter for placing a warning label on a presidential tweet that contains hazardous misinformation about, say, a deadly virus. And that is more or less where some Republican Members of Congress have tried to lead this discussion—that even a public safety editorial decision made by a social platform should void its immunity. But this would be wholly inconsistent with Congress’s intent in 1996 and a grossly negligent failure to serve those parties who suffer real harm from the courts’ misinterpretations as described by Justice Thomas.
Instead, Justice Thomas’s observation should be applied where platform operators, whether intentionally, negligently, or through willful blindness, traffic in content that is clearly designed to cause harm through libel, nonconsensual pornography, organized harassment, or (yes) misinformation that poses a danger to the public. I know that last one is prickly at the moment, but we used to be generally on the same side in such matters and will need to get there again, or Section 230 will be the least of our worries.
Hoist by Their Own Petard
The internet industry spent a lot of PR capital entangling 230 with misstatements that the First Amendment obliges platforms to leave all content alone, which was and remains constitutional hogwash. Thus, to a great extent, the platforms’ own rhetoric has played into the hands of those members of the GOP who now accuse them of censorship. For years, the industry and its network of “digital rights” activists—the EFF, Techdirt, the ACLU, Public Knowledge, et al.—cried censorship at every argument for moderation of even the worst material. And the public, regardless of politics, largely accepted this narrative based on the fallacy that more speech is the antidote to bad speech.
For nearly two decades, it was easy for the platforms to sweep a million sins under the “free speech” rug until the moment those sins crept into the realm of public policy. Trump becomes President, and suddenly, online content that any reasonable person could find objectionable under the textual meaning of 230 was being posted as official statements by the highest office in government. And presently, more than any other issue, the White House’s irrational conflict with infectious disease experts in the middle of a pandemic highlights the nature of the problem the platforms were creating for themselves—and for all of us.
I sincerely hope, in the broadest sense, for a return to normal in this country. I do not expect to see a Republican Reign of Terror at the polls, though I do think the party has some soul-searching to do, and a timeout wouldn’t hurt. But most of the Section 230 noise being made by that party is just another sideshow in a carnival that many of its own members are sick of attending. And it’s a damn shame because there are real Americans, some of them fourteen-year-old girls, who could use a little help from a legislature acting in good faith.
I hope the next generation of conservative leaders will join their colleagues across the aisle and agree that Congress never intended for the “Good Samaritan” clause to shield harmful parties and their abettors from remedies pursued by the victims. We might all remember that the middle word in the CDA is Decency—a virtue the internet seems remarkably effective at destroying.