So Much Section 230 Noise, So Little Time to Waste

Just a few years ago, it would have been damn hard to find a random citizen who had even heard of Section 230 of the Communications Decency Act of 1996. Now, this bit of wonky, statutory arcana is a topic buzzing on mainstream news, chirping in the Twitterverse, opining in the blogosphere, and echoing through all those extra dimensions where people gather in cyberspace. 230 will probably be Thanksgiving talk this year—I hope in small gatherings observing safe protocols—but all this attention does not mean that understanding the law, or its real problems, will be greatly improved.

For starters, Section 230 is only on the national radar because it has been politicized in a way that is both preposterous and tragic. The preposterous begins with Donald Trump and several vocal members of the GOP accusing the major platforms of partisan bias and censorship. Consequently, certain Republican Members of Congress have dangled the threat of repealing or amending the immunity from civil litigation that Section 230 currently provides to web platforms.

The political bias allegation is absurd and dangerous because it rests on the presumption that “conservative” now encompasses blatant disinformation, conspiracy theory, and organized hate groups that the major platforms have finally felt obliged to remove or mute. Trump and his most ardent fans endorse these negative forces, which is one reason why so many real conservatives, for the first time in their lives, are voting for the Democratic ticket this year.

As I’ve said before, rescuing intelligent and informed conservatism from the Trump wrecking ball is going to be a hell of a challenge for the GOP. But as part of that unenviable task, the putative leaders of the party’s renaissance could demonstrate some leadership in the §230 dustup by articulating clear distinctions as to what has truly gone awry with the law, and by acknowledging that addressing the legitimate concerns requires bipartisan cooperation. And that brings us to the tragic part.

Congress Should Focus on the Real Harm Being Done

Some of the very real victims of §230 (or, more accurately, of overbroad interpretation of the statute by the courts) are individual citizens—usually women and girls—who have their lives, careers, and relationships threatened or destroyed by the relatively novel and insidious forms of harassment conducted via online intermediaries.

The most obvious example is commonly referred to as revenge porn, whereby somebody with a gripe (e.g., an ex-boyfriend) is in possession of nude or sexually explicit material that he posts online, including on websites specifically designed to host revenge content so that users can engage in an exchange of ideas like, “Yeah, somebody rape that bitch!” This is the kind of depravity §230 was written to prevent, not protect. But more on that below.

Revenge porn is more properly called nonconsensual pornography—first, because revenge is not always the motive, and second because motive does not actually matter. It’s the nonconsensual part that makes the act criminal, and the consequences for many of the victims of this crime do not end at embarrassment. As with all aspects of life in the digital age, what happens in cyberspace has real-world results, and this type of harassment leads to death and rape threats, attempted and actual assaults, job loss and forced relocations, and damaged relationships with friends and family.

It is no exaggeration to say that the psychological effects of one or all of these events can be so traumatic that people have been hounded to suicide by remote control. And with the addition of the technology known as deepfakes, an assailant no longer needs to possess explicit material. With just a photograph of a face, anyone’s sister, daughter, wife, or girlfriend can be seamlessly featured in a pornographic scene, or any other compromising event for which she was never present.

What Section 230 Actually Says …

Too often, Section 230 is described as a blanket immunity from civil liability for online service providers, full stop. This is incorrect. Occasionally, it is summarized as immunity from liability for potentially harmful material posted by users. This is correct but only part of the statute. What Section 230 also says is that when a platform exercises editorial control in order to remove or mitigate material that “the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected,” this act of moderation does not inherently render the platform a “publisher” such that it becomes subject to liability in civil litigation for potentially harmful material posted by users.

It’s a mouthful and it’s nuanced, which is why §230 is so often misrepresented. But simply put, the statute known as the “Good Samaritan” clause was written in 1996 to encourage platform operators to moderate the aforementioned objectionable material. But for nearly 20 years, the internet industry, with the help of judicial error, has promoted a misreading of the statute, asserting that online service providers bear no obligation to moderate anything, ever.

Also, this must be stressed again and again: the absence of the §230 shield does not automatically make a platform liable for harm in civil litigation. A plaintiff still has to make her case like any other claim. Removal of the shield simply means that the platform cannot instantly, without further consideration by the court, get a claim dismissed at summary judgment. Such dismissals have happened numerous times in circumstances where most reasonable people would find that the victim had the right to pursue justice.

Section 230 Overreach

For years, the internet industry inverted the narrative about Section 230, often citing the liability shield as a reason to continue hosting any material—even material that would be illegal in other contexts—and the courts have almost unanimously agreed that this is the correct interpretation of the statute. Consequently, the job before Congress, or the Supreme Court, is not necessarily to repeal §230, or even to drastically amend it, but to clearly articulate that it was not written to shield harmful conduct.

In an October 13 opinion accompanying the Supreme Court’s denial of cert in a recent §230 case, Justice Thomas explained why he believes the Court should, when the right case is presented, take up the issue of textually incoherent interpretations of the statute. For instance, citing a case from 2003, Thomas writes:

Under this interpretation, a company can solicit thou­sands of potentially defamatory statements, “selec[t] and edi[t] . . . for publication” several of those statements, add commentary, and then feature the final product promi­nently over other submissions—all while enjoying immun­ity. (Citations omitted)

One could, for political purposes, apply this opinion to criticize Twitter for placing a warning label on a presidential tweet that contains hazardous misinformation about, say, a deadly virus. And that is more or less where some Republican Members of Congress have tried to lead this discussion—that even a public safety editorial decision made by a social platform should void its immunity. But this would be wholly inconsistent with Congress’s intent in 1996 and a grossly negligent failure to serve those parties who suffer real harm from the courts’ misinterpretations as described by Justice Thomas.

Instead, Justice Thomas’s observation should be applied where platform operators, whether intentionally, negligently, or through willful blindness, traffic in content that is clearly designed to cause harm through libel, nonconsensual pornography, organized harassment, or (yes) misinformation that poses a danger to the public. I know that last one is prickly at the moment, but we used to be generally on the same side in such matters and will need to get there again, or Section 230 will be the least of our worries.

Hoist by Their Own Petard

The internet industry spent a lot of PR capital entangling 230 with misstatements about its obligations under the First Amendment to leave all content alone, which was and remains constitutional hogwash. Thus, to a great extent, the platforms’ own rhetoric has played into the hands of those members of the GOP who now accuse them of censorship. For years, the industry and its network of “digital rights” activists—the EFF, Techdirt, the ACLU, Public Knowledge, et al.—cried censorship at every argument for moderation of even the worst material. And the public, regardless of politics, largely accepted this narrative based on the fallacy that more speech is the antidote to bad speech.

For nearly two decades, it was easy for the platforms to sweep a million sins under the “free speech” rug until the moment those sins crept into the realm of public policy. Trump became President, and suddenly, online content that any reasonable person could find objectionable under the textual meaning of 230 was being posted as official statements by the highest office in government. And presently, more than any other issue, the White House’s irrational conflict with infectious-disease experts in the middle of a pandemic highlights the nature of the problem the platforms were creating for themselves—and for all of us.

I sincerely hope, in the broadest sense, for a return to normal in this country. I do not expect to see a Republican Reign of Terror at the polls, though I do think the party has some soul searching to do, and a timeout wouldn’t hurt. But most of the Section 230 noise being made by that party is just another side show in a carnival that many of its own members are sick of attending. And it’s a damn shame because there are real Americans, some of them fourteen-year-old girls, who could use a little help from a legislature acting in good faith.

I hope the next generation of conservative leaders will join their colleagues across the aisle and agree that Congress never intended for the “Good Samaritan” clause to shield harmful parties and their abettors from remedies pursued by the victims. We might all remember that the middle word in the CDA is Decency—a virtue the internet seems remarkably effective at destroying.

Section 230 Review: Right Topic, Wrong Administration

I think Senator Blumenthal summed it up about right, as he was quoted this week in the Wall Street Journal:

“I’ve certainly been one of Congress’ loudest critics of Section 230, but I have no interest in being an agent of Bill Barr’s speech police.”

In the post I wrote right after Trump threw a hissy fit because Twitter fact-checked him, I said that I have been worried about the platform responsibility narrative becoming grossly distorted by the nature of this administration. It’s no surprise that the laissez-faire policies of the major platforms, with regard to content moderation, were set on a collision course with America’s new reality-bending president.

As Trump’s unbridled contempt for facts, his tacit endorsements of hate groups, and his violations of core American principles morphed into official policy, it was inevitable that there would be a clash of conscience for at least some of Silicon Valley’s leaders and employees. They should have seen it coming but chose not to.

Instead, high on their own utopian, guardians-of-democracy rhetoric, and insulated by liability shields like 230, Big Tech refused even to consider how their grand experiment in speech absolutism, and the wisdom of crowds, might create a monster. So, when the beast finally broke out of the lab, they should hardly have been surprised that their futile efforts to contain it would only make it angry.

Of course Trump is demanding platform neutrality; neutral is exactly what the platforms kept saying they were for years. Silicon Valley wants the platform liability shields left just as they are, and in defense of that status quo, the platforms have long claimed, and largely maintained, a policy of “neutrality” with regard to user-uploaded content. But this assertion, already dubious, became both untenable and dicey for Big Tech operators when the worst abuser of their community standards became the federal government itself.

But let’s be honest. Most Americans, left, right, and center, agreed that neutral was the right gear for Facebook and Twitter et al. Never mind that neutrality is not the aim of Section 230. That’s just a pesky little detail about the law itself. But for years, Big Tech used the protection of 230 to justify “neutrality” and to evangelize that policy as allegedly protecting our speech rights. So, the maddening irony of the moment is that Trump is merely insisting upon the internet that everyone naively said they wanted — and many still say they want, even as the Republic seems to hang by a thread most days. So, stick that in your bong and burble it for a while.

There is no other way to frame the so-called political party conversation now. Anyone with a basic working knowledge of American civics and history knows that the current administration is neither Republican nor conservative by any reasonable definition of either of those terms. When Trump and his flock complain about “censorship of conservative views,” online, what they are referring to is moderation of potentially hazardous lies, conspiracy theories, incitements of violence, and hate speech. If those modalities are truly part of the Republican party’s new brand, we’re going to have a civil war of some kind, at which point there will be no need to worry about nuanced legislation like Section 230.

But as I said before, that’s a problem the Republicans will have to work out for themselves. They’ll have to decide, and soon, whether they are all-in on this cult of galloping ignorance, incompetence, and cruelty. Meanwhile, I see legitimate conservatives tweeting several times a day—and quite often to criticize Trump for what he has done to their party. Bill Kristol’s tweets aren’t being taken down, and last I checked, he’s pretty damned conservative.

In the meantime, what will unfortunately be obscured by all this noise are the very serious reasons why reasonable people of good faith seek amendment to the Section 230 liability shield. These include people like attorney Carrie Goldberg, whose Brooklyn law firm represents real victims of online exploitation and severe harassment, while the platforms enabling those crimes (even intentionally) remain shielded by Section 230. This is the kind of policy conversation we are supposed to be having. And we were having it, until Trump got involved.

It is a tragic irony that whiny old men put Section 230 on the table for their whiny old man purposes, when so many of the real citizens seeking reform happen to be (as usual) vulnerable women. For instance, after years of reasoned debate on ways to address revenge porn online, Senator Hawley introduced an amended bill, dated today, that does nothing for people who have suffered, or may suffer, real harm from the misapplication of Section 230. Instead, the Hawley bill is merely a reaction to claims of “politically biased moderation,” which is a euphemism for removing the toxic, conspiracy-laden bullshit spread by the current president. Because that’s where we are now.

Because many (if not all) of these new, reactionary proposals for 230 revision will seek to punish Silicon Valley for moderating the worst of Trump (and that’s saying something), it seems unlikely that any such legislation will make it through this Congress before the end of the year. By that time, if there is any hope left for America, this national nightmare will end, and the historians can get to work on the Bruegel-inspired pop-up books describing this era.

I have yet to review the June 2020 DOJ report on Section 230, and because that review started before this topic floated all the way onto Trump’s radar, it may contain some reasonable recommendations that go beyond political theater. We’ll see. But now that the 230 conversation has been subsumed by Trump’s personal beef with Silicon Valley, it’s just another side show in the circus. It would be nice if, one day, sober heads could resume this important conversation. Now, all we need are some sober heads.

See also: Civil rights groups call for ‘pause’ on Facebook ads.

What Happens When the Biggest Troll on Twitter is the President?

This week, as Twitter CEO Jack Dorsey emerges as a champion of truth in a world of truthiness, we must not lose sight of the fact that the folly of conflating the speech right with social media platforms has played a major role in leading us to this absurd moment of conflict between Trump and Twitter.

By now, almost everyone is aware that Dorsey took responsibility for Twitter fact-checking a couple of Trump’s tweets about mail-in ballots. The tweets were not taken down, mind you, but flagged as untrue because, well, they’re not true. In response, the president cried “censorship,” echoed accusations of “liberal bias” in Silicon Valley, and by end of business yesterday, signed an Executive Order putting platforms on notice that their liability shield under Section 230 of the Communications Decency Act (1996) may be vitiated due to their alleged partisan favoritism.

I’ll get to the EO in a moment. But what I fear Trump may have just done is to give Big Tech an effective talking point to use in opposition to legitimate and measured proposals to amend Section 230—proposals that have been in discussion since before the election of 2016. Suffice to say, the internet industry likes its liability shields (both 230 of the CDA and 512 of the DMCA) just the way they are, and the major platforms will fight for the status quo with everything they’ve got. Now, one thing they’ve got is an opportunity to run headlines and memes shouting Don’t Let Trump Destroy the Internet! Or variations on same.

I figured it might come to this. About five minutes after the election of 2016, one could imagine that the already complicated debate about platform responsibility was going to be further muddied by the fact that the president uses platforms like Twitter to make false statements and to commit acts of libel and harassment. Trump’s complete disregard for statesmanship, the truth, and the rule of law is an asset in the wilds of social media, where doxing, mob-harassment, and threats have silenced the speech of individuals with far less armor than a President of the United States.

Platform operators, who have historically been oriented toward leaving everything online, today find themselves in the unprecedented position of hosting some pretty crazy shit written by the highest elected official in the nation. At a certain point, it has to feel irresponsible not to put a warning label on an official announcement that happens to be false. At the same time, we might just as reasonably shrug at Twitter’s decision as give Dorsey a high five for it. As a practical matter, the majority of Americans do not believe anything Trump says, and only some portion of his secure voting base believes everything he says. So, Twitter’s decision may be somewhat moot, as it is a relatively small gesture in the scheme of things.

The Executive Order signed yesterday is political theater with an ironic twist. On the one hand, the order’s animating principle (i.e. threat) is predicated on a misstatement of how Section 230 actually works. It alleges that in order to remain shielded from civil liabilities stemming from users’ content, the service provider must be a neutral party—i.e. keep mitts off all user content. But that’s exactly the opposite of what Section 230 says. The section known as the “Good Samaritan Clause” was written expressly to encourage sites to engage in…

“… any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected;”

Note how broad that language is. The provider is urged to make judgment calls and to decide, for instance, what content is “otherwise objectionable.” And of course this is how 230 would have to work because the First Amendment prohibits the government from determining what community standards a platform may establish for its use. The EO erroneously alleges that because social sites are biased against “conservative” politics, and engage in muting one party’s viewpoints, this invalidates their “neutrality,” which abrogates the 230 protection.

The problems with the EO are that the bias allegation itself is without merit, and the legal theory is a misrepresentation of Section 230. In addition to the sane person’s observation that misstatements of fact should not be called “conservative” just because they come from Donald Trump, “there is no empirical basis for the claim that conservative viewpoints are being suppressed on social media,” according to a recent paper by scholars Mary Anne Franks and Danielle Keats Citron. As part of their discussion about improperly conflating the speech right with Section 230, Franks and Citron note that an independent audit was led by former Senator Jon Kyl and found no evidence supporting this allegation.  

What we do as a nation with the fact that “conservatism” has devolved to the circus of Trumpism is an existential problem; but as a cyber-policy matter, what this little brouhaha may have done is to further complicate a fledgling discussion (and a bipartisan one) about meaningful Section 230 reform. Because the ironic twist I mentioned above is that the misreading of Section 230 applied in the EO echoes the same rhetoric that has been used for years by the internet industry in order to justify its laissez-faire approach to platform stewardship.

The major internet platforms, with substantial help from “digital rights” organizations like the EFF, have done an exceptional PR job—invoking both Section 230 and the First Amendment (and improperly conflating the two)—in order to sell the message that social platforms are like steroids for the speech right. And until 2016, most people across the political spectrum seemed to buy that claim, even though it was legally and constitutionally unfounded.

It would be impossible to calculate the number of editorials and amicus briefs written to denounce the removal or demotion of so much as a syllable of “speech” online, and the platforms have generally supported this view because it’s good for business. More content means more traffic and more data to mine. It is only in recent years that some members of Silicon Valley’s leadership have revealed a moral reluctance to host everything—even if it’s harmful—under the bogus claim that they are defending speech.

Meanwhile, the victims of some of the worst conduct online, like nonconsensual pornography and other forms of harassment, have seen the courts overbroadly interpret Section 230 in ways antithetical to anything Congress intended in 1996. The internet industry, along with briefs filed by organizations like the EFF and the ACLU, has invoked 230 as grounds to avoid removing even nonconsensual pornography, which could not be more absurd given the anti-obscenity origins of the legislation. Thus, it is only recently, thanks in large part to women like Franks, Citron, and attorney Carrie Goldberg, that both parties in Congress have finally undertaken review of Section 230 for possible legislative fixes to address these unintended consequences.

As such, it does not strike me as very helpful to the purpose of sober review that Section 230 has been brought into the foreground by this latest presidential outburst. The EO itself may be a worthless piece of paper Trump signed to make himself and a few of his fans feel good, but now that he’s stamped his brand of partisanship on this narrative, one can imagine any number of ways this non-partisan discussion can become needlessly mired in the muck. As mentioned, I can certainly imagine the industry using this story as leverage to stymie legitimate review.

Of course, the maddening irony of this dustup is that all the speech extremism of the last 10-15 years, combined with misrepresentations of Section 230, is a big part of how we managed to raze the landscape of reality so that someone with absolutely no moral compass could become President of the United States. It may have taken this shock to the system for people to finally want platform stewardship like fact-checking and enforcement of community standards, but the dark irony of the EO is that it isn’t all that different from the rhetoric tech-utopians have been using for years.