Section 230: Fix It or Lose It?

In 2015, Rep. Jackie Speier (D-CA) introduced a bill that would make it a federal crime to engage in what is generically called “revenge porn.”  I say generically because “revenge” alludes to a specific motive, usually that of a disgruntled ex-boyfriend who decides to get back at a former girlfriend by distributing intimate or sexually explicit images of her online.  There are revenge porn websites dedicated to hosting this type of material, and in some cases, site operators have engaged in extortion, demanding money from victims in exchange for removing their images. 

Naturally, the usual suspects responded to Rep. Speier’s proposal with the usual hand-wringing jitters, asserting that any implication of platform responsibility for almost anything will only lead to eroding the proper functioning of the internet.  (Is it functioning properly?)  As quoted in my 2015 post, Mike Masnick at Techdirt stated, regarding the Speier bill, “Trying to accurately describe what ‘revenge porn’ is for the sake of criminalizing its posting, will almost certainly have chilling effects on third parties and undermine the very intent of the CDA’s Section 230.”  [Emphasis added]

But legislation like this does not undermine the intent of Section 230 of the Communications Decency Act, and saying otherwise grossly misrepresents—in fact inverts—the goal of that liability shield when it was written into law in 1996.  Section 230 of the CDA was specifically enacted to encourage content moderation by platform owners to remove unlawful or harmful material.  Unfortunately, this “safe harbor” provision has since been reimagined by the internet industry, web activists, and some jurists as a legal foundation to avoid content moderation—even when ordered to do so by a court of law.  That is an aberration of what CDA230 was meant to achieve.

To date, forty-one states plus Washington D.C. have criminalized non-consensual use of sexually explicit or intimate visual material, and New York is poised to join this company with a new bill now proceeding through the State Assembly.  Notably, the language in this bill (similar to Speier’s federal proposal) suggests to me that identifying the criminality of this particular conduct is not so far outside the scope of legislative capacity as Masnick implied in 2015.  The New York bill states, “…with intent to cause material harm to the emotional, financial or physical welfare of another person …”  That doesn’t seem very complicated.  If the goal is to hurt someone, regardless of why, then criminal conduct may be present.  

Of course, the tech pundits don’t really mind criminalizing the behavior of the individuals who commit “revenge porn.”  I won’t accuse Masnick, the EFF, et al of supporting the people who engage in this type of conduct because they certainly do not.  What they do claim to be concerned about are the broader implications for internet platforms if they can be held liable under the criminal code, or even just directed by court order to remove material as a form of injunctive relief for victims.  Here, the critics rely on the well-worn generality that any gap in the great wall of Section 230 will only result in reactionary responses by well-meaning web platforms, which will then censor otherwise protected speech.  

Maybe I lack imagination, but it is actually impossible to fathom how providing a relatively narrow path to legal remedies for the victims of this singular crime can chill anything related to the normal functioning of most online activity. Someone wins a revenge porn case, and what?  We won’t be able to read the New York Times or buy sneakers on Amazon or watch Hulu?  Bullshit.  

As usual, the pundits tend to overlook the fact that due process is still required—that an alleged victim still has to prove her case and demonstrate how a named platform may be criminally or civilly liable for harm.  And in many cases, a platform may be responsible for nothing more than removing content without facing any further liability whatsoever.  Meanwhile, people have already been held criminally and civilly liable for various types of revenge porn, and material has been removed from various sites, and the internet is still functioning.  In fact, one audacious law firm in Brooklyn, NY focuses on exactly these issues under the direction of attorney Carrie A. Goldberg, who says she became the lawyer she needed herself after an ex-boyfriend threatened to post naked pictures of her online.  

Since then, Goldberg’s firm has removed over 20,000 images on behalf of its clients, a number that demonstrates both the efficacy of criminalizing non-consensual uses of material and my point about due process and the narrowness of this focus.  In short, the socially beneficial aspects of the internet really can endure the removal of many thousands of illegal or harmful files without the rest of us feeling a thing, and it is preposterous to believe otherwise.  Or as part of Goldberg’s Twitter bio explains:  F*ck your overbroad reading of CDA230.

I liked that slogan so much, I asked her for a coffee mug with the words printed on it.  I guess it’s just the kind of nerd-moxie that makes my day, and Goldberg’s firm certainly appears to have moxie to spare, as exemplified by this declaration on their website …

We are done living in a world of abuse and we are not afraid to sue the *&%$ out of schools, tech companies, and employers who tolerate it.  There are many ways to get justice for our clients – economic justice, restraining orders, advocacy in Campus Disciplinary proceedings, exposing a predator, getting the piece-of-shit thrown in jail.

To the extent this take-no-prisoners attitude accurately sums up a general shift in public sentiment (i.e. that some form of platform responsibility is mandated), I suspect the whinging chorus of internet activists may soon need to find a new cross to die on other than their adamantine devotion to the sanctity of Section 230.  In fact, it is conceivable that if the tech giants do not get on board and help tweak—or at least don’t stand in the way of tweaking—the application of this liability shield, they just might lose it altogether.

Apropos my last post about the implications of deepfakes, this universe of criminal conduct will likely become more complicated as parties willing to cause harm can more easily manufacture visual material that appears to reveal the intended target(s) engaged in embarrassing, or even illegal, activity.  For instance, most, if not all, of the revenge porn statutes criminalize visual material that depicts the “intimate parts” of the plaintiff bringing a claim, and this language would seem to fall short of criminalizing a deepfake in which the victim’s face has been seamlessly grafted onto someone else’s body.  Hence, the criminal codes may already be lagging behind the technology.

And, of course, the implications here are much broader than non-consensual pornography.  Just look at the consequences (in this case almost certainly deserved) for Virginia Governor Ralph Northam over a 1984 yearbook in which he appears at least adjacent to, if not depicted in, racist and demeaning photographs. Those photos are real, and Northam must deal with the consequences, but we are now well past the point when far more sophisticated imagery than yearbook photos can be fabricated out of thin air by someone with rudimentary skills.  Combine the level of destruction that can be so easily achieved with the precedential application of Section 230 (e.g. Yelp refusing to remove a handful of libelous reviews), and it seems to me that change is coming, and the big platforms may want to get on board.  

As I posted last July, a new development in this narrative—and one I consider unfortunate—is the addition of partisan politics to the mix.  Some conservative Republicans in Congress have at least hinted at eradicating Section 230 in response to allegations that web platforms promote left-leaning content over right-leaning content.  Clearly, this specific complaint implies a tangled mess of a debate that nobody should want; but if the legislative Venn diagram encompasses those who want to kill 230 with those who want to carve out reasonable remedies for online harm, Google and Co. may need to change their revisionist narrative on the purpose of that liability shield, or risk losing more than symbolic battles.    

A New Unfortunate Twist in the Section 230 Saga: Politics

This blog contains several posts questioning the premise that the ISP liability shield known as Section 230 of the Communications Decency Act of 1996 is a “sacred” law, without which the internet would cease to do all the wonderful things it does. Like foster a more rational, diplomatic, and thoughtful political climate around the world. (Because that’s going so well.)

What began as an incentive for platform responsibility became the legal basis for a lot of platform irresponsibility over the past two decades. In this regard, I and others have pointed to specific instances of tangible harm done via the web and the general lack of cooperation by platform owners to remove, demote, or delist harmful content—even when ordered by a court to do so.

Most recently, the Hassell v. Bird case (see posts here and here) illustrates how far the judicial application of the Section 230 liability shield has strayed from its intent. In fact, it is so extreme that one part of Yelp’s defense in this case boils down to the following logic: although Section 230 was created to spare platforms undue litigation, Yelp argued that, because it was not a named party in Hassell’s complaint, it was denied due process by never being given the chance to litigate. The irony is underlined by the fact that the action a California court had ordered—deleting content held to be defamatory—would have cost the company nothing, which is consistent with the original purpose of Section 230.

In July 2017, I summarized an academic paper describing specific ways in which Section 230 has had the unintended consequence of shielding some very bad actors online; and the authors of that paper even recommended subtle tweaks to the statute which might mitigate the kind of harm being done, including language that prefigures FOSTA, signed in April of this year.

As that amendment exemplifies, Section 230 was never intended to foreclose all possibility of civil or criminal remedy merely because harmful conduct occurs in cyberspace. At the same time, while criticizing the absolutism of 230, I also recognize the difficulty inherent to the unprecedented paradigm of social media—that these are privately-owned, public spaces with the sole purpose of hosting users’ speech.

Unfortunately, any such nuanced discussion has historically been overwhelmed by the industry and its well-funded “activists,” who claim that the status quo of the 230 statute is “sacred,” that even the slightest adjustment will undermine the core functioning of the internet and threaten speech online. And “sacred” is exactly what Issie Lapowski called the statute in her recent article in Wired, which claims that lawmakers don’t understand the nature of the statute they’re threatening to “gut.” While many of us clearly do not agree that 230 is quite so inviolate, Lapowski’s article points to a new twist in this tale that can only add a new layer of confusion to an already complex issue: partisan politics.

Back in November of 2017, hearings at the House Judiciary Committee were generally bipartisan in tone, investigating the manner in which Russian agents bought American political ads on social platforms. Both Democrats and Republicans specifically recommended that the representatives of Google, Facebook, and Twitter drop the longstanding rhetoric that they operate “neutral platforms” on which they bear no responsibility for the content posted by users. All three representatives had little choice other than to concede, in testimony anyway, that their laissez-faire policy had gone too far—and this was before evidence emerged linking Cambridge Analytica, Russian troll farms, and Facebook user data.

Section 230 was naturally a running theme during those hearings because most lawmakers do understand that the statute is the primary legal foundation on which platforms assert their neutrality. But in more recent hearings held last week, Republicans on the House Judiciary Committee amped up accusations that the major sites engage in partisan bias—asserting that they remove or demote “conservative” content while leaving up “liberal” content.

In this context, Lapowski quotes Rep. Matt Gaetz (R-FL), who stated “When you avail yourself to the protection of Section 230, do you necessarily surrender your rights to be a publisher or speaker? The way I read that statute now, it’s pretty binary. It says you have to be one or the other.” In other words, Republicans on the Committee acknowledge that the platforms have a First Amendment right to advance or demote any content they want, but doing so makes them “publishers” and theoretically nullifies the Section 230 immunity.

In response, Lapowski cites attorney Eric Goldman, whose explication of Section 230 is more accurate than the congressman’s, even if it is somewhat Pollyannaish about the manner in which the statute has been applied in practice. Goldman correctly points out that Section 230 cannot logically vitiate a platform’s First Amendment right to control content when the very purpose of the statute is to encourage sites to control content. Unfortunately, that principle has too often been argued in reverse—as a right to leave content online even when it is harmful or held unlawful by a court.

Gaetz’s line of inquiry caught my attention, though, because I said almost the same thing in my first post about Hassell, albeit in a very narrow and apolitical context, believing that 230 should not immunize Yelp against complying with a court order to remove unprotected speech. Nevertheless, if indeed Gaetz and his colleagues are threatening the platforms with “gutting” 230 as a response to alleged political bias, the complexity of this discussion just went to eleven.

Legally, socially, and politically, the whole subject of platform responsibility becomes disturbingly muddied amid accusations of “partisanship,” especially in a climate in which too many mainstream Republicans have lately embraced content that any reasonable person of any political party should find objectionable. For instance, if Facebook were to drop the Infowars page, would House GOP members consider this anti-conservative bias? I ask because it was not that long ago when people seemed to know the difference between a conservative like George Will, and a tinfoil-hat-wearing sociopath like Alex Jones.*

This, of course, has been one of the “benefits” of democratization through internet technology: it has coalesced, legitimized, and, most importantly, monetized crazy people. What we used to call the “lunatic fringe” of both the right and the left has now moved into the center. We’ve entered a new reality in which American citizens not only want to thank Vladimir Putin, if indeed he meddled with the election in Trump’s favor, but they have the means to declare this insanity in public and to build solidarity with other citizens who are likewise deluded. Or are these even American citizens at all? Are they Russian trolls being paid to make more mischief? Or bots? We have no idea.

What a bipartisan Congress ought to be able to recognize at this point is that a completely unfettered internet (i.e. one without platform responsibility) has not yielded a stronger, more stable, more rational body politic. To the contrary, even as platforms claim that they’re taking more responsibility, there may be no ameliorating the kind of factionalism and mob mentality that the internet fosters so perfectly, and which the American Framers feared so presciently.

We’re living in a reality where reasoned debate on almost any issue is consumed by the circus—by forces that are visibly hammering at the foundations of the Republic—and the last thing we need to inject into a policy discussion about platform responsibility is the rhetoric of partisanship. The fundamental purpose of Section 230 remains sound while its flaws are fairly nuanced.

And while I would personally love to see Facebook remove Infowars and Antifa** for the sake of sanity, a dubious narrative accusing social platforms of political bias cannot be the proper framework for reasoned discussion about the flaws in Section 230. It should instead be the aim of representatives in both parties to address the specific mechanisms by which a law written to motivate “good samaritans” has too often shielded bad ones.


*UPDATE:  As of July 27, 2018, CNN reports Facebook has suspended Jones’s personal profile page.

**At the time of writing, this referred to the militant, violent factions worldwide that identify themselves as Antifa. It has since become a much more muddled identification.

CA Supreme Court in Hassell Reveals Sec. 230 is a Catch-22

First, a refresher. The broad immunity provision known as Section 230 of the Communications Decency Act was adopted in 1996 as an incentive to internet service providers to take affirmative steps to remove material. Congress wanted to encourage sites to take down certain types of offensive or obscene content (e.g. child porn), and the ISPs asserted, quite reasonably, that taking such action should not render them “publishers,” which would then leave their companies vulnerable to endless litigation stemming from unlawful content posted by users.

Since then, however, Section 230 immunity has been interpreted in court cases, and portrayed in the blogosphere, as a blanket protection allowing sites to take no action to mitigate harm by removing unlawful or harmful content. For the past 20 years, Section 230 has provided the statutory basis for ISP claims of universal neutrality—the “just a platform” argument—no matter what occurs on their sites. This premise was soundly rejected by both parties in Congress during hearings conducted in response to evidence that Russian agents had purchased American political ads on major platforms.

Hassell v. Bird

The facts of this case are quite simple. Ava Bird posted three reviews of Dawn Hassell’s law firm on Yelp, and these were held by a California trial court to be defamatory. No party disputes the unlawfulness of the reviews. Hassell successfully sued Bird and purposely did not name Yelp as a defendant in her litigation. The court ordered Bird to remove the reviews and also issued an order to Yelp to remove the content even though it was a non-party to the litigation.

Yelp, along with a host of amici, argued that the court order violated both Section 230 and its right to due process. A California Court of Appeal upheld the injunction, but this week, the State Supreme Court reversed, with the majority holding that the injunction indeed violates Section 230 and, thus, it was unnecessary to rule on the due process claim. Nevertheless, a concurring opinion by Justice Kruger does address the due process issue and holds that Yelp is correct in asserting that it had a right to its “day in court.”

So, as a practical matter, if you were in Hassell’s position, here’s the Catch-22 emphasized in this case: Section 230 forecloses the option of suing a web platform for harm stemming from unlawful conduct by a user. BUT, in this case, because Hassell did not name Yelp as a party, it then claimed that it was denied due process and, therefore, should not have to comply with a court order to remove Bird’s reviews. If that sounds like the platform gets to do whatever it wants, that’s because it does.

The CA Supreme Court described Hassell’s decision not to name Yelp as a “litigation strategy” employed to “accomplish indirectly what Congress has clearly forbidden them to achieve directly.” If Congress chooses to address some of the unintended consequences of Section 230, this seems like a statement worth underlining. Because Hassell’s decision not to sue Yelp—to hold them in no way liable for the harm done by Bird—appears to this reasonable observer as entirely consistent with the intent of 230 to shield platforms from costly and chronic litigation. As Justice Liu states in his dissent …

“No one has burdened Yelp with defending against liability for potentially defamatory posts. Here, the trial court ordered Yelp to remove postings that have been already adjudicated to be defamatory. Hassell sued Bird, not Yelp, and the litigation did not require Yelp to incur expenses to defend its editorial judgments or any of its business practices.”

That is the heart and soul of Section 230 at its origin, and it is consistent with recent declarations by both parties in Congress that the immunity in the CDA was never designed to obviate all platform responsibility. To the contrary, it was designed to encourage that responsibility. So, to the extent the majority opinion in this case rests on a plausible, or even reasonable, reading of the statute, this case may serve as guidance to Congress for considering revision of Section 230.

Is the language of 230 problematic?

Specifically, the majority opinion holds that Section 230(e)(3) bars this injunction against Yelp as a non-party due to the wording, “No cause of action may be brought and no liability may be imposed under any State or local law that is inconsistent with this section.” Thus, if it is this court’s understanding that the order for Yelp to remove unlawful content is a prohibited “cause of action,” but that a plaintiff is simultaneously barred by the same statute from actually suing Yelp, then it may be time for Congress to reconcile exactly this discrepancy.

I agree completely that Yelp should not be sued, or otherwise held liable, for any harm that may have been done to Hassell through the unlawful conduct of Bird. But in the realities of the digital market, where serious harm is both easily and cheaply effected, there is no justice in holding that a platform’s immunity from costly liability extends to an immunity from taking responsible, mitigating action which costs nothing.  In this regard, Justice Kruger’s concurring opinion also recognizes the difficult realities of the statute, stating…

“Section 230 has brought an end to a number of lawsuits seeking remedies for a wide range of civil wrongs accomplished through Internet postings—including, but not limited to, defamation, housing discrimination, negligence, securities fraud, cyberstalking, and material support of terrorism.”

And in fairness, she further states…

“Whether to maintain the status quo is a question only Congress can decide. But at least when it comes to addressing new questions about the scope of section 230 immunity, we should proceed cautiously, lest we inadvertently forbid an even broader swath of legal action than Congress could reasonably have intended.”

Justice Cuéllar concurred with the opinion on the basis that a proper finding of fact was not made regarding Yelp’s conduct that would render it properly subject to an injunction as a non-party. But at the same time, he had this to say about Section 230 immunity …

“To the extent the Communications Decency Act merits its name, it is because it was not meant to be—and it is not—a reckless declaration of the independence of cyberspace. Nothing in section 230 allows Yelp to ignore a properly issued court order meant to stop the spread of defamatory or otherwise harmful information on the Internet.”

Ouch. That allusion to Barlow is a pretty solid kick right in the EFFin gut. And that’s from a justice ruling in Yelp’s favor—for now. Suffice to say, there is plenty in this decision that stops short of the internet activist view that Section 230 immunity is both absolute and sacrosanct. Even the majority opinion is tempered by editorial comments acknowledging that platform irresponsibility causes tangible social harm.

As a final comment, I’ll pose the following food for thought:

Once a court has vitiated the role of the original author of some unlawful content (i.e. Bird has been held liable and ordered under pain of contempt to remove her reviews), how is it that the platform which continues to publish the unlawful content is not then held to be the “author” of that content? If I plagiarize a work, I am guilty as the “author” of the plagiarism; and if I further use plagiarized material to defame someone, the original author is not liable for the defamation; I am.

Moreover, if Bird requests that Yelp remove her reviews and they do not, is Yelp not violating her First Amendment rights by means of coerced speech; and are they also not potentially liable for forcing her into a state of contempt of court by means of that coerced speech?