Section 230: Fix It or Lose It?

In 2015, Rep. Jackie Speier (D-CA) introduced a bill that would make it a federal crime to engage in what is generically called “revenge porn.”  I say generically because “revenge” alludes to a specific motive, usually that of a disgruntled ex-boyfriend who decides to get back at a former girlfriend by distributing intimate or sexually explicit images of her online.  There are revenge porn websites dedicated to hosting this type of material, and in some cases, site operators have engaged in extortion, demanding money from victims in exchange for removing their images. 

Naturally, the usual suspects responded to Rep. Speier’s proposal with the usual hand-wringing, asserting that holding platforms responsible for almost anything will only erode the proper functioning of the internet. (Is it functioning properly?) As quoted in my 2015 post, Mike Masnick at Techdirt stated, regarding the Speier bill, “Trying to accurately describe what ‘revenge porn’ is for the sake of criminalizing its posting, will almost certainly have chilling effects on third parties and undermine the very intent of the CDA’s Section 230.” [Emphasis added]

But legislation like this does not undermine the intent of Section 230 of the Communications Decency Act, and saying otherwise grossly misrepresents—in fact inverts—the goal of that liability shield when it was written into law in 1996. Section 230 of the CDA was enacted specifically to encourage content moderation—to protect platform owners who remove unlawful or harmful material. Unfortunately, this “safe harbor” provision has since been reimagined by the internet industry, web activists, and some jurists as a legal foundation for avoiding content moderation—even when ordered to moderate by a court of law. That is a distortion of what CDA230 was meant to achieve.

To date, forty-one states plus Washington D.C. have criminalized non-consensual use of sexually explicit or intimate visual material, and New York is poised to join them with a new bill now proceeding through the State Assembly. Notably, the language in this bill (similar to Speier’s federal proposal) suggests to me that defining the criminality of this particular conduct is not so far beyond legislative capacity as Masnick implied in 2015. The New York bill states, “…with intent to cause material harm to the emotional, financial or physical welfare of another person …” That doesn’t seem very complicated. If the goal is to hurt someone, regardless of why, then criminal conduct may be present.

Of course, the tech pundits don’t really mind criminalizing the behavior of the individuals who commit “revenge porn.” I won’t accuse Masnick, the EFF, et al. of supporting the people who engage in this type of conduct because they certainly do not. What they do claim to be concerned about are the broader implications for internet platforms if they can be held liable under the criminal code, or even just directed by court order to remove material as a form of injunctive relief for victims. Here, the critics rely on the well-worn generality that any gap in the great wall of Section 230 will only result in reactionary responses by well-meaning web platforms, which will then censor otherwise protected speech.

Maybe I lack imagination, but it is actually impossible to fathom how providing a relatively narrow path to legal remedies for the victims of this singular crime can chill anything related to the normal functioning of most online activity. Someone wins a revenge porn case, and what?  We won’t be able to read the New York Times or buy sneakers on Amazon or watch Hulu?  Bullshit.  

As usual, the pundits tend to overlook the fact that due process is still required—that an alleged victim still has to prove her case and demonstrate how a named platform may be criminally or civilly liable for harm. And in many cases, a platform may be responsible for nothing more than removing content, without facing any further liability whatsoever. Meanwhile, people have already been held criminally and civilly liable for various types of revenge porn, material has been removed from various sites, and the internet is still functioning. In fact, one audacious law firm in Brooklyn, NY focuses on exactly these issues under the direction of attorney Carrie A. Goldberg, who says she became the lawyer she herself had needed after an ex-boyfriend threatened to post naked pictures of her online.

Since then, Goldberg’s firm has removed over 20,000 images on behalf of its clients—a number that demonstrates both the efficacy of criminalizing non-consensual uses of this material and my point about due process and the narrowness of this focus. In short, the socially beneficial aspects of the internet really can endure the removal of many thousands of illegal or harmful files without the rest of us feeling a thing, and it is preposterous to believe otherwise. Or as Goldberg’s Twitter bio puts it: F*ck your overbroad reading of CDA230.

I liked that slogan so much, I asked her for a coffee mug with the words printed on it. I guess it’s just the kind of nerd-moxie that makes my day, and Goldberg’s firm certainly appears to have moxie to spare, as exemplified by this declaration on their website …

We are done living in a world of abuse and we are not afraid to sue the *&%$ out of schools, tech companies, and employers who tolerate it.  There are many ways to get justice for our clients – economic justice, restraining orders, advocacy in Campus Disciplinary proceedings, exposing a predator, getting the piece-of-shit thrown in jail.

To the extent this take-no-prisoners attitude accurately sums up a general shift in public sentiment (i.e., that some form of platform responsibility is now demanded), I suspect the whinging chorus of internet activists may soon need to find a new hill to die on other than their adamantine devotion to the sanctity of Section 230. In fact, it is conceivable that if the tech giants do not get on board and help tweak—or at least don’t stand in the way of tweaking—the application of this liability shield, they just might lose it altogether.

Apropos my last post about the implications of deepfakes, this universe of criminal conduct will likely become more complicated as parties willing to cause harm can more easily manufacture visual material that appears to show the intended target(s) engaged in embarrassing, or even illegal, activity. For instance, most, if not all, of the revenge porn statutes criminalize the distribution of visual material that depicts the “intimate parts” of the person bringing a claim, and this language would seem to fall short of covering a deepfake in which the victim’s face has been seamlessly grafted onto someone else’s body. Hence, the criminal codes may already be lagging behind the technology.

And, of course, the implications here are much broader than non-consensual pornography. Just look at the consequences (in this case almost certainly deserved) for Virginia Governor Ralph Northam over a 1984 yearbook page on which he appears at least adjacent to, if not depicted in, racist and demeaning photographs. Those photos are real, and Northam must deal with the consequences, but we are now well past the point when far more sophisticated imagery than yearbook photos can be fabricated out of thin air by someone with rudimentary skills. Combine the level of destruction that can so easily be achieved with the precedents set in applying Section 230 (e.g. Yelp refusing to remove a handful of libelous reviews), and it seems to me that change is coming, and the big platforms may want to get on board.

As I posted last July, a new development in this narrative—and one I consider unfortunate—is the addition of partisan politics to the mix. Some conservative Republicans in Congress have at least hinted at eradicating Section 230 in response to allegations that web platforms promote left-leaning content over right-leaning content. Clearly, this specific complaint implies a tangled mess of a debate that nobody should want; but if the legislative Venn diagram overlaps those who want to kill 230 with those who want to carve out reasonable remedies for online harm, Google and Co. may need to change their revisionist narrative about the purpose of that liability shield, or risk losing more than symbolic battles.

