Academics Propose Tweaks to CDA Section 230
When EFF co-founder John Perry Barlow delivered his Declaration of the Independence of Cyberspace in Davos, Switzerland in February of 1996, it was in response to the Telecommunications Act, which had become law just a month earlier. In this speech that would become a manifesto for the industry’s libertarian nature, Barlow proclaimed the web as a place beyond the scope of legislation, a “home of mind” that would be self-governed by the only law people really need—the Golden Rule. Ironically enough, though, one part of the Telecommunications Act, known as Section 230 of the Communications Decency Act, is a cyber law that the EFF and similar organizations believe is sacrosanct—even to the extent that it should protect those who break the Golden Rule in some very ugly ways.
Section 230 of the Communications Decency Act was designed to support good samaritans, but those who defend its status quo today are often blind to the reality that it provides cover for many bad samaritans—the term used in the title of a new paper called The Internet Will Not Break: Denying Bad Samaritans Section 230 Immunity. Its authors, law professor Danielle Keats Citron at the University of Maryland and Benjamin Wittes of the Brookings Institution, focus primarily on the influence of the courts, which have consistently applied the Section 230 liability shield so broadly as to distort—if not invert—the original intent of the statute.
The paper begins with a description of the social media site Omegle, whose slogan “Talk to Strangers!” is the antithesis of that rule (right after the Golden one) that our parents used to preach. As Citron and Wittes put it, “Omegle is not exactly a social media site for sexual predators, but it’s fair to say that a social network designed for the particular benefit of the predator community would look a lot like it.” The point the authors are making is that the site’s own disclaimers acknowledge its awareness that predators use the platform, which in any non-web context would be an admission of potential liability for harm that may come to children. But thanks to Section 230 of the CDA, the site can basically say, “Swim in our pond at your own risk. Piranhas happen.”
As the paper describes, CDA 230 was a Congressional response to the 1995 case Stratton Oakmont v. Prodigy, in which the service provider’s voluntary, good-faith efforts to weed out noxious content from its platform provided the legal basis for the plaintiff to hold Prodigy liable for defamation committed by a third-party user of its services. In other words, the mere fact that Prodigy exercised any control over content meant that it could be held liable for user actions that it could not reasonably have been expected to mitigate. The plaintiffs sought $200 million in damages, signaling reasonable fears among early investors in the internet that they could be the targets of civil or criminal liability suits stemming from the actions of their users.
In response to the Prodigy case—and especially because Congress wanted to encourage ISPs to remove “indecent” (i.e. pornographic) material from their sites—Section 230 was written to provide that actions taken by site managers to remove illegal or unsavory material would not, in a legal sense, make their companies “publishers” vis-a-vis potential liabilities stemming from third-party actions. “Lawmakers thought they were devising a limited safe harbor from liability for online providers engaged in self-regulation. Because regulators could not keep up with the volume of noxious material online, the participation of private actors was essential,” write Citron and Wittes.
That was 1996. Today, as many parties have observed, and the authors of this paper further explain, Section 230 paradoxically insulates content and behaviors that can be more toxic than anything it was originally intended to reduce. “…its overbroad interpretation [by the courts] has left victims of online abuse with no leverage against sites whose business model is abuse,” write the authors. While the internet industry, along with “digital rights” organizations, argue the absolute necessity to maintain the status quo of Section 230, Citron and Wittes counter that the liability shield too easily immunizes bad actors who knowingly allow, or intentionally invite, harmful conduct ranging from harassment and defamation to child sex-trafficking and terrorist propaganda. From the paper …
“A physical magazine devoted to publishing user-submitted malicious gossip about non-public figures would face a blizzard of lawsuits as false and privacy-invading materials harmed people’s lives. And a company that knowingly allowed designated foreign terrorist groups to use their physical services would face all sorts of lawsuits from victims of terrorist attacks. Something is out of whack—and requires rethinking—when such activities are categorically immunized from liability merely because they happen online.”
The authors emphasize one of my personal gripes whenever any kind of proposed enforcement is claimed to be a threat to Free Speech, which is that defenders of Section 230 often overlook the myriad ways in which bad actors stifle the speech of their victims. For instance, the paper cites the website Dirty.com, which essentially trades in privacy-invading gossip about non-public figures; if this enterprise were published on paper rather than online, it would likely have been sued out of existence by now. But thanks to Section 230, “Posts have led to a torrent of abuse, with commenters accusing the subjects of ‘dirt’ of having sexually transmitted infections, psychiatric disorders, and financial problems. [The Site Owner] has admittedly ‘ruined people sometimes out of fun.’ That admission is not against interest—he knows well that he cannot be sued for his role in the abuse because what users do is on them,” write Citron and Wittes.
In reference to the EFF, the paper quotes the organization as acknowledging that cyber-harassment and related activity does stifle the speech of users; but the authors also highlight the organization as one which treats Section 230 as “an untouchable protection of near constitutional status.” As reported in this post, the EFF and related groups are so committed to defending the status quo of Section 230 that they have defended its application in the Backpage case, despite compelling evidence alleging that the site operators knowingly facilitated sex-trafficking of minors.
Something is clearly wrong when a law originally intended to protect children from mere exposure to sexually explicit material may be applied to protect criminals who facilitate the trafficking of children as prostitutes — simply because that facilitation happens online. It is possible that the Backpage case will wind up at the Supreme Court and that the egregious nature of the harm—child sex-trafficking—will be severe enough to recalibrate a judicial reading of the statute’s meaning and intent. Citron and Wittes view substantial reform from the bench as a long shot and, therefore, recommend that the courts at least limit the scope of Section 230 defenses to claims related solely to the publication of user-generated content. By contrast, the authors describe the expanding application of the statute thus:
“Many legal theories advanced under the law do not turn on whether a defendant is a “publisher” or “speaker.” Liability for aiding and abetting others’ wrongful acts does not depend on the manner in which aid was provided. Designing a site to enable defamation or sex trafficking could result in liability in the absence of a finding that a site was being sued for publishing or speaking.”
Perhaps more realistically, the authors suggest that some statutory amendment is the only viable solution, and they contend that this can be achieved with a modicum of alteration, leaving intact the liability shield as it was intended for site operators acting in good faith. For instance, they suggest …
“Mirroring section 230’s current exemption of federal law and intellectual property, the amendment could state, ‘Nothing in section 230 shall be construed to limit or expand the application of civil or criminal liability for any website or other content host that purposefully encourages cyber stalking, nonconsensual pornography, sex trafficking, child sexual exploitation, or that principally hosts such material.’”
In essence, Citron and Wittes argue, this would allow the Twitters and Facebooks of the world to make good-faith efforts to weed out harmful or illegal content and remain protected by Section 230, while immunity would no longer apply to site owners who purposely profit from harmful conduct. The authors remind readers that this change simply removes the automatic immunity (i.e. the opportunity for bad samaritans to file motions to dismiss under Section 230) but in no way alters their rights as defendants in a potential litigation.
On the subject of free speech, the authors reject (as I have many times) the premise that just because nearly all internet activity takes the form of communication, service providers inherently belong in a unique category universally protected by the First Amendment.
“… to the extent that our proposal is resisted on the grounds that online platforms deserve special protection from liability because they operate as zones of public discourse, we offer the modest rejoinder that while the internet is special, it is not so fundamentally special that all normal legal rules should not apply to it. Yes, online platforms facilitate expression, along with other key life opportunities, but no more and no less so than do workplaces, schools, and coffee shops, which are all also zones of conversations and are not categorically exempted from legal responsibility for operating safely.”
© 2017, David Newhoff. All rights reserved.