The Accountability of Web Platforms

Online service providers (OSPs) are generally shielded by two major statutes from liabilities that may stem from the content uploaded by users of their platforms.  Section 512 of the DMCA (1998) provides the conditions under which an OSP may avoid liability for copyright infringement, and Section 230 of the Communications Decency Act (1996) covers just about every other kind of content.

In simple terms, any platform that allows users, rather than the site's owners or operators, to upload content (YouTube, WordPress, Facebook, Twitter, and so on) is not considered a “publisher” under CDA Section 230 and, therefore, remains free from liability for nearly any harm that may be caused by the user-generated content hosted on its site. So, if a Twitter mob incites assault or violence, Twitter is generally in the clear. If an IS recruiting video inspires a lone-wolf attack, YouTube is not held responsible. If fake news fills a Facebook feed, Facebook is not responsible for publishing lies or slander because, under the statute, Facebook is not the “publisher” of the material.

“Digital rights” groups defend CDA 230 as an essential protection for free speech online and as a mechanism for the development of the web overall. In general, this argument has a lot of merit, but these activist organizations are not above straining their support of Section 230 beyond reason at times. As discussed in this post, the Electronic Frontier Foundation came strangely close to defending the alleged criminal activities of Backpage's owners while seeking to defend the principles of Section 230. In that particular case, the indictment filed last October alleges that the site's owners took direct action to further capitalize on the illegal sex trade, which they had to know accounted for more than 90% of the site's revenues.

Hence, the assumed ignorance of the OSP management, upon which the Section 230 shield is based, seems reasonably lost in that case; and EFF’s defending Backpage on principle alone appears to defy common sense.  The Supreme Court is scheduled to consider whether or not to take up Doe v. Backpage during its conference tomorrow.  If the Court agrees to consider the case, expect to hear a lot about Section 230 in the coming weeks.

A Mundane Example

As a very simple example of what we’re talking about, I accidentally called a scam Apple support service one day because I was rushing and because a number for the fake service appeared at the top of Google’s search results.  Fortunately, I realized I’d called a predatory operator and hung up before it cost me anything, but for those who were cheated out of credit card or other information, doesn’t it seem reasonable that Google should be held accountable for having taken fees to place the bogus service in the advertised top spot?  It seems to me they should. But what about monetizing content that may contribute indirectly to assault, battery, or murder?

Pulse Nightclub Suit

In December, a Michigan-based law firm filed suit in Florida against Google, Facebook, and Twitter on behalf of three families who lost loved ones in the Pulse Nightclub shooting of June 12, 2016, in which Omar Mateen shot and killed 49 people, making his attack the deadliest mass shooting in US history. The foundation of the case, led by attorney Keith Altman, is that the monetized hosting of content produced by the Islamic State “provided material support to terrorists” in violation of federal law and contributed to the actions taken by Mateen. The Orlando Sentinel, reporting on the story, quotes internet and communications attorney J.B. Harris: “It’s creative. It’s bold. But I don’t think he’s going to succeed under the federal anti-terrorism statute that he cites.”

That sounds about right to my layman’s ear. In this case, I suspect Altman would have a very high burden, even to connect the IS material to Mateen’s decision to act, let alone to hold the OSPs responsible for the tragedy under that statute. Moreover, I don’t think the public is going to warm to the idea of accusing web platforms of “providing material support to terrorists” via third-party content, least of all in the climate we’re now entering.

Nevertheless, the Sentinel notes Harris’s speculation that Altman might get a better hearing in a local Florida court as a “strict negligence or liability” case, which begins to sound more balanced with regard to the OSPs’ alleged liability in this circumstance. I suspect the case would be a long shot either way, but Altman is correct in his observation that the major OSPs have historically enjoyed tremendous freedom in maintaining a laissez-faire approach to monitoring content on their platforms.

Possible Change in Attitudes?

As speculated in my last post, the bitter taste of fake news and Russian hacking may shift public opinion toward a greater willingness to hold major platforms responsible for content than the public has shown to date. In particular, when an OSP earns revenue by hosting harmful content, whether it’s a scam like the one noted above or an IS recruiting video supported by brand advertising on YouTube, we may begin to see cracks in public support for the “we don’t know” defense, regardless of the liability shields.

With regard to copyright infringement and Section 512, we know that the major OSPs have played an ongoing and repetitive semantic game on the theme that “they cannot know” what’s happening on their sites. As I’ve said in the past, this argument is especially coy when it comes from Google, which vows to one day know us better than we know ourselves but apparently will remain ignorant about the content on its own platforms. I don’t think anyone disputes that content moderation poses technical and legal challenges. But so far, the conversation has been skewed by the assumption that any moderation is undesirable because it’s tantamount to censorship, and this has benefited the platforms by leaving them free to monetize nearly anything.

With cases like Backpage, and perhaps this Pulse Nightclub suit, playing out against a landscape of users coming to grips with some of the inherent flaws of social media platforms, we may see OSPs take more direct, voluntary action to mitigate the use of their services by bad actors.  Or as Charlie Warzel writes, in a related article on BuzzFeed, “…trotting out the ‘But we’re just a digital platform’ excuse as a quick and easy abdication of responsibility for the perhaps unforeseen — but maybe also inevitable — consequences of Big Tech’s various creations is fast becoming a nonstarter.”


David Newhoff
David is an author, communications professional, and copyright advocate. After more than 20 years providing creative services and consulting in corporate communications, he shifted his attention to law and policy, beginning with advocacy of copyright and the value of creative professionals to America’s economy, core principles, and culture.