Online service providers (OSPs) are generally shielded by two major statutes from liabilities that may stem from the content uploaded by users of their platforms. Section 512 of the DMCA (1998) provides the conditions under which an OSP may avoid liability for copyright infringement, and Section 230 of the Communications Decency Act (1996) covers just about every other kind of content.
In simple terms, this covers any platform that allows users, rather than site owners/operators, to upload content. Sites like YouTube, WordPress, Facebook, and Twitter are not considered “publishers” under CDA Section 230 and, therefore, remain free from liability for nearly any harm that may be caused by the user-generated content hosted on their sites. So, if a Twitter mob incites assault or violence, Twitter is generally in the clear. If an IS recruiting video inspires a lone-wolf attack, YouTube is not held responsible. If fake news fills a Facebook feed, Facebook is not responsible for publishing lies or slander because, under the statute, Facebook is not the “publisher” of the material.
“Digital rights” groups defend CDA 230 as an essential protection for free speech online and as a mechanism for the development of the web overall. In general, this argument has a lot of merit, but these activist organizations are not above straining their support of Section 230 beyond reason at times. As discussed in this post, the Electronic Frontier Foundation came strangely close to defending the alleged criminal activities of the owners of Backpage while seeking to defend the principles of Section 230. In that particular case, the indictment filed last October alleges that the owners of the site took direct action to further capitalize on the illegal sex trade, which they had to know accounted for more than 90% of site revenues.
Hence, the assumed ignorance of the OSP management, upon which the Section 230 shield is based, seems reasonably lost in that case; and EFF’s defending Backpage on principle alone appears to defy common sense. The Supreme Court is scheduled to consider whether to take up Doe v. Backpage during its conference tomorrow. If the Court agrees to hear the case, expect to hear a lot about Section 230 in the coming weeks.
A Mundane Example
As a very simple example of what we’re talking about, I accidentally called a scam Apple support service one day because I was rushing and because a number for the fake service appeared at the top of Google’s search results. Fortunately, I realized I’d called a predatory operator and hung up before it cost me anything, but for those who were cheated out of credit card or other information, doesn’t it seem reasonable that Google should be held accountable for having taken fees to place the bogus service in the advertised top spot? It seems to me they should. But what about monetizing content that may contribute indirectly to assault, battery, or murder?
Pulse Nightclub Suit
In December, a Michigan-based law firm filed suit in Florida against Google, Facebook, and Twitter on behalf of three families who lost loved ones in the Pulse Nightclub shooting of June 12, 2016, in which Omar Mateen shot and killed 49 people, making it the deadliest mass shooting in US history. The foundation of the case, led by attorney Keith Altman, is that the monetized hosting of content produced by the Islamic State “provided material support to terrorists” in violation of federal law and contributed to the actions taken by Mateen. The Orlando Sentinel, reporting on the story, quotes internet and communications attorney J.B. Harris stating, “It’s creative. It’s bold. But I don’t think he’s going to succeed under the federal anti-terrorism statute that he cites.”
That sounds about right to my layman’s ear. In this case, I suspect Altman would have a very high burden, even to connect the IS material to Mateen’s decision to act, let alone to hold the OSPs responsible for the tragedy under that statute. Moreover, I don’t think the public is going to warm to the idea of accusing web platforms of “providing material support to terrorists” via third-party content, least of all in the climate we’re now entering.
Nevertheless, the Sentinel notes that Harris speculates Altman might get a better hearing in a local Florida court as a “strict negligence or liability” case, which does begin to sound more balanced with regard to the OSPs’ alleged liability in this circumstance. I suspect the case would be a long shot either way, but Altman is correct in his observation that the major OSPs have historically enjoyed tremendous freedom in maintaining a laissez-faire approach when it comes to monitoring content on their platforms.
Possible Change in Attitudes?
As speculated in my last post, the bitter taste of fake news and Russian hacking may shift public opinion toward a greater willingness than we have seen to date to hold major platforms responsible for content. In particular, when an OSP earns revenue by hosting harmful content, whether it’s a scam like the one noted above or an IS recruiting video supported by brand advertising on YouTube, we may begin to see some cracks in public support for the “we don’t know” defense, regardless of the liability shields.
With regard to copyright infringement and Section 512, we know that the major OSPs have played an ongoing and repetitive semantic game on the theme that “they cannot know” what’s happening on their sites. As I’ve said in the past, this argument is especially coy when it comes from Google, which vows to one day know us better than we know ourselves—but apparently will remain ignorant about the content on its own platforms. I don’t think anyone disputes that content moderation poses technical and legal challenges. But so far, the conversation has been skewed toward a bias that any moderation is undesirable because it’s tantamount to censorship; and this has benefitted the platforms by leaving them free to monetize nearly anything.
With cases like Backpage, and perhaps this Pulse Nightclub suit, playing out against a landscape of users coming to grips with some of the inherent flaws of social media platforms, we may see OSPs take more direct, voluntary action to mitigate the use of their services by bad actors. Or as Charlie Warzel writes, in a related article on BuzzFeed, “…trotting out the ‘But we’re just a digital platform’ excuse as a quick and easy abdication of responsibility for the perhaps unforeseen — but maybe also inevitable — consequences of Big Tech’s various creations is fast becoming a nonstarter.”
I no longer care to comment here much because David is basically just trolling at this point, but I did want to correct a substantial mistake in his post.
He writes that:
In simple terms, this covers any platform that allows users, rather than site owners/operators, to upload content. Sites like YouTube, WordPress, Facebook, and Twitter are not considered “publishers” under CDA Section 230 and, therefore, remain free from liability for nearly any harm that may be caused by the user-generated content hosted on their sites.
This is correct. But then he makes a big mistake, saying:
Hence, the assumed ignorance of the OSP management, upon which the Section 230 shield is based, seems reasonably lost in that case; and EFF’s defending Backpage on principle alone appears to defy common sense.
Section 230 is not based on the idea that people running online sites do not or cannot practically police the content on those sites. It’s actually based on the opposite idea: that a site needs legal protection in order to police content.
Section 230 (more properly known as 47 USC 230) was part of the infamous Communications Decency Act of 1996, which sought to restrict the spread of adult material online. The heart of that act was found unconstitutional by the Supreme Court, which wrote that the government may not try to protect children by impairing adults’ access to material that is merely unfit for children. Section 230, however, remained on the books.
The actual purpose of section 230 is this:
At the time of its passage, there were two competing bodies of precedent governing third-party content put onto online services. One held that an online service was similar to a publisher or a bookstore, which can be choosy about what it publishes or stocks: when defamatory material is published or sold, the publisher or bookstore shares the liability because it should have checked the material for accuracy before offering it. (Never mind that this is likely outdated for bookstores and impractical for the new wave of print-on-demand publishers.) The other held that an online service is more like a copy shop with an unattended, coin-operated Xerox machine available to customers. Anyone can use it by putting in some change; the owners don’t see what’s being copied and are uninvolved, so they are not held liable.
What everyone could agree on, though, was that if an online service took an active hand in policing what users put onto the service, such as by moderating comments or deleting inappropriate or unlawful material, then it was actively engaged and would be liable for anything it missed, misjudged, or failed to catch in a timely fashion.
The puritanical backers of the CDA saw the problem: they were insisting that sites police themselves for indecent material posted by users, which would cause all of those sites to be treated as publishers of every manner of posted content because of their obligation to keep porn out of the hands of minors. The result would be sites shutting down users’ ability to post anything on their platforms, growth of the Internet occurring abroad rather than in the US, and so on.
So the sites had to have immunity so that they could take an active hand, not so that they could remain safely ignorant, as David wrongly claimed.
In any event, it’s been a good thing: Section 230 protects sites that keep out spam, delete troll posts (ironic here) and other abuses, block the spread of malicious software, and even try to keep out inappropriate content such as online ads for prostitution. What it doesn’t require is that everyone share the same standards as to what they prune out; nor does it require perfection on the part of site management, where a single item that sneaks in despite aggressive and costly policing could impose liability great enough to bankrupt the site.
If Section 230 were repealed, you’d see either more of a Wild West, because it would be safer to be ignorant than to police one’s site imperfectly, or you’d see every facility by which an individual can post anything to the Internet shut down; only major corporations would use the net, and then only carefully, to avoid liability.
Either way, a worse outcome. (Unless, I suppose, you hate individuals posting things, which would be an odd position for David to take, since this blog would be shut down by WordPress in a second if WordPress were not protected against whatever David or third parties might say on it.)
Section 230 is an excellent example of the principle that the perfect is the enemy of the good.
Well, Anonymous, your insight is welcome, even if your insults are not. But that’s fine. Section 230 is a lot more complex than even you present here, and I suspect we may be hearing more about it in the near future. If you found this post to be an argument to do away with the liability shield in the CDA, then you were looking for it, because that’s not my view. Is my description oversimple? Absolutely. I’m a non-attorney writing for non-attorneys. You’re correct with regard to how the liability shield came to be, but you’re wrong to suggest that it doesn’t create some potentially troublesome, unintended consequences, as the Doe v. Backpage case reveals. You can have an opinion about that case, but don’t pretend it’s actually simple; the CDA was not intended to effectively legalize child sex trafficking, which is what’s at stake in an overbroad interpretation of the statute. The history you describe is a compromise between the OSPs (at the time, the telecoms) and Congress to create the liability shield in the CDA (similar to the negotiated shield in the DMCA), but contemporary sites absolutely rely on a similar veil-of-ignorance argument in order to retain said shield. In simple terms, that is the crux of the argument that has been used in court, including in the Backpage case, which very much hinges on what the owners knew or did not know. There will likely be other CDA-related posts to come, and you’re welcome to trash those and call me a troll, if that makes you feel better.