Crying Wolf in the Section 230 Debate
After the 2016 election, when news began to break about the volume of fake information and manipulative content being financed by various parties, it seemed clear that Section 230 of the Communications Decency Act (1996) would soon become the number-one cyber policy issue in the United States. Recently, in response to the latest horror show of back-to-back spree shootings, and after it was reported that the El Paso shooter posted his white-supremacist manifesto on the basement board 8chan, the subject of platform liability once again blew up across media outlets large and small.
Defenders of the online service provider liability shield known as Section 230 insist it is the keystone legislation that makes the internet as we know it possible. But that claim raises the first question for framing any reasonable discussion of the broader issue: Who said the internet as we know it is ideal?
Naturally, the folks who make billions from the web’s current design think it’s perfect in much the same way those who make billions in the extraction industries think the environment is doing just fine. And the network of organizations and academics who receive substantial funding from Silicon Valley also like to promote the message that we have Section 230 to “thank” for all the wonderful things the internet does for us. But how true is that statement?
Even before addressing the statute itself, it is important to remember that 100% of internet services that do not depend upon users publishing content to a public platform have nothing to do with Section 230. In other words, e-commerce, navigation, reading news, downloading e-books, streaming movies and music, making travel reservations, emailing, document sharing, searching databases and archives, etc. are all major benefits of digital life that owe little or nothing to the existence of Section 230. So, when the pundits repeat the imperative to "save the internet as we know it," the plea is a tad overwrought, because the CDA concerns just one form of internet use, and not necessarily its best use by a long shot.
Facebook, YouTube, Twitter, WordPress, Reddit, Yelp!, and similar providers are entirely dependent upon user-generated content (UGC); and many platforms that are not wholly dependent on UGC (e.g. The New York Times) still consider it beneficial to host comments by their readers. Even this blog hosts comments, and I would certainly not want to be liable for inadvertently “publishing” material by a third party that could trigger some cause of action. And that’s where the Section 230 saga begins—with an anonymous user posting defamatory comments on a financial bulletin board in 1995.
The Martin Scorsese film The Wolf of Wall Street, starring Leonardo DiCaprio, dramatizes the memoir of Jordan Belfort, who co-founded the sham investment firm Stratton Oakmont in 1989 to engage in pump-and-dump schemes, manipulating stock prices and defrauding investors while making millions for Stratton's employees. Belfort and his partner Danny Porush were indicted in 1999 for securities fraud and money laundering, but four years earlier, while still riding high in every sense of the word, they were ballsy enough to sue online service company Prodigy because somebody on the "Money Talk" chat board opined that the Stratton guys just might be criminals.
In Stratton Oakmont v. Prodigy, the Supreme Court of New York held that because the platform exercised editorial control over “Money Talk,” this meant the company was a “publisher” of users’ comments and, therefore, liable for any cause of action stemming from those comments. (On a side note, I am curious as to how the comment(s) met the standard of defamation when Stratton Oakmont was under almost constant scrutiny by securities officials, but older state court records can be hard to locate, and I cannot find the original complaint.)
Anyway, the important point about the Prodigy case for cyber policy is that the fledgling internet industry justifiably freaked out at the decision. At that time, Congress was still drafting the CDA, which was designed to encourage, not discourage, platform responsibility and moderation. For instance, among the stated goals of the provision …
(5) to ensure vigorous enforcement of Federal criminal laws to deter and punish trafficking in obscenity, stalking, and harassment by means of computer.
So, how is it that Section 230 came to actually shield sites that either refuse to mitigate some of that conduct, or worse, purposely profit from that conduct? Well …
In response to the Prodigy ruling, early internet developers and entrepreneurs presented a very reasonable complaint: If the government wants service providers to moderate content, but the courts find that said moderation will make them liable for users’ material, nobody will ever invest in the development of internet platforms that rely on user-generated content. The potential liability is just too great, and nobody can effectively scrutinize millions of inputs every hour.
Thus, the ‘Good Samaritan’ Clause was drafted as a statutory remedy to ensure that good-faith efforts to moderate content would not trigger liability. Specifically, the statute refers to material that users or providers may consider “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”
In other words, platforms were encouraged to maintain what is often referred to now as “community standards,” and in return, the government made it clear that enforcement of such standards would not render the service provider liable for harmful material posted by their users. From the statute …
(1) No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
(2) Civil liability No provider or user of an interactive computer service shall be held liable on account of—
- (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
- (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
The Current Reality
That was twenty years ago. The publicly available internet was new and nobody could be quite sure what kind of platforms would emerge as industry leaders. Over the years, the courts largely interpreted Section 230 as a blanket immunity for service providers, typically citing the statute as grounds to dismiss almost any complaint against almost any service provider. Consequently, the platform owners enjoyed the financial bounty that comes from hosting EVERYTHING while describing their reluctance to remove even harmful material as a mandate to “protect free speech.”
The predatory culture of Silicon Valley, fostered partly by Section 230, is how Facebook wound up supporting (and receiving money from) Russian agents targeting the American electorate with disinformation campaigns. It is how Cloudflare rationalized hosting 8Chan until this month, when the troglodytic chat board was identified in the mainstream media as a crucible for hate-mongering, and where the El Paso shooter published his pre-assault “manifesto.”
But remember that the statute expressly reminds service providers that it is not their job to protect free speech. That is just a clue as to how the internet industry, with the help of the courts, turned the intent of Section 230 inside out. Rather than use the government grant of a broad liability shield to engage in responsible moderation, many platforms asserted 230 as absolute immunity and, therefore, shirked moderation, even where clear harm was being done. Then, to further aggravate matters, the industry promoted this laissez-faire policy as a public benefit.
Section 230 is the statutory support for conduct like revenge porn, or (perhaps most ironically) it is the law that enables a website to intentionally trade in defamation as a business enterprise. That’s right. A Congressional response to a bad defamation ruling in 1995 now protects a site owner who literally uses defamation as content to earn advertising revenue. That’s how screwed up the current application of the law is.
If Stratton Oakmont was emblematic of the financial-sector corruption that typified the 1980s and 90s, today's big-ticket hucksters are the internet companies selling the story that our interests are best served by their unfettered ability to monetize not just every bit of content but also our data profiles. And while many citizens and lawmakers have lately seen through that charade, the tech-utopians will continue to insist that recent calls for greater platform responsibility are a "moral panic," and that we have been overreacting to events since 2016.
As the drumbeat grows louder for revision of Section 230, the vast and well-funded industry voices will cry wolf once again. They will once again declare that the internet faces an existential threat, and they will once again fail to define clearly what they mean by "the internet." Because, frankly, the companies that will spend the most capital defending Section 230 are the ones whose platforms are not doing the world nearly so much good as they like to believe.
The Facebook scandals that have unfolded since early 2016 demonstrate clearly that user-dependent sites like social media platforms are opaque in their operations; and there is no evidence whatsoever to indicate that more online “engagement” has produced a more enlightened, rational, civilized, or thoughtful discourse in the collective management of the Republic. To the contrary, if anyone thinks social media has not been the primary catalyst driving people apart, I’ve got some old Stratton Oakmont positions to sell you. Consequently, let’s maintain a little perspective as we approach what seems like an inevitable debate over Section 230 reform.
© 2019, David Newhoff. All rights reserved.