The Precarious Politics of Reining In Silicon Valley

As our attention turned to concerns about disinformation, hate speech, and data security after the 2016 election, it became clear that the big cyber policy on deck was going to be a fight about Section 230 of the Communications Decency Act (1996).  For some detailed discussion about this legislation, see posts here, here, and here; but in a nutshell, Section 230 shields online platforms against liability for potential harm that may result from the conduct of their users.  It is occasionally and improperly associated with copyright infringement, from which platforms are largely shielded by Section 512 of the DMCA (1998). 

Although 230 was never intended to provide blanket immunity for all sites hosting any kind of user-generated content, most courts over the 24 years since the law was adopted have interpreted it as precisely that kind of blanket immunity.  This includes content that may be posted for the express purpose of causing harm like harassment, defamation, revenge porn, fraud, or disinformation.  230 is the statutory reason why site owners respond with a shrug or, at best, a feeble explanation for hosting material that goes beyond mere offense, material whose power to alter truth itself we have now seen.  If you were mystified, for instance, by Zuckerberg’s sphinx-like reasoning that Facebook would maintain Holocaust denial pages because they are merely “misinformation and opinion” rather than “hate speech,” that was just one manifestation of the ideological flaw that helped write Section 230 more than two decades ago.   

“We were naïve. We were naïve in a way that is even hard to recapture. We all thought that for people to be able to publish what they want would so enhance democracy and so inspire humanity, that it would lead to kind of flowering of creativity and emergence of a kind of a collective discovery of truth.”

Those are the words of former FCC Chairman Reed Hundt, lately expressing regret for the adoption of Section 230 and clearly identifying the erroneous underlying premise, which many critics now refer to as tech-utopianism.  And while it is somewhat encouraging to finally see a greater appetite for holding platforms accountable for some of their ill effects, this mood change is anything but clearly definable.  Instead, we hear a cacophony of disparate—even competing—rationales for reining in Big Tech, and if this chaos cannot manifest as rational policy, Big Tech may win the status quo, which they spare no expense trying to maintain.   

For example, voices as incompatible as former Vice President Joe Biden and Senator Ted Cruz have both raised the specter of abolishing Section 230, but for very different reasons.  Biden and others see the liability shield as encouraging a platform like Facebook to continue hosting false information (e.g. Holocaust denial), while Cruz and other Republicans complain that social platforms are biased against conservatives.  But good luck trying to reckon with the devil in those details.

Would Biden include headlines or stories from left-leaning organizations that are inaccurate?  Would Cruz consider social media platforms removing Alex Jones, or the hosting providers dropping The Daily Stormer as examples of anti-conservative bias these days?  It becomes easy to imagine how a pragmatic and sober debate about Section 230 can get lost amid the inherent tribalism implied by just those two voices alone.

From a very different sector, David McCabe reports for the New York Times that a “motley” group of corporations, including Disney, IBM, and Marriott, are gunning for Section 230. “The companies’ motivations vary somewhat,” writes McCabe.  “Hollywood is concerned about copyright abuse, especially abroad, while Marriott would like to make it harder for Airbnb to fight local hotel laws. IBM wants consumer online services to be more responsible for the content on their sites.”

As prefaced above, note that even The New York Times will erroneously include copyright in a conversation about Section 230, though in fairness, the underlying principle—namely that no platform should ever be responsible for material published by users—is fundamentally the same in 230 as in the DMCA’s 512.  Still, especially because the Times used “Mickey Mouse” in the headline, it is safe to assume this story will be interpreted by many as “Copyright maximalist Walt Disney Company wants to break the internet again,” or something to that effect.  And voilà!  We are no longer having a conversation about platform responsibility. 

In a similar vein, the Center for Democracy and Technology published an article on its site criticizing a proposal introduced by Sen. Graham to combat child sexual abuse material online; and the article and associated tweet exploit distrust for both Graham and Attorney General Barr as reasons to fear the proposal itself.  Sure, I personally think Sen. Graham is the most prominent wuss in America today; and Bill Barr is batshit crazy, spluttering his views that people without religion lack moral judgment, but …

I don’t trust the folks at CDT either because they are ideologues too—OG tech-utopians who just happen to receive significant funding from Google.  (That, and I am very much opposed to child sexual abuse material.) So, whether the harm that needs addressing is child exploitation, revenge porn, online harassment, or mass disinformation campaigns, if we want to cope with any of these still somewhat novel challenges, we just might have to entertain the possibility that a sound policy proposal will come from some party we do not like in a different political context.

The subtle irony in this last example, of course, is that the folks at CDT would probably never entertain the notion that blanket platform immunity has been a major catalyst to creating the alternate realities that people like Graham and Barr now occupy.  That’s not a partisan view—Senator Wyden is probably Big Tech’s greatest ally in Congress, and I unequivocally called him a liar with regard to the CASE Act—it’s the view of someone who, like many Americans, is weary of policy discussions in which outright bullshit is given equal weight to evidence-based theory and practice.  And with respect to Reed Hundt’s observation, this was an inevitable consequence of giving every citizen a megaphone; but platform immunity like Section 230 is the reason Zuckerberg will call outright bullshit like Holocaust denial an “opinion.”  

Carrie Goldberg’s “Nobody’s Victim”: Cyber-Policy is Not an Abstraction

During an exchange on this blog in 2014 with an individual named Anonymous—it must have been a very popular baby name at some point—I was told, “Yes, yes, David, show us on the doll where the Internet touched you, because we all know that all evil comes from there.”  That discussion was in the context of the internet industry’s anti-copyright agenda, but the smugness of the response, lurking behind a concealed identity while making an eye-rolling allusion to sexual assault, is characteristic of the tech-bro culture that dismisses any conversation about the darker aspects of digital life.  In fact, I am fairly sure it was the same Anonymous who decided that I had “failed the free speech test” because I wrote encouragingly about the prospect of making the conduct generally referred to as “revenge porn” a federal crime.  

Those old exchanges, conducted in the safety of the abstract, came rushing into the foreground while I read attorney Carrie Goldberg’s book Nobody’s Victim: Fighting Psychos, Stalkers, Pervs, and Trolls (Plume 2019).  Goldberg and her colleagues do not address conduct like “revenge porn” in the abstract; they deal with it as a tangible and terrifying reality.  It is at her Brooklyn law firm where the victims of that crime (and other forms of harassment and abuse) arrive shattered, frightened, and suicidally desperate to escape the hell their lives have become—often with the push of a button.  These are people who can show us exactly how and where the “internet touched” them, and Goldberg’s book is a harrowing tutorial in the various ways online platforms provide opportunity, motive, sanctuary, and even profit for individuals who purposely choose to destroy other human beings.  

Nobody’s Victim reads like an anthology of short thriller/horror stories but for the fact that each of the terrorized protagonists is a real person, and far too many of them are children.  These infuriating anecdotes are interwoven with the story of Goldberg’s own transformation from a young woman nearly destroyed by predatory men into, as she puts it, the attorney she needed when she was in trouble.  The result is both an inspiring narrative of personal triumph over adversity and a rigorous critique of our inadequate legal framework, which needlessly exacerbates the suffering of people targeted by life-threatening attacks—attacks that were simply not possible before the internet as we know it.

Covering a lot of ground—from stalking to sextortion—Goldberg tells the stories of her archetypal clients, along with her own jaw-dropping experiences, in a voice that pairs the discipline of a lawyer with the passion of a crusader. “We can be the army to take these motherfuckers down,” her introduction concludes, and “What happened to you matters,” is the mantra of her epilogue.  It is clear that the central message she wants to convey is one of empowerment for the constituency she represents, but the details are chilling to say the least.

Anyone anywhere can have his or her life torn apart by remote control—i.e., via the web.  All the malefactor really needs is basic computer skills, a little too much time on his hands, and a profoundly broken moral compass.  Psychos, stalkers, pervs, trolls, and assholes are all specific types of criminals in the “Carrie Goldberg Taxonomy of Offenders.”  For instance, the ex-boyfriend who uploads non-consensual intimate images to a revenge-porn site is a psycho, while the site operator, profiting off the misery of others, is an asshole.

As Goldberg notes in Chapter 6, by the year 2014, there were about 3,000 websites dedicated to hosting revenge porn.  That is a hell of a lot of guys willing to expose their ex-girlfriends to a range of potential trauma—public humiliation, job loss, relationship damage, sexual assault, PTSD, and suicide—simply because the girl/woman broke off the relationship.  This volume of men engaging in revenge porn does seem to imply that the existence of the technology itself becomes a motive or rationale for the conduct, but that is perhaps a subject to explore in a future post. 

One theme that comes through loud and clear for me in Nobody’s Victim—particularly in the context of the editorial scope of this blog—is that the individual conduct of the psychos et al. is only slightly less maddening than our systemic failure to protect the victims.  As a cyber-policy matter, that means the chronic misinterpretation of Section 230 of the Communications Decency Act as a speech-right protection and a blanket liability shield for online service providers. 

Taking on Section 230

Goldberg’s most high-profile client, Matthew Herrick, was the target of a disgruntled ex-boyfriend named Juan Carlos Gutierrez, who tried, via the gay dating app Grindr, to get Herrick at least raped, if not murdered.  By creating several Grindr accounts designed to impersonate Herrick, Gutierrez posted invitations to seek him out for rough, “rape-fantasy” sex, including messages that any protests to stop should be taken as “part of the game.”  Hundreds of men swarmed into Herrick’s life for more than a year—appearing at his home and work, often becoming verbally or physically aggressive upon discovering that he was not offering what they were looking for.

With Goldberg’s help, Herrick succeeded in getting Gutierrez convicted on felony charges, but what they could never obtain was even the most basic form of assistance from Grindr.  You might think it would be at least common courtesy for an internet business to remove accounts that falsely claim to be you—particularly when those accounts are being used to facilitate criminal threats to your safety and livelihood.  In fact, Scruff, the smaller dating app Gutierrez had also been using, eagerly and sympathetically complied with Herrick’s plea for help.  But Grindr told him to fuck off by saying, “There’s nothing we can do.” 

Herrick, through Goldberg, sued Grindr for “negligence, deceptive business practices and false advertising, intentional and negligent infliction of emotional distress, failure to warn, and negligent misrepresentation.”  They lost in both the District Court and in the Second Circuit Court of Appeals, principally because most courts continue to read Section 230 of the CDA as absolute immunity for online service providers.  This cognitive dissonance, which ignores the fact that a matter like Herrick’s plight is wholly unrelated to free speech, is exemplified in the Electronic Frontier Foundation’s amicus brief filed in the appeal on behalf of Grindr… 

Intermediaries allow Internet users to connect easily with family and friends, follow the news, share opinions and personal experiences, create and share art, and debate politics. Appellant’s efforts to circumvent Section 230’s protections undermine Congress’s goal of encouraging open platforms and robust online speech.

Isn’t that pretty?  But what the fuck has any of it got to do with using internet technologies to impersonate someone; to commit libel, slander, or defamation in his/her name; to deploy violent people (or in some cases SWAT teams) against a private individual; or to get someone fired or arrested—and all for the perpetrator’s amusement, vengeance, or profit?  None of that conduct is remotely protected by the speech right, and all of it—all of it—infringes the speech rights and other civil liberties of the victims.  Perhaps most absurdly, organizations like EFF choose to overlook the fact that the first right being denied to someone in Herrick’s predicament is the right to safely access all those invaluable activities enabled by online “intermediaries.”    

No, Grindr did not commit those crimes, but let’s be real.  What was Herrick asking Grindr to do?  Remove the conduits through which crimes were being committed against him—online accounts pretending to be him.  Scruff complied, and I didn’t feel a tremor in the free speech right, did you?  If we truly cannot make a legal distinction between Herrick’s circumstances and all that frilly bullshit the EFF likes to repeat ad nauseam, then we are clearly too stupid to reap the benefits of the internet while mitigating its harms.  

Suffice it to say, a fight over Section 230 is indeed brewing.  As it heats up, Silicon Valley will marshal its seemingly endless resources to defend the status quo, and they will carpet-bomb the public with messages that any change to this law will be an existential threat to the internet as we know it.  There is some truth to that, of course, but the internet as we know it needs a lot of work.  Meanwhile, if anyone is going to win against Big Tech’s juggernaut on this issue, it will be thanks to the leadership of (mostly) women like Carrie Goldberg, her colleagues, and her clients.  

It is an unfortunate axiom that policy rarely changes without some constituency suffering harm for a period of time; and those are exactly the people whose stories Goldberg is in a position to tell—in court, in Congress, and to the public.  If you read Nobody’s Victim and still insist, like my friend Anonymous, that this is all a theoretical debate about anomalous cases, largely mooted by the speech right, there’s a pretty good chance you’re an asshole—if not a psycho, stalker, perv, or troll.  And that clock you hear ticking is actually the sound of Carrie Goldberg’s signature high heels heading your way.     

Crying Wolf in the Section 230 Debate

After the 2016 election, as news began to break about the amount of fake information and manipulative content being financed by various parties, it seemed clear that Section 230 of the Communications Decency Act (1996) would soon be the number-one cyber policy issue in the United States.  Recently, in response to the latest horror show of back-to-back spree shootings—and after it was reported that the El Paso shooter posted his white-supremacist manifesto on the basement-dwellers’ board 8Chan—the subject of platform liability once again blew up across news outlets large and small.

Defenders of the online service provider (OSP) liability shield known as Section 230 insist it is the keystone legislation that makes the internet as we know it possible.  But this only raises the first question for framing any reasonable discussion about the broader issue:  Who said the internet as we know it is ideal?

Naturally, the folks who make billions from the web’s current design think it’s perfect in much the same way those who make billions in the extractive industries think the environment is doing just fine.  And the network of organizations and academics who receive substantial funding from Silicon Valley also like to promote the message that we have Section 230 to “thank” for all the wonderful things the internet does for us.  But how true is that statement?

Even before addressing the statute itself, it is important to remember that every internet service that does not depend upon users publishing content to a public platform has nothing to do with Section 230.  In other words, most e-commerce, navigation, reading news, downloading e-books, streaming movies and music, making travel reservations, emailing, document sharing, and searching databases and archives are all benefits of digital life that owe little or nothing to the existence of Section 230. So, when the pundits repeat the imperative, “Save the internet as we know it,” this is a tad overwrought because the statute concerns one form of internet use—and not necessarily its best use by a long shot.

Facebook, YouTube, Twitter, WordPress, Reddit, Yelp!, and similar providers are entirely dependent upon user-generated content (UGC); and many platforms that are not wholly dependent on UGC (e.g. The New York Times) still consider it beneficial to host comments by their readers.  Even this blog hosts comments, and I would certainly not want to be liable for inadvertently “publishing” material by a third party that could trigger some cause of action.  And that’s where the Section 230 saga begins—with an anonymous user posting defamatory comments on a financial bulletin board in 1995.

“They were drunk on youth, fueled by greed, and higher than kites.” – Jordan Belfort –

The Martin Scorsese film The Wolf of Wall Street, starring Leonardo DiCaprio, dramatizes the memoir of Jordan Belfort, who co-founded the sham investment firm Stratton Oakmont in 1989 to engage in pump-and-dump schemes—manipulating stock prices and defrauding investors while making millions for Stratton’s employees.  Belfort and his partner Danny Porush were indicted in 1999 for securities fraud and money laundering, but four years earlier, while still riding high in every sense of the word, they were ballsy enough to sue online service company Prodigy because somebody on the “Money Talk” chat board opined that the Stratton guys just might be criminals.

In Stratton Oakmont v. Prodigy, the Supreme Court of New York held that because the platform exercised editorial control over “Money Talk,” this meant the company was a “publisher” of users’ comments and, therefore, liable for any cause of action stemming from those comments.  (On a side note, I am curious as to how the comment(s) met the standard of defamation when Stratton Oakmont was under almost constant scrutiny by securities officials, but older state court records can be hard to locate, and I cannot find the original complaint.) 

The important point about the Prodigy case for cyber policy is that the fledgling internet industry justifiably freaked out at the decision.  At that time, Congress was still drafting the CDA, which was designed to encourage—not discourage—platform responsibility and moderation.  For instance, among the stated goals of the provision …

(5) to ensure vigorous enforcement of Federal criminal laws to deter and punish trafficking in obscenity, stalking, and harassment by means of computer.

So, how is it that Section 230 came to actually shield sites that either refuse to mitigate some of that conduct, or worse, purposely profit from that conduct?  Well …

The “Good Samaritan Clause”

In response to the Prodigy ruling, early internet developers and entrepreneurs presented a very reasonable complaint:  If the government wants service providers to moderate content, but the courts find that moderation will make them liable for users’ material, nobody will ever invest in the development of internet platforms that rely on user-generated content.  The potential liability is just too great, and nobody can effectively scrutinize millions of inputs every hour.

Thus, the ‘Good Samaritan’ Clause was drafted as a statutory remedy to ensure that good-faith efforts to moderate content would not trigger liability.  Specifically, the statute refers to material that users or providers may consider “obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.” 

In other words, platforms were encouraged to maintain what is often referred to now as “community standards,” and in return, the government made it clear that enforcement of such standards would not render the service provider liable for harmful material posted by their users.  From the statute …

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

(2) Civil liability No provider or user of an interactive computer service shall be held liable on account of—

  • (A)   any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
  • (B)   any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

The Current Reality

That was more than twenty years ago.  The publicly available internet was new, and nobody could be quite sure what kind of platforms would emerge as the industry leaders.  Over the intervening years, the courts largely interpreted Section 230 as a blanket immunity for service providers, often citing the statute as grounds to dismiss almost any complaint against almost any service provider.  Consequently, the platform owners enjoyed the financial bounty that comes from hosting EVERYTHING while characterizing their reluctance to remove even harmful material as an ethical mandate to “protect free speech.” 

The predatory, monetize-everything culture of Silicon Valley, supported by Section 230, is how Facebook wound up supporting (and receiving money from) Russian agents targeting the American electorate with disinformation campaigns.  It is how Cloudflare rationalized hosting 8Chan until this month, when the troglodytic chat board was identified in the mainstream media as a crucible for hate-mongering, and where the El Paso shooter published his pre-assault “manifesto.”

But remember that the statute expressly reminds service providers that it is not their job to protect free speech; and this is just a clue as to how the internet industry, with the help of the courts, turned the intent of Section 230 inside out.  Rather than use the government grant of a broad liability shield to engage in responsible moderation, many platforms asserted 230 as absolute immunity and, therefore, shirked moderation—even where clear harm is being done.  Then, to further aggravate matters, the industry promoted this laissez-faire policy as a public benefit.    

Section 230 is the statutory support for sites that traffic in conduct like revenge porn, or (perhaps most ironically) it is the law that enables a website to intentionally trade in defamation as a business enterprise.  That’s right.  A Congressional response to a bad defamation ruling in 1995 now protects a site owner who literally uses defamation as salacious content to generate advertising revenue.  That’s how screwed up the current application of the law is.

If Stratton Oakmont was emblematic of the financial-sector corruption that typified the 1980s and 90s, today’s big-ticket hucksters are the internet companies selling the story that our interests are best served by their unfettered ability to monetize not just every bit of content—but our data profiles.  And while many citizens and lawmakers have lately seen through that charade, the tech-utopians will continue to say that recent calls for greater platform responsibility are a “moral panic,” that we have been overreacting to events since 2016.

As the drumbeat grows louder for revision of Section 230, the vast and well-funded industry voices will cry wolf once again.  They will once again declare that the internet faces an existential threat, and they will once again fail to clearly define what they mean by “the internet.”  Because, frankly, the companies that will spend the most capital defending Section 230 are the ones whose platforms are not doing the world nearly so much good as they like to believe.  

The Facebook scandals that have unfolded since early 2016 demonstrate clearly that user-dependent sites like social media platforms are opaque in their operations; and there is no evidence whatsoever to indicate that more online “engagement” has produced a more enlightened, rational, civilized, or thoughtful discourse in the collective management of the Republic.  On the contrary, if anyone thinks social media has not been the primary catalyst driving people apart, I’ve got some old Stratton Oakmont positions to sell you.  So, let’s maintain a little perspective as we approach what seems like an inevitable debate over Section 230 reform.