We Are Far From Skokie:  Free Speech in Cyberspace

“I hate Illinois Nazis.”  – Jake Blues, The Blues Brothers (1980)

I think my first introduction to the complexities of living in a nation with a constitutional right like the First Amendment was in the 7th Grade. Our teacher had the class watch and discuss the film Skokie (1981), a dramatization of the circumstances surrounding the 1977 legal case National Socialist Party of America v. Village of Skokie.  At that time, a group of about 30-50 National Socialist Party members wanted to march, dressed in Nazi-style uniforms, through an Illinois village that was home not only to a large Jewish population, but to quite a number of Holocaust survivors.  Concern was reasonably high among state officials that the community’s promise to rally 12,000 to 15,000 counter-demonstrators would lead to violence.

After the Illinois district, appellate, and supreme courts upheld injunctions barring the Nazi group from marching, the U.S. Supreme Court ultimately held that the state courts had not afforded the petitioners proper appellate review when restricting protected First Amendment rights. Thus, the Nazis would be allowed to march.  As I remember it, the main civics lessons we discussed were that, of course, protecting the rights of free speech and peaceable assembly requires protecting the rights of even the most offensive speakers; but also, that a municipality’s concern that violence may result from an otherwise lawful protest is not grounds for prior restraint of First Amendment exercise. The ACLU defended the rights of the National Socialists in Skokie, just as it represented white nationalists’ right to protest in Charlottesville a week ago.

Although granted a permit, the Nazi group in 1977 chose not to march in Skokie and instead held a rally in downtown Chicago. Ever since then, and until quite recently, gatherings of these and other hate groups have generally been marginalized. Their speech has been protected, ignored, and mocked. Groups like the KKK would set up their flags, don their ridiculous sheets, spew garbage into megaphones that nobody would bother listening to; and then they’d pack up their impotent little circuses and go home. The “Illinois Nazis” were satirized in the 1980 comedy The Blues Brothers; and that was about as worked-up as we needed to get for the better part of the last four decades. But now, it seems we are far from Skokie.

In response to events in Charlottesville—though clearly Boston was a very different affair—it is possible that state and municipal lawmakers may try to re-legislate the meaning of “inciting violence” when it comes to issuing permits for groups claiming their intention to peaceably assemble. For instance, common sense might suggest that a large crowd showing up with firearms, or weapons of any kind, means that the proposed assembly is not “peaceable.” Thus, city officials should be allowed, with respect to the Constitution, to make reasonable decisions as to what risks they consider tolerable for their police officers to manage.

But that’s physical space. And there is probably a fair body of precedent law upon which city and state legislators can build, if they feel the need to strike a new balance between public safety and the First Amendment relative to a new and more dangerous climate.  But what about cyberspace?

If we set aside the hot-button topic of the president’s tacit endorsements of these groups, the most significant catalyst in amplifying previously marginalized and fragmented hate groups into large, gun-wielding mobs has got to be the internet. The internet connects people, right? Except the utopians and dreamers usually talk as though it only connects decent people—or even more naively, as though the connection itself is the path toward newfound empathy for one another, which should moderate hatred and division. This can be true, but the opposite results are also plainly manifest.

It turns out the internet is a fertile breeding ground for hatred and division. Anyone can create a platform that connects people whose primary common interest may be hatred of other groups. And it’s not always as blatant as white nationalists hating Jews, people of color, homosexuals, etc. It may even be subdivisions among Jews, people of color, homosexuals, etc. hating on one another, which may be why our political process seems overly bogged down by tribal infighting along lines of identity rather than policies of inclusion—or at least tolerance. The internet seethes with conflicts of egocentrism; and I think it’s fair to say that the web is the ideal intersection for a bunch of misguided, chino-wearing, Tiki-torch-carrying college boys to find common cause with actual flag-waving Nazis trying to provoke a race war.

As was widely reported, events in Charlottesville led GoDaddy to finally boot the Nazi-themed site The Daily Stormer off its hosting servers. The site was then denied hosting by Google, kicked out of Cloudflare’s anonymizing service, refused hosting by other OSPs, and has now allegedly migrated to the dark web. No doubt, many people who were outraged by last weekend’s tragic events applauded these decisions to remove The Daily Stormer from the mainstream; but the applause was also followed by notes of concern over the protection of free speech online. As the presumptive ACLU of the internet, the Electronic Frontier Foundation unsurprisingly took the position that speech must never be censored by these private platforms.  In a blog post, the EFF states…

“We at EFF defend the right of anyone to choose what speech they provide online; platforms have a First Amendment right to decide what speech does and does not appear on their platforms. That’s what laws like CDA 230 in the United States enable and protect. 

But we strongly believe that what GoDaddy, Google, and Cloudflare did here was dangerous. That’s because, even when the facts are the most vile, we must remain vigilant when platforms exercise these rights. Because Internet intermediaries, especially those with few competitors, control so much online speech, the consequences of their decisions have far-reaching impacts on speech around the world.”

Yes, the language itself is contradictory and equivocal (i.e. sites should have these rights but not exercise them), but there is no denying that the EFF is highlighting the unprecedented challenge we face with regard to the web and speech. On the one hand, private entities do not have the same constitutional obligations as the state; but this legal technicality does not reconcile the fact that a company the size of Google plays an outsized role in facilitating the means of all speech—from the vile to the profound—in the manner that speech is now conducted. Just like the ACLU defended the Nazis in Skokie—because the principle must be upheld if we are to protect other voices like civil rights leaders—the EFF argues the same rule applies in cyberspace. Allowing OSPs and edge providers to censor speech based on business decisions—and this could include government pressure—is potentially hazardous.

Conversely, these concerns contain a lot of overwrought hypocrisy in which the apparent speech defense masks—and even exacerbates—the larger problem. Because it is the combination of free-speech maximalism and “safe harbor” absolutism, with regard to internet policy, that has produced an oligopoly that now owns the primary conduits of speech itself.  That’s the real danger.  Or as my colleague Mike Katell puts it …

“We have left the barn door open and allowed Silicon Valley to move the popular venues of expression from the community stage and the city street to their proprietary platforms, where they are guided not by constitutional or democratic principles but by terms-of-service strategically designed to maximize profits and offset risk.”

The internet industry, with the help of organizations like the EFF, has consistently swept a million sins (i.e. criminal conduct) under the rug of free speech—not as a matter of principle, but as a matter of revenue growth and competition for market-share. The major platforms manipulate speech all the time in the service of their business interests; and last week, it suddenly became bad for business to host The Daily Stormer. So what does this mean for speech?  Not much I think.

In a world in which private speech on public platforms has ballooned to trillions of interactions per day, the logic of slippery slopes toward censorship must be considered in context to this scale. If The Daily Stormer dies, speech lives. If sites or pages hosting terrorist propaganda are denied service, speech lives. If sites hosting copyright infringing content, selling counterfeit goods, facilitating trafficking, or any other criminal activity are shut down, speech lives. Just like in physical space.

This is to say nothing of the fact that the great, cosmic explosion of speech hasn’t really done democratic principles any favors. As a conveyance of knowledge (that magic ingredient meant to make people more compassionate), the internet also has the capacity to transform reality itself—even documented history—into a choose-your-own-adventure game. Then, because the internet connects people, some ten-thousand flat-earth, tinfoil-hat, conspiracy-theory whack-jobs are no longer dispersed innocuously around the country but will instead coalesce into a tribe that meets daily on TooStupidToBreathe.com. And the next thing we know, they’re a movement requesting a permit to rally in a city park.

As I’ve indicated many times, when internet activists rush to defend speech in high-profile instances like The Daily Stormer, they consistently overlook a truth that we need to accept:  that laissez-faire internet policies on controlling content have produced—and will always produce—a society where bullies trample speech in ugly and even physically dangerous ways. This cognitive dissonance is reflected in Cloudflare’s wringing its hands over terminating The Daily Stormer account.  In a blog post on the matter, CEO Matthew Prince writes…

“Someone on our team asked after I announced we were going to terminate the Daily Stormer: “Is this the day the Internet dies?” He was half joking, but only half. He’s no fan of the Daily Stormer or sites like it. But he does realize the risks of a company like Cloudflare getting into content policing.”

Maybe they’re trying to answer the wrong question—an immature question. Because I think the answer is no, it’s not the day the “internet dies,” but maybe it’s the day our bullshit, utopian idea of the internet dies. And that’s not a bad thing. Because utopianism is the product of an immature assumption that bad people don’t exist, only bad systems do.  That’s why utopias are always one step away from dystopias. In this regard, not only do OSPs have a right to not facilitate hate, violence, or crime; but it is probably time for the internet industry to accept that taking such action is actually a responsibility for which they need not apologize.

In a broader context, I do not wholly reject the concerns raised by the EFF in this case; but as a matter of policy, I also believe we cannot effectively have this particular debate as though it were a Skokie-era issue. That is simply not the world we inhabit anymore. The internet’s unique capacity to catalyze anti-democratic views, even violent and hate-filled ones that would destroy the First Amendment itself, should factor into the equation when discussing the service providers’ role in protecting speech.

Academics Propose Tweaks to CDA Section 230

When EFF co-founder John Perry Barlow delivered his Declaration of the Independence of Cyberspace in Davos, Switzerland in February of 1996, it was in response to the Telecommunications Act, which had become law just a month earlier. In this speech that would become a manifesto for the industry’s libertarian nature, Barlow proclaimed the web as a place beyond the scope of legislation, a “home of mind” that would be self-governed by the only law people really need—the Golden Rule. Ironically enough, though, a part of the Telecommunications Act known as Section 230 of the Communications Decency Act is at least one cyber law that the EFF and similar organizations believe is sacrosanct—even to the extent that it should protect those who break the Golden Rule in some very ugly ways.

Section 230 of the Communications Decency Act was designed to support good samaritans, but those who defend its status quo today are often blind to the reality that it provides cover for many bad samaritans, which is the term used in the title of a new paper called The Internet Will Not Break:  Denying Bad Samaritans Section 230 Immunity.  Its authors, law professor Danielle Keats Citron of the University of Maryland and Benjamin Wittes of the Brookings Institution, focus primarily on the influence of the courts, which have consistently applied the Section 230 liability shield so broadly as to distort—if not invert—the original intent of the statute.

The paper begins with a description of the social media site Omegle, whose slogan “Talk to Strangers!” is the antithesis of that rule (right after the Golden one) that our parents used to preach.  As Citron and Wittes put it, “Omegle is not exactly a social media site for sexual predators, but it’s fair to say that a social network designed for the particular benefit of the predator community would look a lot like it.”  The point the authors are making is that the site’s own disclaimers acknowledge their awareness that predators use the platform, which in any non-web context, would be an admission of potential liability for harm that may come to children. But thanks to Section 230 of the CDA, the site can basically say, “Swim in our pond at your own risk. Piranhas happen.”

As the paper describes, CDA 230 was a Congressional response to the 1995 case Stratton Oakmont v. Prodigy, in which the service provider’s voluntary, good-faith efforts to weed out noxious content from its platform provided the legal basis for the plaintiff to hold Prodigy liable for defamation committed by a third-party user of its services. In other words, the mere fact that Prodigy exercised any control over content meant that it could be held liable for user actions that it could not reasonably have been expected to mitigate. The suit sought $200 million in damages, signaling reasonable fears among early investors in the internet that they could be the targets of civil or criminal liability suits stemming from the actions of their users.

In response to the Prodigy case—and especially because Congress wanted to encourage ISPs to remove “indecent” (i.e. pornographic) material from their sites—Section 230 was written to provide that actions taken by site managers to remove illegal or unsavory material would not, in a legal sense, make their companies “publishers” vis-a-vis potential liabilities stemming from third-party actions.  “Lawmakers thought they were devising a limited safe harbor from liability for online providers engaged in self-regulation. Because regulators could not keep up with the volume of noxious material online, the participation of private actors was essential,” write Citron and Wittes.

That was 1996.  Today, as many parties have observed, and the authors of this paper further explain, Section 230 paradoxically insulates content and behaviors that can be more toxic than anything it was originally intended to reduce.  “…its overbroad interpretation [by the courts] has left victims of online abuse with no leverage against sites whose business model is abuse,” write the authors. While the internet industry, along with “digital rights” organizations, argue the absolute necessity to maintain the status quo of Section 230, Citron and Wittes counter that the liability shield too easily immunizes bad actors who knowingly allow, or intentionally invite, harmful conduct ranging from harassment and defamation to child sex-trafficking and terrorist propaganda. From the paper …

“A physical magazine devoted to publishing user-submitted malicious gossip about non-public figures would face a blizzard of lawsuits as false and privacy-invading materials harmed people’s lives. And a company that knowingly allowed designated foreign terrorist groups to use their physical services would face all sorts of lawsuits from victims of terrorist attacks. Something is out of whack—and requires rethinking—when such activities are categorically immunized from liability merely because they happen online.” 

The authors emphasize one of my personal gripes whenever any kind of proposed enforcement is claimed to be a threat to Free Speech, which is that defenders of Section 230 often overlook the myriad ways in which bad actors stifle the speech of their victims.  For instance, the paper cites the website Dirty.com, which essentially trades in privacy-invading gossip about non-public figures; and if this enterprise were published on paper rather than online, it would easily have been sued out of existence by now.  But thanks to Section 230, “Posts have led to a torrent of abuse, with commenters accusing the subjects of ‘dirt’ of having sexually transmitted infections, psychiatric disorders, and financial problems.  [The Site Owner] has admittedly ‘ruined people sometimes out of fun.’ That admission is not against interest—he knows well that he cannot be sued for his role in the abuse because what users do is on them,” write Citron and Wittes.

In reference to the EFF, the paper quotes the organization as acknowledging that cyber-harassment and related activity does stifle the speech of users; but the authors also highlight the organization as one which treats Section 230 as “an untouchable protection of near constitutional status.”  As reported in this post, the EFF and related groups are so committed to defending the status quo of Section 230 that they have defended its application in the Backpage case, despite compelling evidence alleging that the site operators knowingly facilitated sex-trafficking of minors.

Something is clearly wrong when a law originally intended to protect children from mere exposure to sexually explicit material may be applied to protect criminals who facilitate the trafficking of children as prostitutes — simply because that facilitation happens online. It is possible that the Backpage case will wind up at the Supreme Court and that the egregious nature of the harm—child sex-trafficking—will be severe enough to recalibrate a judicial reading of the statute’s meaning and intent.  Citron and Wittes view substantial reform from the bench as a long shot and, therefore, recommend that the courts at least limit the scope of Section 230 defenses to claims related solely to the publication of user-generated content.  By contrast, the authors describe the expanding application of the statute thus:

“Many legal theories advanced under the law do not turn on whether a defendant is a “publisher” or “speaker.” Liability for aiding and abetting others’ wrongful acts does not depend on the manner in which aid was provided. Designing a site to enable defamation or sex trafficking could result in liability in the absence of a finding that a site was being sued for publishing or speaking.”

Perhaps more realistically, the authors suggest that some statutory amendment is the only viable solution, and they contend that this can be achieved with a modicum of alteration, leaving intact the liability shield as it was intended for site operators acting in good faith.  For instance, they suggest …

“Mirroring section 230’s current exemption of federal law and intellectual property, the amendment could state, ‘Nothing in section 230 shall be construed to limit or expand the application of civil or criminal liability for any website or other content host that purposefully encourages cyber stalking, nonconsensual pornography, sex trafficking, child sexual exploitation, or that principally hosts such material.’”

In essence, Citron and Wittes argue, this would allow the Twitters and Facebooks of the world to make good-faith efforts to weed out harmful or illegal content and remain protected by Section 230, while immunity would no longer apply to site owners who purposely profit from harmful conduct.  The authors remind readers that this change simply removes the automatic immunity (i.e. the opportunity for bad samaritans to file motions to dismiss under Section 230) but in no way alters their rights as defendants in a potential litigation.

On the subject of free speech, the authors reject (as I have many times) the premise that just because nearly all internet activity takes the form of communication, this inherently places service providers in a unique category universally protected by the First Amendment.

“… to the extent that our proposal is resisted on the grounds that online platforms deserve special protection from liability because they operate as zones of public discourse, we offer the modest rejoinder that while the internet is special, it is not so fundamentally special that all normal legal rules should not apply to it. Yes, online platforms facilitate expression, along with other key life opportunities, but no more and no less so than do workplaces, schools, and coffee shops, which are all also zones of conversations and are not categorically exempted from legal responsibility for operating safely.”

Amen.

EFF Petition Language Used in Fake Emails to the FCC


It’s depressing how often one reads news that makes the United States seem as though we’re reliving the 19th century rather than an enlightened 21st.  With that comment, you might think I’m referring to the current administration (and I certainly could be), but at the moment, I refer to Americans across the political spectrum who seem willing to return to the political tactics of Tammany Hall, albeit in digital form.

On May 31, the National Legal and Policy Center, a D.C. watchdog group, reported that an “initial forensic analysis” of the 2.5 million comments submitted to the FCC on Net Neutrality found that over 465,000 of these were fake. It further states that over 100,000 of these comments used language from the Electronic Frontier Foundation’s “Dear FCC” petitioning tool in support of “Net Neutrality.”  Although the NLPC did not accuse the EFF of processing these false emails, the organization was quick to defend itself as though it had been so accused.  Its June 1 response states …

“NLPC’s report is false. Not one name, email address, or email domain cited in the report matches to any of the comments that came through EFF’s comment tool.”

Then, missing the point and seizing the moment, the statement proposes …

“Throughout the FCC’s comment process, we’ve seen malicious actors attempt to discredit the process by generating obviously fake comments. Their hope is that they can drown out the voices of the overwhelming majority of Americans who support net neutrality.” 

I am in no way qualified to assert that the EFF had any direct hand in the fake emails, but somebody spammed the FCC; and I have no problem saying that the EFF’s rebuttal is preposterous.  If there is a manipulator trying to sway public opinion “away from Net Neutrality,” it would be easier and more effective to spam the FCC with comments in support of that agenda than it would be to plant false data with the hope that its discovery will make the EFF look bad as a tangential way to tip the scale on the neutrality debate.  That’s a convoluted process expecting a lot of the public that, frankly, has bigger fish to fry these days.

It is far more likely that the false emails in this case have been generated by a manipulator who is on the same side as the EFF on the neutrality issue, and the EFF’s failure to denounce the practice is both telling and typical of our times.  In short, it seems that people across the political spectrum have forgotten that American democracy demands that the means are more important than the ends—a discipline that requires vigilance and which may be in regression thanks largely to social media.

Even people who are thoughtful about big issues will naturally respond to memes and headlines with short claims like “X million Americans support Net Neutrality.”  We accept these statements as fact and help to spread them, lending them the credibility of our endorsement. That’s politics via Facebook and Twitter, and whichever side can claim the larger number stands a decent chance of winning the debate regardless of merit.  During the SOPA/PIPA dustup, the EFF and similar organizations crowed loud and long about the apparent overwhelming groundswell of support to defeat those bills. But nobody stopped to wonder how many ineligible voices—kids, trolls, foreign citizens, bots—were represented in those numbers.

Now that there is a full-scale congressional investigation into Russian meddling in the U.S. election and we’re doing a lot of soul-searching into the nature of populism, people are beginning to at least consider the insidious role data manipulation can play via this internet thing that groups like the EFF like to call “the greatest tool for democracy ever invented.”  In this regard, I encourage readers to follow the ongoing investigation by British journalist Carole Cadwalladr into the role of data manipulation in national elections.

The EFF defends the internet writ large as the essential tool for speech and democratic principles, declares that FCC Chairman Pai’s agenda threatens all of that, but then downplays the significance that at least 20% of the emails associated with this very campaign appear to be fake. We’ve seen this brand of politics before from similar groups.

As reported in April of 2016, Fight for the Future’s brag about the 100,000 citizens who responded to the USCO’s request for comments about the DMCA appeared to be at least partly fake, based on an experiment conducted by David Lowery and his colleagues. In fact, it appears that the Canadian company Tucows, which is implicated in that same FFTF campaign, was also employed in John Oliver’s so-called grassroots campaign “Go FCC Yourself,” which processed such thoughtful comments as “Fuck you Ajit Pai for what you’re are trying to do and I hope you die a horrible painful death with no remembrance to your name …” (I do love how the internet fosters the big ideas.)

I have already proposed in a few recent posts that Net Neutrality is so complex an issue that I doubt many actual citizens who sign these petitions understand what they’re signing anyway.  Add to this a substantial number of fake signatories and geniuses like the one cited above, and I’m at a loss to discern how this politics of cybernetic ballot stuffing is any better than the Breitbart network of gobbledygook posing as news.  I’ll keep an open mind about the FCC and neutrality and watch what happens;  but so far, the only player in this whole story who has actually given me reason to think about the issue, rather than a lame talking point, is Ajit Pai.