Trump’s Blocking Twitter Followers Unconstitutional, Says Court

On Wednesday, a federal court for the Southern District of New York held that President Trump violated the First Amendment when he and his Social Media Director Daniel Scavino blocked users on Twitter, via the @RealDonaldTrump account, because they were critical of the President and/or his policies. The story caught my attention—not only as a citizen who wants a president both to respect the Constitution and to have the backbone to endure a little criticism—but also because I wondered whether the court’s opinion might state or imply that the Twitter platform as a whole is a public forum vis-à-vis the First Amendment. The short answer is no.

Readers may remember when “digital rights” groups swooned over the opinion in Packingham v. North Carolina, finding that the State had overreached in barring internet access to registered sex offenders, and in which Justice Kennedy described the internet as one of the “most important places for the exchange of views.” The digerati even speculated that the opinion in Packingham might imply that Section 512(i) of the DMCA, requiring account termination for repeat copyright infringement, could be held unconstitutional.

The major internet platforms have long overstated their obligation to the First Amendment on behalf of users—usually citing our free speech as the reason to keep their digits off all user-uploaded content, even if the content is illegal or otherwise harmful. This posture is based on the “neutral platform” principle, which has less to do with free speech and more to do with avoiding corporate liability for actionable uses of their platforms.

For several years, the public generally bought into the “neutral platform” concept until the Russian meddling story broke, and then everyone remembered that, in fact, social media platforms are private companies free to exercise editorial control over content without implicating the First Amendment. And in a recent twist, The Guardian reports that Facebook is seeking to have a lawsuit dismissed on the grounds that, get this, it’s a publisher with the right to edit content. Stay tuned on that one.

Still, the question remains, from our perspective as users, as to exactly when a social media platform constitutes a public forum in a constitutional sense and when it doesn’t; and this recent decision involving the @RealDonaldTrump Twitter account is among the first opinions to provide some answers.

Judge Naomi Reice Buchwald awarded the plaintiffs declaratory relief stating that President Trump’s blocking them from following his Twitter account was an abridgment of their First Amendment rights—but only under a very narrow analysis in which the @RealDonaldTrump account constitutes a public forum. In this case, the forum arises from a combination of two sets of facts: first, that the once-personal account of Donald Trump is now used to make official statements by the President of the United States; and second, that only a follower of a Twitter account can interact directly with the account-holder’s tweets by composing tweets that are then visible to all other users in that specific context. As stated in the opinion …

“The audience for a reply extends more broadly than the sender of the tweet being replied to, and blocking restricts the ability of a blocked user to speak to that audience.”

As I said, it’s a very narrow standard defining this particular account as a public forum, and the opinion even calls the injury done by blocking de minimis—while also holding that a de minimis abridgment of speech is still unconstitutional. The defense argued that a blocked user can still read the Twitter feed of @RealDonaldTrump and remains free to criticize the president in any other manner, but these arguments did not persuade the court that no First Amendment violation had occurred.

Judge Buchwald also noted in her opinion that a public official who uses social media for purely personal communications would be free to block users without implicating the First Amendment. Public officials are still entitled to private lives, including the right to ignore or avoid critics or haters—even on a publicly-visible, but privately-used, social media feed.

The defense also raised a separation-of-powers challenge—arguing that the court lacks jurisdiction over the Executive in this case—but countering this, Judge Buchwald states that an order to unblock these users would not “direct the President to execute the laws in a certain way, nor would it mandate that he pursue any substantive policy ends.” Instead, the court affirms that the President must comply with the Constitution he took an oath to protect and defend.

That said, in order to steer a wide path away from any separation conflicts, Judge Buchwald stopped short of issuing an injunction (an order) to unblock the users and instead issued a declaratory judgment (more like a recommendation) that the President has violated the First Amendment. According to Newsweek yesterday, Trump has so far defied the court, and users remain blocked.

Assuming the Republic survives this mess and social media remains something we all use in the foreseeable future, this case may prove instructive as a first step in defining when use of these platforms legitimately implicates the First Amendment. I suspect the answers will continue to be narrow—that it will not suffice to return to the over-broad assumption that platforms are merely neutral hosts of protected speech, because it seems clear that such absolutes do not apply.

This has implications for cyber policy going forward. As many colleagues have repeated—and are only now being heard—the pecuniary interests of web platforms trained society to obliterate boundaries like consent and decency—to say nothing of truth. And there is an extent to which the current President’s apparently cult-like use of Twitter to speak only to admirers is a byproduct of that same folly. Yes, in this instance, I believe the President should unblock those users in deference to the court’s opinion; but in general, we should also take note that the internet industry’s chronic appeals to our free speech as a liability defense are as legally untenable as they are morally objectionable.

Senate Hearings:  A Sea Change for Social Media Companies & Users

Yesterday afternoon the Senate Judiciary Committee held a hearing entitled:  “Extremist Content and Russian Disinformation Online:  Working with Tech to Find Solutions.” Representing the social media companies were Colin Stretch, General Counsel at Facebook; Sean Edgett, Acting General Counsel at Twitter; and Richard Salgado, Director of Law Enforcement and Information Security at Google.

The news to come out of this hearing will not compete with the blockbuster revelations produced the day before by Special Counsel Robert Mueller; but in the long run, it may prove to be more important.  Because regardless of who in the current administration may yet be implicated in Russia’s disinformation campaign aimed at the United States, the matters of greatest significance are that it happened, the ways in which it happened, and that it is still happening.  And it’s not all about Russia.

Some Background

Shorthand terms like “Russian hacking” do not properly describe the nature of what’s going on; and the significance of what’s going on should be understood as separate from any collusion that may or may not have existed between Russia’s agents and the Trump campaign.  In a nutshell, what the Russia-based Internet Research Agency engaged in had less to do with backing a particular candidate and far more to do with spreading mass disinformation to exacerbate divisiveness among the American electorate.  And there is no better way to achieve this disruption than by using social media platforms.

According to reported estimates, 126 million Americans were exposed to paid, targeted messaging used to spread false and emotionally charged rumors, some of which favored one candidate or another, but all of which was designed to foment political discord and volatility. As the opening testimony of Clint Watts of the Foreign Policy Research Institute stated in Part II of these Committee hearings, “Terrorists’ social media use has been acute and violent, but now authoritarians have taken it to the next level using social media more subtly to do something far more dangerous – destroy our democracy from the inside out through information campaigns designed to pit Americans against each other.”

That theme—pitting Americans against each other—cannot be overstated in this story, and I’ll return to it shortly.

A Taste of the Hearing

Coming to terms with the negative effects the “information age” can have on democracy is a reckoning long overdue, and yesterday was the first time in my experience that representatives of Silicon Valley were compelled to stifle their utopian rhetoric and admit that their products yield unintended and poisonous consequences.  In fact, early in the hearing, Senator Sheldon Whitehouse (D-RI) directly asked the three witnesses if they were going to drop the “we’re just a neutral platform” posturing and accept that they have an active role to play in addressing the matters before the Committee.  All three answered the senator in the affirmative.

That in itself is big news.  The Committee’s unwillingness to accept the shrug of “neutrality” from these companies has implications for cyberlaw that go beyond addressing the immediate issue of foreign powers meddling in US elections.  For instance, it is worth remembering that while Mr. Salgado was promising that Google will take affirmative action and not hide behind a veil of neutrality regarding issues addressed in this hearing, parent company Alphabet’s juggernaut of lobbyists and PR outlets is presently trying to kill the anti-sex-trafficking bill SESTA on the grounds that it would weaken the neutral position of their platform.

There were a few awkward moments between the Committee and the witnesses regarding broader questions about the capabilities of the platforms.  Those of us who advocate certain legal boundaries online (like copyright enforcement) are used to the shell game in which the platforms boast about their capabilities to advertisers one moment (e.g. the ability to perform granular-level, targeted marketing) and in the next moment, state contradictorily that they cannot weed out toxic or illegal content because they “can’t police the internet.”

Among the highlights on this theme was Senator Al Franken’s (D-MN) entertaining inquiry directed at Facebook’s Mr. Stretch in which he asked how, with the company’s extraordinary computing capacity, it failed to “connect two dots” and consider that “American” political ads paid for with rubles might be a reason to doubt the nature of the advertiser.  In a related exchange with Mr. Salgado on the subject of Google’s capacity to weed out foreign-based political ads, Sen. Franken felt the response was too internal-policy focused and reminded the witness, “You know it’s illegal for any foreign money to be spent in our election process, right?”

These hearings mark the first time that I can remember any representative of the major platforms stating with so little equivocation that they can, will, and should implement steps to mitigate harmful content on their platforms.  Doubts were raised, however, by some members. Senator John Kennedy (R-LA) told Mr. Stretch pointedly that he simply doesn’t believe Facebook can effectively vet over five million ads per month; and Senator Patrick Leahy (D-VT) accused all three platforms of responding too slowly, of missing opportunities and warning signs, and of hosting toxic content that is still online right now.

Although some Committee members raised concerns about First Amendment protections—in fact, Senator Ted Cruz (R-TX) cited incidents of alleged censorship of conservative views by the platforms themselves—no Committee member of either party, nor any of the three witnesses, repeated the generalization that removing illegal or harmful content from the platforms is fundamentally incompatible with the protection of free speech.  To the contrary, there seemed to be a very clear consensus that the manner in which these tools have been—and may continue to be—manipulated by bad actors is so harmful to democracy itself that, in context, free speech becomes a weapon of self-destruction. And that brings us back to that underlying theme and the real significance of what the Russian “hackers” did:  pit Americans against each other.

A New Kind of Literacy

I opined in a recent post that a new kind of media literacy is needed for the digital age.  Because no matter what Congress can legislate, and no matter what actions the platforms may take, people themselves are going to have to be more vigilant about the content they choose to believe, let alone share.  The mistaken expectation that the internet would be a kind of turbo-booster for democratic values comes from a reasonable, if somewhat elitist, assumption.  The theory was that if people have access to information, unfiltered by the influence of manipulators and monied gatekeepers, the collective wisdom of a fundamentally benevolent society would galvanize core democratic principles.  The manipulators would be powerless in such a fact-rich environment.

These assumptions completely overlooked some fundamental realities:  1) the platforms themselves can be used by a wide range of manipulators to manufacture false information; 2) false information that jibes with pre-conceived bias is almost impossible to recognize as false, because 3) people are driven more by emotion than by information.  The fault of the technologists, whose expertise is data, was to assume that information builds community—or at least to sell that idea.  But the truth is almost always just the opposite, even without propagandists hijacking reality.

The Opposite of Social Media

As an example of the limits of social media, I think about the story of Daryl Davis, who was featured by several news organizations shortly after the riots in Charlottesville.  Davis, a black blues musician, is responsible for over 200 men quitting the Ku Klux Klan—a journey that began, humorously enough, when a white man in a bar complimented him by saying he’d never heard “a black guy who could play piano like Jerry Lee Lewis.”  Davis’s explanation that Lewis learned everything he knew from black musicians led to a cordial conversation and the discovery that the white guy was a member of the KKK.  Thus began Davis’s decision to travel the country, meet other Klan members, and write a book about his experiences.  Along the way, many of the friends he made abandoned the organization and gave Davis their robes as penitential offerings.

Now, imagine if Davis’s first encounter with that first Klan member had been through the cold portals of social media rather than in person and through the shared experience of quintessentially American music. Add to that the trove of “information” the white guy could link to “proving” the genetic inferiority of Davis’s race. Can anyone doubt that the most likely outcome of this online exchange would be a hardening of the white guy’s racism (and perhaps a hardening of Davis’s feelings as well)?

Nothing in such an exchange would have to qualify as hate speech or any other content that would even get on the radar of the issues discussed in yesterday’s Senate hearing. It’s the kind of exchange that happens all the time—just two Americans being driven further apart by the mechanisms of a platform whose corporate mission is to “build community.”  That paradox is something the folks at Facebook et al—and those of us who use these platforms—are going to have to reconcile no matter what Congress does.


Online Harassment & The Internet Experiment

In last weekend’s New York Times Magazine, staff writer Jenna Wortham asks, “Why Can’t Silicon Valley Fix Online Harassment?” Citing some alarming statistics from a 2014 Pew Research study, she writes …

“… 40 percent of adult internet users have dealt with online harassment. And those numbers go up among young adults (especially women) and nonwhite users. Women are significantly more likely than men to report being stalked or sexually harassed on the internet, and 51 percent of African-Americans and 54 percent of Hispanics said they had experienced harassment, compared with 34 percent of whites.”

Online harassment is no joke. At scale, it can be emotionally devastating and legitimately terrifying for victims. It has been known to cause economic and social harm and to catalyze both physical assault and suicides. While we extoll the virtues of connectedness fostered by an “open” internet, harassment is the mutant howling in the basement nobody wants to talk about. And Wortham rightly observes that the monster is a byproduct of Silicon Valley’s unique blend of new-money libertarianism built on a foundation of faded, hippie idealism—incongruous doctrines that were, for many, synthesized in the manifesto A Declaration of the Independence of Cyberspace, delivered by John Perry Barlow at Davos in 1996.

But if online harassment is a disease and the first step to recovery is admitting there’s a problem, then perhaps that first step is to properly contextualize Barlow’s Declaration as the naive and petulant outburst it was. A moment of whimsy rather than the foundation for a sustainable, or even humane, proposal. Nevertheless, the belief that cyberspace remains some magical realm beyond the normal boundaries of society continues to delay rational discourse on any number of problems specifically caused or exacerbated by the technology.

Although harassment will occur on a public forum like Twitter, it often begins by brewing on a site like 4Chan, a “discussion” board populated mostly by males from pre-teen to mid-30s, who, in every sense of the cliché, have too much time on their hands. And although everyone on 4Chan is anonymous—it is in fact the site where the hacktivist group Anonymous began—they might collectively be seen as that mutant creature borne by Barlow’s Declaration. Like most adolescents, the thing they seem to hate most is being told what to do—hence the harassment-filled shitstorm known as “Gamergate.”

Although I would never condone harassment, I think I understand how at least some of it starts. This blog has very occasionally elicited accusations of racism or sexism because there are people in the world who will filter literally any topic through such lenses, even when there is no rational basis for doing so. If I were an adolescent who spent inordinate time among other adolescents in a forum like 4Chan, the temptation to retaliate against these absurd accusations by weaponizing overt racism or sexism—at least for my own amusement—could be very great. And once it begins, it’s easy enough for a little spark to become a flash fire.

In all likelihood, the majority of trolls out there are young men who harass for the lulz—an expression derived from the acronym LOL. Think of this class of trolls as easily excitable chimpanzees who will gather around a target of ridicule and pile on, but who are also easily bored and distracted by the next shiny object. So, if the target of their ridicule or cruelty doesn’t respond, this group usually returns to its natural state of online gaming and metaphorically throwing feces at one another.

But if the target of their ridicule does respond, this only increases the opportunity for lulz, which means the chimps remain engaged and incentivized to keep raising the bar of harassment of their target. Hence, the truly hideous invocations of rape and murder—complete with photographic depictions of these acts—that are so commonly employed by harassers of this nature. From this phenomenon comes the common-sense directive Don’t Feed the Troll, which is fine up to a point but can also be a form of victim-blaming as the volume and virulence of the harassment increases.

Wortham notes the apparent futility of “counterspeech,” which she describes as “the practice of bystander intervention that overpowers aggressors in an attempt to deter them.” I’m not at all surprised the EFF endorses this self-governing tactic as a “solution,” seeing as the organization (co-founded by Barlow) remains mesmerized by the fallacy that the internet naturally enables good to triumph over evil as long as pesky rules don’t get in the way.

I’m also not surprised that the two organizations Wortham highlights as designed to deploy “counterspeech” seem to be finding the method ineffective. If the general rule of thumb is Don’t Feed the Troll, then an attempt to surround a victim in a barrier of Twitter-hugs is like dipping her in chocolate and Cheetos. It’s only going to whip the trolls into a feeding frenzy. As stated above, it is important to remember that a large segment of the people who engage in this kind of harassment HAVE NOTHING BETTER TO DO. This is a hobby for many a young male, who really needs to get a life; and it is therefore difficult for people who do have lives to outlast or overwhelm the harassers.

Presumably, there are casual harassers as well—people who don’t spend time seething on 4Chan, but who obey an impulse to add their 140 characters of vitriol when they see a trend piling onto a target they don’t like or who has pissed them off. And I suppose we have to assume at this point that people can be harassed by bot swarm as well. But the fact that a real human being can be remotely and anonymously hounded to the point of being harmed or harming herself is a very real problem we have yet to confront in any substantive way. What is the responsibility of one voice in a million that feeds the proximate cause of a suicide? I don’t know, but it sure as hell belies Barlow’s dreamy assumptions.

Of course the thesis question Wortham asks is this: Can Silicon Valley do anything about online harassment? In theory, why not? As stated in several other posts, the internet companies are telling a half truth at best when they claim to have free speech obligations. They may wish to support free speech, and that’s fine, but the individual platforms are no more bound by the First Amendment than a retail store or restaurant in the physical world. Wortham is right to view the deciding factors as both ideological and financial, and in that order—a story of what happens when hippies become billionaires.

The policy positions and Terms of Service that still flow from Barlow’s Declaration have made the internet into a computer model of a social experiment which—to an extent—places people in philosopher John Locke’s hypothetical state of nature. Like Locke, the model then asks whether or not Man really needs to make a bargain with the State in order to protect his sovereignty as an individual. In 1996, Barlow declared the internet to be a “home of Mind,” a place where the legal conventions of statehood (namely law) have no purpose—an ideal based on the assumption that people are basically good and law is exclusively coercive.

But in 1689, in his Second Treatise of Government, Locke argued that Man in a state of nature (i.e. without government) is more free but also more vulnerable to human predators, who may enslave him, kill him, or take his property. Hence, the bargain one makes with the State is to trade as little freedom as possible in exchange for relative security. Thus, if a woman in a Target store were harassed in Twitter style (i.e. told by a swarm of men that they hope she gets raped and killed), the security and police who will soon arrive on her behalf are a manifestation of that Lockean bargain.

In principle, the major platform owners can take steps to mitigate online harassment, and they will likely discover this ability the moment there is a financial incentive to do so. But in the meantime, we might learn something from the computer model, which reveals exactly what can happen in a stateless and lawless “community.”

Consider the rash of hate crimes and threats following the election—all presumably committed by people who believed Trump’s presidency granted them permission to act upon latent antipathy. But how many Swastikas have been spray-painted by committed Nazis and how many by teenagers doing it for the lulz? Hard to say, but it’s likely that both motivations are present and that this is one way in which real life comes to resemble cyberspace rather than the other way around. And that may prove to be the most dangerous phenomenon of all.