Really, Cory? Then how the hell did we get to now?

“One of the reasons Hamilton found the word democracy so offensive was because he realized that the vast majority of American citizens had not the dimmest understanding of what he was talking about.”  – Joseph Ellis –

Proving that it is easier to be a futurist than a historian, Cory Doctorow contributed a bit of soothsaying to a New York Times series the editors describe as follows:

… science fiction authors, futurists, philosophers and scientists write Op-Eds that they imagine we might read 10, 20 or even 100 years from now.

So, Doctorow projected himself ten years into the future, gazed back at the present, and decided that the heedless error we are making is not ignoring climate change or precipitating a completely avoidable war with Iran or even committing mass child abuse at the southern border.  No, the potential misstep of the moment, in Doctorow’s view, would be a decision to amend the policy of zero liability for web platforms.  That will be the decision we regret ten years from now:  telling internet companies that they may no longer give people the finger, even when they are directly responsible for injury.  He writes …

“Bit by bit, the legal immunity of the platforms was eroded — from the judges who put Facebook on the line for the platform’s inaction during the Provo Uprising to the lawmakers who amended section 230 of the Communications Decency Act in a bid to get Twitter to clean up its Nazi problem.”

The only point on which Doctorow and I might agree is that, in the reaction against Big Tech—including the chatter about regulation and possibly amending the liability shield in Section 230—lawmakers, the press, and the public may be responding to the wrong stories.  The efficacy with which Facebook removes unpalatable content is not the major issue. For one thing, they apparently already filter out so much garbage we never see that some of the moderators who do see it have suffered from PTSD.  Additionally, I would agree with Doctorow that so long as these platforms are used, a certain amount of ugly is going to persist, and we are going to have to learn to deal with that as a society.

But the first order of business in addressing the immunity paradigm for websites is actually fairly low-hanging fruit from a statutory perspective.  As discussed in this post, there are websites that trade in material that, in any other context, would be sued out of existence, yet they remain shielded for no reason other than the fact that they operate online.  Sites that purposely host material that is libelous, defamatory, violence-inciting, vengeful, infringing, and so on are not comparable to Facebook and Twitter stumbling in their efforts to maintain civil online communities.  And Doctorow is being ridiculous when he lumps it all into one regulatory narrative.

Individuals and businesses who are injured online through conduct that is unquestionably illegal in real space should not be left to crash into the Section 230 wall when pursuing their legal rights to relief.  It would be a major step in the right direction, and relatively easy legislative work, to make clear that websites that intentionally trade in material, which would ordinarily be actionable, no longer enjoy automatic immunity from litigation.  Done.  No draconian censorship needed, as Doctorow seems to imply. 

Why Not Tweak the Experiment?

Meanwhile, Doctorow can hardly claim that the laissez-faire approach to the internet has produced many of the benefits he seems to think will be lost if we revise our policies.  As I say, it is easier to be a futurist than a historian, and he seems to have forgotten history when he writes, “Democracies aren’t strengthened when a professional class gets to tell us what our opinions are allowed to be.”

Perhaps not what our opinions are allowed to be, but that’s Cory being Cory—sowing fear of censorship rather than considering the more subtle effect the internet has on the value of opinion-making.  It is not merely chance that the rapid expansion of “internet culture” coincided with the erosion of trust in professionals (i.e., experts), who have some damn good reasons to recommend what our opinions ought to be on a number of important topics.  The aforementioned shrugging at climate change comes to mind.

The democratization of opinion-making, leading to the inevitable folly that all opinions have equal value, may be seen by historians as a major catalyst to explain how the putative leader of all democratic republics, the United States, managed to achieve its present state of freefall on such a wide range of policies.  At no time in living memory has the federal government been manned by such a large group of temp-job hacks without a single credential to recommend them for the departments they run. 

The most powerful and extensive military force in the world has not had a legitimate Secretary of Defense since the day seven months ago that one of the most qualified commanders we have resigned because he considered the administration’s policy too incoherent to follow.  And whether they will admit it publicly or not, every serious Republican on the Hill paled at the news of Mattis’s departure but would not say so for fear of being instantaneously thrashed on Twitter by mobs of citizens who haven’t got a clue what they’re talking about.  If the free-for-all internet is so good for society, Cory, how the hell did we get to now?

I know what Doctorow and his friends like to say. Don’t blame the internet for the degradation of statesmanship, intelligence, common sense, and decency.  But why not?  The relatively novel introduction of social media, adding an unprecedented scope of direct democracy into the process, has been an experiment.  It is neither logical to assume, nor evident to observe, that the experiment has yielded only positive results.  So, we should not be afraid to adjust the conditions of the experiment.  I can certainly imagine looking back ten years from today and regretting plenty of policy decisions, but I don’t think holding internet companies responsible for their actions is going to be on that list.  

Internet Platforms Above the Law?

Silicon Valley may have done ‘bare minimum’ to help Russia investigation, Senate Intel Committee told … 

That headline from CNN, which was echoed in several news stories that began appearing late Monday, will elicit no surprise among my friends and colleagues working in IP law, privacy, publicity rights, security, and various other matters of justice in the digital marketplace.  Those of us who believe that the rule of law is not anathema to the internet are used to the major platforms behaving as though they operate in some alternate universe where the laws of old-model, physical humans do not apply.  After all, these companies and their executives were nurtured on exactly that manifesto when the late John Perry Barlow first read his Declaration of the Independence of Cyberspace at Davos in 1996.

To put it mildly, Barlow was utterly full of shit when he declared the internet to be a “home of mind” that could not, or should not, be governed by the “weary giants of flesh and steel.”  Perhaps he can be forgiven some poetic license in the service of a sincere hope that the internet might truly be an incorporeal space that would help us transcend human folly and connect to one another through our better angels.  But that’s not what happened.  And debates about cyber policy should stop pretending it can still happen.

Fast-forward to the harsh realities of the present, and the “home of mind” is dominated by glorified advertising platforms that are easily exploited by the worst kind of malicious actors and that clearly appeal to our lesser—even our profoundly stupid—angels.  And the sad irony is that, far from operating benignly adjacent to physical society, social media platforms have been exploited to infiltrate, corrupt, demoralize, and degrade the foundations of society in very real and very dangerous ways.

Two independent reports commissioned by the Senate Select Committee on Intelligence investigated the St. Petersburg-based Internet Research Agency (IRA), the professional troll farm erected for the sole purpose of inflaming political discord in the United States and other liberal democracies around the world.   The reports reveal that disinformation on all major platforms was, and still is, more widespread than initially believed; and they describe the methods by which specific groups like African Americans were targeted in an effort to dissuade voter turnout for the 2016 election.

I plan to read both reports and follow up, but for now, I thought it worth highlighting the reports’ authors’ allegation that the major platforms were far less cooperative than one might hope, given the gravity of the circumstances.  As quoted in The Washington Post:

“Unfortunately, Google made the unusual choice to provide data to the Committee in nonmachine‐readable format.  The ads data was provided in lengthy PDF format whose pages displayed copies of information previously organized in spreadsheets (Google could have provided the original spreadsheets in CSV or JSON files).”

Think about that one.  Google stifling the use of computers as the U.S. Senate tries to better understand exactly how a foreign and hostile power has been working to fracture the American democratic process.  Further, while skimming the report submitted by New Knowledge, I caught the statement that begins, “Regrettably, it appears that the platforms may have misrepresented or evaded in some of their statements to Congress.”  
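For readers who have not wrestled with data formats, the distinction the report draws is worth making concrete: structured data in CSV or JSON can be parsed and analyzed in a few lines of code, whereas the same figures locked inside PDF pages must be painstakingly re-extracted by hand or with error-prone tools. A minimal sketch of what machine-readability buys an investigator (the column names and numbers below are invented purely for illustration, not drawn from the actual reports):

```python
import csv
import io
from collections import Counter

# Hypothetical ad-buy records, as they might appear in a CSV export.
# (Column names and values are made up for illustration only.)
raw = """ad_id,targeting_category,impressions
1001,political-interest,52000
1002,political-interest,31000
1003,regional,8000
"""

rows = list(csv.DictReader(io.StringIO(raw)))

# With structured data, aggregation is trivial:
impressions_by_category = Counter()
for row in rows:
    impressions_by_category[row["targeting_category"]] += int(row["impressions"])

print(impressions_by_category["political-interest"])  # 83000
```

A PDF rendering of the same spreadsheet defeats exactly this kind of three-line analysis, which is presumably why the report’s authors flagged Google’s choice of format.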

In this regard, I was intrigued by the strident tone lately adopted by Senator Ron Wyden in response to Silicon Valley’s less than forthright conduct in these investigations.  Vowing to pass “legislation with teeth,” Wyden has proposed a new consumer privacy bill aimed at restricting what these platforms may do with user data, particularly with respect to the manner in which that data may be leveraged to target disinformation about politics and policy issues.  Further, the proverbial “teeth” in Wyden’s bill would impose substantial penalties, including potential imprisonment for executives, for failing to provide honest and complete testimony to Congress.

For my colleagues who work in copyright advocacy, Wyden has not exactly been the rule-of-law representative when it comes to holding Silicon Valley accountable.  But perhaps the thinking will change as the senator and his colleagues must now address the many indisputable ways in which a liability-free internet industry has, quite possibly, done more harm than good for American democracy.

Meanwhile, despite mounting evidence that the major social platforms are more often a home of mindlessness than mind, Barlow’s Declaration remains the cosmic background noise still ringing in the heads of too many defenders of what we generically call “the internet.”  Whether it’s the Electronic Frontier Foundation or Techdirt or MEP Julia Reda’s anti-copyright campaign in the EU or the Internet Association or even the American Library Association, one can still hear the strains of a misguided faith in a pure internet, unsullied by the taint of law, in the rhetoric deployed against almost any policy that might demand platform responsibility.

For far too long, a false premise undermining copyright enforcement specifically—and almost all other types of enforcement generally—has been that it is better to allow harmful or illegal content to remain online than to risk censoring even a micro-byte of protected speech.  But that premise is, paradoxically enough, a pre-digital-age idea, rooted in blind allegiance to Barlow’s naive cyber-utopianism.  It is a laissez-faire approach that casually ignores the new reality, in which an unfettered volume of harmful or illegal content continues to undermine the very values the premise claims to uphold.

After all, if an American inadvertently shares a political meme that was written by a malicious actor in St. Petersburg—and whose goal is to weaken global democracy—can anyone honestly say that free speech is fulfilling its purpose in that moment?  This is just one reason why, about a month ago, I personally stopped most activity on Facebook:  because I’m not sure it’s possible to avoid feeding that particular cancer.  At the same time, it is notable that we do not even need bad actors as dramatically well-organized as the IRA to weaponize disinformation on a wide range of issues; but that’s a subject for a different post. 

So, I look forward to reading the Senate-commissioned reports; but for now, I thought it worth noting the allegations that the major platforms are stonewalling and obfuscating in these investigations—still behaving as though they operate outside the rule of law.   Of course, the tragically ironic twist to Barlow’s vision of cyberspace as separate from, and elevated above, “weary” reality is that our present reality too often resembles the craven, mean-spirited, and willfully misinformed cyber-world of social media.

Robot image source by digitalstormcinema

Platform Responsibility? How about starting with illegal content?

It may be hip these days to talk about platform responsibility, but just a couple years ago, there were no mainstream conversations about how the operations and policies of online service providers might be enabling misinformation, hate speech, propaganda, etc. And while mea culpas from Facebook’s Mark Zuckerberg and Twitter’s Jack Dorsey make headlines, and Google tries to pitch the general message that “we’re all in this together,” my more cynical self wonders whether these service providers are just waiting out the news cycle. Waiting until we grow weary of this new discussion, which just happens to be focused on some of the most difficult (if not intractable) questions, like where to draw lines on protected speech.

As alluded to in this post, it is my personal theory that if the major service providers do not change their policies, practices, and rhetoric with regard to illegal content—or support of illegal content—then all this chatter about finding balance in the realm of protected speech is just pandering noise that will soon die down. I do not doubt that Zuckerberg, Dorsey, et al. feel personally conflicted about the role their platforms have played in elevating rank divisiveness into the mainstream of political discourse; but when these guys and other OSP representatives say things like “We have to do better,” I can’t help but think of the litany of cases in which internet companies have fought against complying with established legal principles at every turn.

I think of Google fighting a Canadian Supreme Court order in Equustek v. Google to delist links to a counterfeit product supplier. Or Yelp in Hassell v. Bird refusing to remove a review that a court held to be libelous. Or the fact pattern in BMG v. Cox Communications which revealed a systemic policy whereby the OSP avoided compliance with the terms of the DMCA. Or even Viacom v. YouTube, which, though settled without trial, revealed a similar fact pattern of knowingly enabling users to infringe copyrights. Or one of my favorite moments in internet hubris: Reddit’s hand-wringing, apologetic rationale for removing a subreddit that was hosting stolen nude photos of celebrities, who happened to be victims of a hacker.

Not one of the cases alluded to above involves protected speech, yet the responses have all been variations on the same theme: that removing anything from the web can only be a slippery slope toward “censorship.” And despite the fact that these, and other examples, generally entail unprotected, illegal content, we are now suddenly expecting the OSPs to grapple with the more complicated matter of monitoring legal speech and to do…something…as a matter of principle. Don’t get me wrong. A change in attitude would be welcome in so many ways. But if the major platforms cannot first amend their practices with regard to illegal material, I am highly doubtful they will come anywhere near striking the balance that everyone who is now having the “responsibility” conversation says is so essential.

In a panel discussion about platform responsibility hosted yesterday by the Technology Policy Institute, Daphne Keller of the Center for Internet and Society said that she “did not want to return to the copyright wars” in the context of the present discussion. That’s her prerogative, of course, but copyright infringement is probably the vanguard issue most instructive to this moment of internal and external consideration of what platform responsibility actually means. Two decades’ worth of policies adopted by the major OSPs—first to profit from copyright infringement and then to reshape copyright law itself in the courts, in academia, and in the public sphere—reveal the sense of “responsibility” these companies have felt toward the people they have been exploiting. And of course, when the exploited complained, they were told they were wrong—that they did not understand the future.

In fact, in yesterday’s panel, I believe it was Keller who alluded to the “false dichotomy” that pits technology against rightholders, but let us not forget the origin of that bullshit narrative. Because it didn’t come from the rightholders. Shall we do a search for all the editorials posted by Techdirt, by EFF, by Lessig and Lefsetz—by copyright critics large and small—who have labeled creative rightholders as technology Luddites “clinging to old models”? That’s not the copyright owner’s narrative, it’s Big Tech’s narrative. So, if there is a false dichotomy, which now demands clarification, it ought to be recanted by the liars who wrote it and are still repeating it. That would be taking responsibility.

Interestingly enough, as a former Associate General Counsel for Google, Keller worked on the aforementioned Equustek case, and in June of 2017, she wrote a blog post for CIS in which she called the Canadian court order that Google remove search results globally an “ominous” precedent. In simple terms, this was a case in which a counterfeit business infringed Equustek’s trade secrets and then sold knock-off products via multiple sites on the web. Equustek sought and won a court order to remove the counterfeiter’s sites globally from Google’s search results.

I cite this example because it is comparatively straightforward. The legit company deserves the business earned by its products; consumers deserve to know what they’re buying and from whom; and there is no speech protection for trade in counterfeit goods. Equustek is also instructive because there is a clear parallel between its prayer for injunctive relief and, say, the motion picture industry’s efforts to have Google delist or demote major pirate sites, which are also not protected speech. Yet, in her 2017 post, Keller sums up the “ominous” nature of the Canadian Court order thus:

“Canada’s endorsement of cross-border content removal orders is deeply troubling. It speeds the day when we will see the same kinds of orders from countries with problematic human rights records and oppressive speech laws. And it increases any individual speaker’s vulnerability to laws and state actors elsewhere in the world. Content hosting and distribution are increasingly centralized in the hands of a few multinational companies – Google, Facebook, Apple, Amazon and Microsoft with their web hosting services, etc. Those companies have local presence and vulnerability to formal jurisdiction and real world threats of arrest or asset seizure in scores of countries.”

Apropos that first sentence, Keller asks rhetorically in the same post, “Can Russia use its anti-gay laws to make search results unavailable to Canadians?” I have two responses to this: the first is No, because the hypothetical Russian court order would violate both Canadian and American law, which is not the case with Canada’s order to Google in Equustek. Keller, who is really citing Canada’s Michael Geist, falsely alleges that the defendant in Equustek is disseminating protected “speech and information,” which is not the case because the content is infringing and misleading in a manner that could be construed as fraudulent.

My second response is to mention that the policy view Keller seems to advocate—that the rule of law just doesn’t work in cyberspace—is exactly how we arrived at the moment in history when the Russian government is in fact exporting its agenda to the U.S. by using our own speech rights against us on social media. The Geist/Keller example of the Russian court order is pure hypothetical hysteria, but the phenomenon in which paid Russian hackers are fomenting anti-gay, and other hateful sentiments, to ratchet up divisiveness in the U.S. is a verified reality. I happen to think this makes pretty compelling evidence that the rule of lawlessness in cyberspace hasn’t worked out so well, but perhaps that’s just my inner Luddite talking.

So, although the topic of platform responsibility may be trending right now, I maintain some doubt that the OSPs can, or even should, try to protect society against the social and political effects of problematic information. That topic may be what sparked the conversation, but the complexity of that challenge, as it is currently framed, may wind up allowing the service providers to revert to the status quo, in which they moderate almost nothing and monetize almost everything.

Instead, taking on the less-challenging task of actually mitigating illegal content—copyright infringement, harassment, counterfeiting, trafficking, libel, etc.—does not require platform administrators to wade into the murky complexities of moderating speech. So, if they really mean it when they say, “We have to do better,” they can certainly start by complying with reasonable court orders and working with—rather than against—key stakeholders seeking a more lawful internet ecosystem.


Photo by David Crockett