Platform Responsibility? How about starting with illegal content?

It may be hip these days to talk about platform responsibility, but just a couple of years ago, there were no mainstream conversations about how the operations and policies of online service providers (OSPs) might be enabling misinformation, hate speech, propaganda, etc. And while mea culpas from Facebook’s Mark Zuckerberg and Twitter’s Jack Dorsey make headlines, and Google tries to pitch the general message that “we’re all in this together,” my more cynical self wonders whether these service providers are just waiting out the news cycle. Waiting until we grow weary of this new discussion, which just happens to be focused on some of the most difficult (if not intractable) questions, like where to draw lines on protected speech.

As alluded to in this post, it is my personal theory that if the major service providers do not change their policies, practices, and rhetoric with regard to illegal content—or support of illegal content—then all this chatter about finding balance in the realm of protected speech is just pandering noise that will soon die down. I do not doubt that Zuckerberg, Dorsey, et al. feel personally conflicted about the role their platforms have played in elevating rank divisiveness into the mainstream of political discourse; but when these guys and other OSP representatives say things like “We have to do better,” I can’t help but think of the litany of cases in which internet companies have fought against complying with established legal principles at every turn.

I think of Google fighting a Canadian Supreme Court order in Google v. Equustek to delist links to a counterfeit product supplier. Or Yelp in Hassell v. Bird refusing to remove a review that a court held to be libelous. Or the fact pattern in BMG v. Cox Communications, which revealed a systemic policy whereby the OSP avoided compliance with the terms of the DMCA. Or even Viacom v. YouTube, which, though settled without trial, revealed a similar fact pattern of knowingly enabling users to infringe copyrights. Or one of my favorite moments in internet hubris: Reddit’s hand-wringing, apologetic rationale for removing a subreddit that was hosting celebrities’ nude photos stolen by a hacker.

Not one of the cases alluded to above involves protected speech, yet the responses have all been variations on the same theme: that removing anything from the web can only be a slippery slope toward “censorship.” And despite the fact that these and other examples generally entail unprotected, illegal content, we are now suddenly expecting the OSPs to grapple with the more complicated matter of monitoring legal speech and to do…something…as a matter of principle. Don’t get me wrong. A change in attitude would be welcome in so many ways. But if the major platforms cannot first amend their practices with regard to illegal material, I am highly doubtful they will come anywhere near striking the balance that everyone now having the “responsibility” conversation says is so essential.

In a panel discussion about platform responsibility hosted yesterday by the Technology Policy Institute, Daphne Keller of the Center for Internet and Society said that she “did not want to return to the copyright wars” in the context of the present discussion. That’s her prerogative, of course, but copyright infringement is probably the vanguard issue most instructive to this moment of internal and external consideration of what platform responsibility actually means. Two decades’ worth of policies adopted by the major OSPs, first to profit from copyright infringement and then to reshape copyright law itself in the courts, in academia, and in the public sphere, reveal the sense of “responsibility” these companies have felt toward the people they have been exploiting. And of course, when the exploited complained, they were told they were wrong—that they did not understand the future.

In fact, in yesterday’s panel, I believe it was Keller who alluded to the “false dichotomy” that pits technology against rightholders, but let us not forget the origin of that bullshit narrative. Because it didn’t come from the rightholders. Shall we do a search for all the editorials posted by Techdirt, by EFF, by Lessig and Lefsetz—by copyright critics large and small—labeling creative rightholders as technology Luddites “clinging to old models”? That’s not the copyright owners’ narrative; it’s Big Tech’s narrative. So, if there is a false dichotomy that now demands clarification, it ought to be recanted by the liars who wrote it and are still repeating it. That would be taking responsibility.

Interestingly enough, as a former Associate General Counsel for Google, Keller worked on the aforementioned Equustek case, and in June of 2017, she wrote a blog post for CIS in which she labeled the Canadian court order that Google remove search results globally as an “ominous” proposal. In simple terms, this was a case in which a counterfeiting business misappropriated Equustek’s trade secrets and then sold knock-off products via multiple sites on the web. Equustek sought and won a court order requiring Google to delist the counterfeiter’s sites globally from its search results.

I cite this example because it is comparatively straightforward. The legit company deserves the business earned by its products; consumers deserve to know what they’re buying and from whom; and there is no speech protection for trade in counterfeit goods. Equustek is also instructive because there is a clear parallel between its prayer for injunctive relief and, say, the motion picture industry’s efforts to have Google delist or demote major pirate sites, which are likewise not protected speech. Yet, in her 2017 post, Keller sums up the “ominous” nature of the Canadian court order thus:

“Canada’s endorsement of cross-border content removal orders is deeply troubling. It speeds the day when we will see the same kinds of orders from countries with problematic human rights records and oppressive speech laws. And it increases any individual speaker’s vulnerability to laws and state actors elsewhere in the world. Content hosting and distribution are increasingly centralized in the hands of a few multinational companies – Google, Facebook, Apple, Amazon and Microsoft with their web hosting services, etc. Those companies have local presence and vulnerability to formal jurisdiction and real world threats of arrest or asset seizure in scores of countries.”

Apropos of that first sentence, Keller asks rhetorically in the same post, “Can Russia use its anti-gay laws to make search results unavailable to Canadians?” I have two responses. The first is no, because the hypothetical Russian court order would violate both Canadian and American law, which is not true of Canada’s order to Google in Equustek. Keller, who is really citing Canada’s Michael Geist, falsely alleges that the defendant in Equustek is disseminating protected “speech and information,” which is not the case because the content is infringing and misleading in a manner that could be construed as fraudulent.

My second response is to mention that the policy view Keller seems to advocate—that the rule of law just doesn’t work in cyberspace—is exactly how we arrived at the moment in history when the Russian government is in fact exporting its agenda to the U.S. by using our own speech rights against us on social media. The Geist/Keller example of the Russian court order is pure hypothetical hysteria, but the phenomenon in which paid Russian trolls foment anti-gay and other hateful sentiments to ratchet up divisiveness in the U.S. is a verified reality. I happen to think this makes pretty compelling evidence that the rule of lawlessness in cyberspace hasn’t worked out so well, but perhaps that’s just my inner Luddite talking.

So, although the topic of platform responsibility may be trending right now, I maintain some doubt that the OSPs can, or even should, try to protect society against the social and political effects of problematic information. That topic may be what sparked the conversation, but the complexity of that challenge, as it is currently framed, may wind up allowing the service providers to revert to the status quo, in which they moderate almost nothing and monetize almost everything.

Instead, taking on the less-challenging task of actually mitigating illegal content—copyright infringement, harassment, counterfeiting, trafficking, libel, etc.—does not require platform administrators to wade into the murky complexities of moderating speech. So, if they really mean it when they say, “We have to do better,” they can certainly start by complying with reasonable court orders and working with—rather than against—key stakeholders seeking a more lawful internet ecosystem.

