Platform Responsibility? How about starting with legal content?

It may be hip these days to talk about platform responsibility, but just a couple of years ago, there were no mainstream conversations about how the operations and policies of online service providers might be enabling misinformation, hate speech, propaganda, etc. And while mea culpas from Facebook’s Mark Zuckerberg and Twitter’s Jack Dorsey make headlines, and Google tries to pitch the general message that “we’re all in this together,” my more cynical self wonders whether these service providers are just waiting out the news cycle, waiting until we grow weary of this new discussion, which just happens to be focused on some of the most difficult (if not intractable) questions, like where to draw lines on protected speech.

As alluded to in this post, it is my personal theory that if the major service providers do not change their policies, practices, and rhetoric with regard to illegal content—or support of illegal content—then all this chatter about finding balance in the realm of protected speech is just pandering noise that will soon die down. I do not doubt that Zuckerberg, Dorsey, et al. feel personally conflicted about the role their platforms have played in elevating rank divisiveness into the mainstream of political discourse; but when these guys and other representatives of OSPs say things like “We have to do better,” I can’t help but think of the litany of cases in which internet companies have fought against complying with established legal principles at every turn.

I think of Google fighting a Canadian Supreme Court order in Equustek v. Google to delist links to a counterfeit product supplier. Or Yelp in Hassell v. Bird refusing to remove a review that a court held to be libelous. Or the fact pattern in BMG v. Cox Communications which revealed a systemic policy whereby the OSP avoided compliance with the terms of the DMCA. Or even Viacom v. YouTube, which, though settled without trial, revealed a similar fact pattern of knowingly enabling users to infringe copyrights. Or one of my favorite moments in internet hubris: Reddit’s hand-wringing, apologetic rationale for removing a subreddit that was hosting stolen nude photos of celebrities, who happened to be victims of a hacker.

Not one of the cases alluded to above involves protected speech, yet the responses have all been variations on the same theme: that removing anything from the web can only be a slippery slope toward “censorship.” And despite the fact that these and other examples generally entail unprotected, illegal content, we are now suddenly expecting the OSPs to grapple with the more complicated matter of monitoring legal speech and to do…something…as a matter of principle. Don’t get me wrong. A change in attitude would be welcome in so many ways. But if the major platforms cannot first amend their practices with regard to illegal material, I am highly doubtful they will come anywhere near striking the balance that everyone who is now having the “responsibility” conversation says is so essential.

In a panel discussion about platform responsibility hosted yesterday by the Technology Policy Institute, Daphne Keller of the Center for Internet and Society said that she “did not want to return to the copyright wars” in the context of the discussion now being had. That’s her prerogative, of course, but copyright infringement is probably the vanguard issue that is most instructive to this moment of internal and external consideration of what platform responsibility actually means. Two decades’ worth of policies adopted by the major OSPs—first to profit from copyright infringement and then to reshape copyright law itself in the courts, in academia, and in the public sphere—reveal the sense of “responsibility” these companies have felt toward the people they have been exploiting. And of course when the exploited complained, they were told they were wrong—that they did not understand the future.

In fact, in yesterday’s panel, I believe it was Keller who alluded to the “false dichotomy” that pits technology against rightholders, but let us not forget the origin of that bullshit narrative. Because it didn’t come from the rightholders. Shall we do a search for all the editorials posted by Techdirt, by EFF, by Lessig and Lefsetz—by copyright critics large and small—who have labeled creative rightholders as technology Luddites “clinging to old models”? That’s not the copyright owner’s narrative, it’s Big Tech’s narrative. So, if there is a false dichotomy, which now demands clarification, it ought to be recanted by the liars who wrote it and are still repeating it. That would be taking responsibility.

Interestingly enough, as a former Associate General Counsel for Google, Keller worked on the aforementioned Equustek case, and in June of 2017, she wrote a blog post for CIS in which she labeled the Canadian Court order that Google remove search results globally as an “ominous” proposal. In simple terms, this was a case in which a counterfeit business infringed Equustek’s trade secrets and then sold knock-off products via multiple sites on the web. Equustek sought and won a court order to remove the counterfeiter’s sites globally from Google’s search results.

I cite this example because it is comparatively straightforward. The legit company deserves the business earned by its products; consumers deserve to know what they’re buying and from whom; and there is no speech protection for trade in counterfeit goods. Equustek is also instructive because there is a clear parallel between its prayer for injunctive relief and, say, the motion picture industry’s efforts to have Google delist or demote major pirate sites, which are also not protected speech. Yet, in her 2017 post, Keller sums up the “ominous” nature of the Canadian Court order thus:

“Canada’s endorsement of cross-border content removal orders is deeply troubling. It speeds the day when we will see the same kinds of orders from countries with problematic human rights records and oppressive speech laws. And it increases any individual speaker’s vulnerability to laws and state actors elsewhere in the world. Content hosting and distribution are increasingly centralized in the hands of a few multinational companies – Google, Facebook, Apple, Amazon and Microsoft with their web hosting services, etc. Those companies have local presence and vulnerability to formal jurisdiction and real world threats of arrest or asset seizure in scores of countries.”

Apropos that first sentence, Keller asks rhetorically in the same post, “Can Russia use its anti-gay laws to make search results unavailable to Canadians?” I have two responses to this: the first is No, because the hypothetical Russian court order would violate both Canadian and American law, which is not true of Canada’s order to Google in Equustek. Keller, who is really citing Canada’s Michael Geist, falsely alleges that the defendant in Equustek is disseminating protected “speech and information,” when in fact the content is infringing and misleading in a manner that could be construed as fraudulent.

My second response is to mention that the policy view Keller seems to advocate—that the rule of law just doesn’t work in cyberspace—is exactly how we arrived at the moment in history when the Russian government is in fact exporting its agenda to the U.S. by using our own speech rights against us on social media. The Geist/Keller example of the Russian court order is pure hypothetical hysteria, but the phenomenon in which paid Russian hackers are fomenting anti-gay and other hateful sentiments to ratchet up divisiveness in the U.S. is a verified reality. I happen to think this makes pretty compelling evidence that the rule of lawlessness in cyberspace hasn’t worked out so well, but perhaps that’s just my inner Luddite talking.

So, although the topic of platform responsibility may be trending right now, I maintain some doubt that the OSPs can, or even should, try to protect society against the social and political effects of problematic information. That topic may be what sparked the conversation, but the complexity of that challenge, as it is currently framed, may wind up allowing the service providers to revert to the status quo, in which they moderate almost nothing and monetize almost everything.

Instead, taking on the less challenging task of actually mitigating illegal content—copyright infringement, harassment, counterfeiting, trafficking, libel, etc.—does not require platform administrators to wade into the murky complexities of moderating speech. So, if they really mean it when they say, “We have to do better,” they can certainly start by complying with reasonable court orders and working with—rather than against—key stakeholders seeking a more lawful internet ecosystem.


Photo by David Crockett

Turns Out Money Talks in Silicon Valley

For years, producers of creative content—from individual artists to mass-media corporations—have tried to engage with internet companies (mainly Google) in an effort to stop the facilitation of rampant, unlicensed access to their material. Whether the complaint is millions of unlicensed works on YouTube, or search results leading users to pirate sites, copyright owners are all too familiar with the dual response We can’t and We shouldn’t. This is shorthand for the internet industry’s standard claim that they can’t effectively police their platforms; and even if they could, they shouldn’t because freedom.

But as reported in January 2017, advertising giant Procter & Gamble issued a warning on behalf of global advertisers who spend a combined $70+ billion on digital, announcing that they were no longer willing to accept can’t and shouldn’t as answers to their key complaints. These were a lack of transparency (i.e., independent auditing) in measuring the quality and effectiveness of digital advertising; and an inability to prevent brands from supporting intolerable content. So, terrorist recruiting videos on YouTube brought to you by Colgate just aren’t working for the brand managers anymore. Yet, strangely, the internet companies and their bevy of think-tankers have not told these advertisers to stop hating the future and change their business models. (Though I’d like to watch if they did.)

Fast-forward a year and the Wall Street Journal this week reports that Unilever is threatening to substantially reduce its ad buy on Facebook and YouTube if the companies do not more effectively weed out fake news and other divisive content like racism, sexism, and violence. What’s striking about this article is its concluding follow-up: P&G’s brand officer Mark Pritchard, who in 2017 charged the internet platforms to clean up their act, notes that “progress has been impressive” and that ninety percent of his demands have been met.

It will come as no surprise to the creative community that, when revenue is at stake, the major internet companies suddenly discover that it is both technically possible and ideologically conceivable to police their platforms a bit more aggressively than they have to date. Artists and creators should follow these developments because the political, social, and financial pressure being exerted on the platform providers can make the companies more vulnerable to potential liability for infringing creative works; and this might make them a bit more cooperative about solving the “unsolvable” issue of mass infringement. By demonstrating a capacity for control (because now they have to), the platforms underscore what should be obvious to most people — that the tradition of shrugging off the interests of rights holders has been a business decision. Period.

No doubt, many “digital rights” activists will prophesy the end of days for democracy in response to this trend toward platform responsibility; but they can take heart knowing that democracy hasn’t exactly thrived under the principles applied thus far. The assumption that all online interactions are protected speech, and that more speech is the only antidote to harmful speech, is still proving to be a destructive fallacy every second of every day. And it turns out the advertisers, whose money pays for these platforms of democracy, don’t accept that the answer to hate speech and fake news is to just let it ride until our better angels eventually prevail. It turns out this is both bad for society and bad for business. It turns out money talks in Silicon Valley. And if that’s the only way to get internet companies to behave like citizens instead of bullies, then whatever works.

Google Says Humbug to Child Sex-Trafficking Victims

Just in time for Christmas, it seems Google is up to its Grinchy tricks in the House of Representatives, allegedly the big gun behind an effort to undermine the anti-child-sex-trafficking bill FOSTA, which is the House version of the Senate’s SESTA.  Because these bills propose to amend the liability shield in Section 230 of the Communications Decency Act (1996), the major tech firms, along with organizations like the Electronic Frontier Foundation, have worked to clobber the proposals, lobbying Members of Congress and promoting anti-SESTA campaigns to the public.

Shortly after representatives of Facebook, Google, and Twitter endured some uncomfortable grilling on the subject of Russian disinformation campaigns, the Internet Association endorsed SESTA in early November.  But according to a new editorial in The Hill by Mary Mazzio, it looks like Big Tech lobbyists are orchestrating a bill swap in the House, proposing an approach that avoids amending Section 230. Mazzio is the writer/director of the trafficking documentary film I Am Jane Doe, which apparently inspired legislative action on this issue in the first place.  She states in her article…

“This full replacement of FOSTA was done under cover of darkness, quickly and quietly, with no input on the specific language from the NGO community, victims or survivors. The bill, which now amends the Mann Act, fails to address the Section 230 problem identified in the 1st Circuit, and worse, strips away civil remedies from survivors as well as states attorneys general. The language also appears to permanently foreclose all private rights of action which victims currently have under the federal trafficking statute.”

Her reference to the 1st Circuit is to the case Doe v. Backpage in which the court read the Section 230 statute broadly enough to hold that Backpage’s owners were entirely shielded from civil litigation pursued by several trafficking victims who blamed the site for facilitating their victimization by sex-traffickers. In response to a still-developing body of evidence implicating Backpage’s active role in the trafficking of minors, Congress has sought to at least clarify that the “safe harbor” provision of Section 230 is not meant to shield online services from liability for this type of conduct.

The internet industry, with substantial help from the EFF, has tried to characterize these bills as harmful to free speech and innovation (again) and has promoted a limited body of scholarship claiming that the bills will do more harm than good for victims. I have written several responses to the anti-SESTA campaign, but Ms. Mazzio sums it up in her description of the alleged new proposal now sitting in the House Judiciary Committee: “The net result is a new bill which genuflects to the altar of business practices and profitability where children and trafficking victims are collateral damage.”

Collateral damage is exactly right. It’s a concept that musicians and other artists know all too well—not that their losses are comparable to what trafficking victims endure, only that the policy agenda is very familiar.   But this is the price Google & Friends say must be paid in the interest of “internet freedom,” which is actually a euphemism for their liability shields.

Big Tech’s absolutism on Section 230 is this industry’s version of the NRA saying that “spree killings are the price we pay for freedom.” In fact, if we put it that bluntly—children being sold to be systematically raped is the price we pay for internet freedom—it seems just a little defeatist and lacking in moral authority, especially in the year when Americans have declared they’re turning the tables on sexual harassment. It seems to me that if the Democrats in Congress felt an urgency to shed both Conyers and Franken in the current climate, it is probably not too much to ask that they give serious attention to the FOSTA proposal, keep only the victims in the foreground, and let Google’s interests be damned.

It’s hard to say that these bills will categorically help trafficking victims; they are a limited remedy at best, given the hideous nature of the crime. But I’d like to believe we can all agree that the financial interests of one of the world’s largest companies are less important than an effort to mitigate such egregious harm being done to kids. It is rather astounding to see that netizens (whoever the hell they are) are so self-righteous about the Net Neutrality thing that they’ll justify racist attacks and death threats aimed at Ajit Pai. But some of these same good people are willing to allow children to be collateral damage just because Google & Co. say “free speech.” If that’s really who we are, somebody show me how to actually break the internet because I’m all for it.


Photo by alexkich