DCA Report: Users Demand Some Accountability For Platforms

On December 31, 2016, in a post called The Morning After or Social Media is a Humbug, I wondered whether 2017 would be the year when users, advertisers, and even the major web platforms would begin to demand more accountability online and move away from the general belief that a laissez-faire approach to internet governance is universally beneficial.

After the election, many citizens woke up to the reality of fake news and consequently reaffirmed some faith in traditional journalism, driving an immediate spike in subscriptions. In March, we saw major brand advertisers threaten to boycott Google if the search and ad giant did not figure out how to keep brand ads away from toxic content like terrorist propaganda videos. And this morning, Digital Citizens Alliance released a new report, Trouble in Our Digital Midst, indicating that a majority of Americans may be losing trust in the internet as a source of reliable information and as a secure environment.

Building on past studies, like those documenting the proliferation of malware on pirate sites and trojan horse viruses used to prey on minors, DCA’s 2017 poll of 1,240 respondents indicates that approximately 60% of Americans now favor web companies taking more responsibility for the manner in which their platforms are used. Just a few years ago, it seemed that people largely accepted the premise that online platforms should remain neutral on the assumption that it was better to allow a few bad actors to slip through the net than to risk “stifling the speech” of innocent parties. But as fake news, malware scams, terrorist propaganda, and major online hacks have become more common and high-profile, that mood appears to be shifting.

In addition to sharing its findings, the DCA compliments major players like Google and Facebook for at least altering their standard response to the ills of bad actors …

“… digital platforms over the last year have shown a new willingness to intervene, impact, or even alter the content on their platforms on issues of national importance. Given that they have opened the door, they must take a fresh and holistic look at all illicit goods, services, content, and behavior on their platforms. The response, ‘we’re just a platform,’ clearly is not the answer in response to the Fake News problem and objectionable content that has brand name advertising imprinted upon it, and it shouldn’t be the answer when it comes to stolen credit cards, counterfeit goods, illicit drugs or pirated movies, TV shows and music, or the violation of our young.” 

This new report notes that 2017 was the first time the Federal Trade Commission issued a consumer warning about the increased likelihood that visiting pirate sites will expose users to malware attacks, leaving them vulnerable to ransom demands, identity theft, and computer slaving that preys on kids by exploiting their webcams and microphones. DCA also reminds readers of 2015 research by RiskIQ, which found that the dark-web business of hackers paying pirate site owners to distribute malware was worth over $70 million a year at the time of the study. “Take a moment to think about that – if hackers are paying content theft websites $70 million to drop malware on their sites that infect visitor computers, how much are they making?” asks the report.

DCA proposes what it calls a “neighborhood watch” approach to address these growing problems with a new mindset.  Primarily, this would involve the major platforms doing a better job of sharing information with one another regarding bad actors the same way retailers and other industry competitors do for the overall health of their markets. “While digital platforms collaborate on policy and technical issues, there is no evidence that they are sharing information about the bad actors themselves. That enables criminals and bad actors to move seamlessly from platform to platform,” the report states.

I’m not surprised to see Google and Facebook change their tune at least a little bit this year. The threat of boycott by the advertisers who pay the bills was sure to get a response, as was the prospect of shedding users who might become disenchanted with Facebook if it were overwhelmed by fake news, trolls, and psychos who share live video of murder. The DCA acknowledges that it is a major challenge to weed out hackers, counterfeiters, pirates, and violent extremists who use the internet as a base of operations without harming the free flow of interaction for the rest of us. Still, it is at least a step in the right direction if users are indeed beginning to understand that no community – perhaps least of all a virtual one – thrives without reasonable boundaries to protect safety and fair trade.

Is accountability losing value?

“I don’t know how that bong got in my sock drawer! And what’s a bong anyway?”

When was the last time you used the I Didn’t Know defense; or if you’re a parent, when was the last time you were confronted with the I Didn’t Know defense? Did it work? Not so much, right? But it often feels lately as though the soul of the IDK defense is gaining social clout as some of the darker realities of Web 2.0 collide with some perfectly good laws written in the era of Web 1.0. As such, does the technology we expect to provide transparency simultaneously diminish the value of accountability?

Provisions in U.S. laws such as the Digital Millennium Copyright Act of 1998 and the Communications Decency Act of 1996 provide important safe harbors that protect site owners against civil litigation* for actions performed by third parties using their sites. But when a site is a large enterprise with millions of pages and tens of millions of users around the world, and when it is ad-supported so that all traffic carries a profit motive, safe harbor becomes a rather complicated point of contention between site owners and any party harmed by activity on a site. The scale of the site, the number of users, the volume of UGC (user-generated content), and the safe harbor provisions all give a site owner considerable leverage when applying the IDK defense, even in instances where we might reasonably intuit that the owner knows precisely what’s happening on his site and more or less what certain activity is worth to him financially. While safe harbors are vital protections — I wouldn’t want to publish without them — the practical reality is that an owner can theoretically have it both ways: he can profit from illicit traffic by doing nothing to stop it even if he knows it’s there, and also claim to be “shocked gambling is going on in his establishment” if he is named in a suit or indictment.

The fundamental argument of ignorance is consistently the basis of defense for site owners ranging from piracy sites to Google to salacious gossip sites to the now-busted dark web drug-trafficking site Silk Road, whose alleged operator is on trial in New York. Prosecutors are charging founder Ross Ulbricht with being both mastermind and manager of a $1.2 billion criminal enterprise, and Ulbricht’s primary defenses appear to rest on I didn’t know and It wasn’t me, meaning that he denies being the man behind the avatar Dread Pirate Roberts, known to be the operator of the site. I won’t presume to comment on the particulars of the case other than to guess that with an alleged $80 million in Ulbricht’s account at the time of his arrest, if he’s not the guy, he’s probably got some ‘splainin to do.

But it was actually the comments I read below one story about the Silk Road trial that spawned this post. Unconcerned with the gravity of unchecked criminal activity occurring on that anonymous marketplace — activity that reportedly included murder-for-hire schemes — the comment that got my attention stated that the only reason Silk Road was shut down and Ulbricht indicted is that the government didn’t like that it was a market over which it had no control. Maybe this is the rant of one naive kid who represents a tiny population of naive kids, but I have to wonder, because the spirit of that comment is just a slight variation on the themes of the Web as Wild West or the entrepreneurial zeitgeist of the industry with its imperative to disrupt everything. It all smacks just a bit of embracing anarchy, which is often confused with freedom but is in fact the fertile ground of feudalism or totalitarianism.

“The mob is the mother of tyrants.” – Diogenes

Again, I strongly believe safe harbors are essential, and I am not qualified to suggest any workable revisions, but I do think the larger notion of accountability may be losing value the more we live in a hybrid society between the real and the digital. As we connect to one another, we also diffuse responsibility for certain actions, which undermines the very connections we are building, because that diffusion places greater distance between our actions and those who may be harmed by them. Because the Web spreads responsibility for actions across thousands or millions of people, it often feeds the exact opposite of the behaviors presumed to manifest in a self-governed environment.

We saw this with Reddit’s initial refusal and then reluctant agreement to shut down threads devoted to exploiting leaked nude photos of celebrities. The apparent logic among Reddit’s management and users was that because they did not steal the photos and the photos are now “out there,” nobody is really responsible for their distribution or exploitation; and since the images are tied to a news story, the photos are kinda like freedom of the press, right? In a non-web context, like TV news, we’d probably say that showing the photos would be a non-journalistic exploitation of these individuals in order to drive ratings. I’m not saying TV never does this, only that we seem to recognize it for what it is via that medium. Yet, when the medium is a web platform like the boards of Reddit, and the editorial decision to “broadcast” certain material is crowdsourced, somehow the moral assessment of that exploitative decision is skewed because the mob is now responsible, which means nobody is responsible.

And one question we should ask is what happens when illegal or harmful activities become more automated, when accountability is even further removed from individuals to whom laws and judgments may apply? Think about it. Illegal or tortious actions can be committed by bots the same way junk email is delivered. Or maybe you go on vacation for a few days only to find out that your “smart devices” ordered up a few bottles of oxycontin, some assault rifles, and a fake ID. Sound absurd? Maybe, but . . .

Check out this story about a pair of Swiss programmers who created an art installation called The Darknet: From Memes to Onionland, which offered a display of items that were purchased by a bot shopping autonomously on a dark web marketplace akin to Silk Road. The concept was to give the bot a weekly allowance, see what it purchased of its own accord, and then display the items in a gallery setting. Interpret the statement as you will. I would personally defend the actions of these programmers as an artistic expression, although the article cited does raise interesting questions as to who might be responsible for the illegal items such as ecstasy and a counterfeit passport that were purchased by the bot. In fact, the artist/programmers stated that they take full responsibility for this contraband, which is refreshing, and I certainly don’t think they should face any criminal penalties for possession. But their experiment suggests to me that even before we answer some of the tricky challenges posed by safe harbor provisions, the IDK defense is about to gain a new phrase: “Wasn’t me. The bot did it!”

*Changed from original publication, which erroneously referred to liability and criminal activity.  Thanks to a friend for correcting the mistake.