“I don’t know how that bong got in my sock drawer! And what’s a bong anyway?”
When was the last time you used the I Didn’t Know defense, or, if you’re a parent, when was the last time you were confronted with it? Did it work? Not so much, right? But lately it often feels as though the spirit of the IDK defense is gaining social clout as some of the darker realities of Web 2.0 collide with some perfectly good laws written in the era of Web 1.0. So it’s worth asking: does the technology we expect to provide transparency simultaneously diminish the value of accountability?
Provisions in U.S. laws such as the Digital Millennium Copyright Act of 1998 and the Communications Decency Act of 1996 provide important safe harbors that protect site owners against civil litigation* for actions performed by third parties while using their sites. But when a site is a large enterprise with millions of pages and tens of millions of users around the world, and when it is ad-supported so that all traffic carries a profit motive, safe harbor becomes a rather complicated point of contention between site owners and any party harmed by activity on a site. The scale of the site, the number of users, the volume of UGC (user-generated content), and the safe harbor provisions all give a site owner considerable leverage when applying the IDK defense, even in instances where we might reasonably intuit that the owner knows precisely what’s happening on his site and more or less what certain activity is worth to him financially. While safe harbors are vital protections (I wouldn’t want to publish without them), the practical reality is that an owner can theoretically have it both ways: he can profit from illicit traffic by doing nothing to stop it even if he knows it’s there, and also claim to be “shocked gambling is going on in his establishment” if he is named in a suit or indictment.
The fundamental argument of ignorance is consistently the basis of defense for site owners ranging from piracy sites to Google to salacious gossip sites to the now-busted dark web drug-trafficking site Silk Road, whose alleged operator is on trial in New York. Prosecutors are charging founder Ross Ulbricht with being both mastermind and manager of a $1.2 billion criminal enterprise, and Ulbricht’s primary defenses appear to rest on I didn’t know and It wasn’t me, meaning that he denies being the man behind the avatar Dread Pirate Roberts, known to be the operator of the site. I won’t presume to comment on the particulars of the case other than to guess that, with an alleged $80 million in Ulbricht’s account at the time of his arrest, if he’s not the guy, he’s probably got some ’splainin’ to do.
But it was actually the comments I read below one story about the Silk Road trial that spawned this post. Unconcerned with the gravity of unchecked criminal activity occurring on that anonymous marketplace (activity that reportedly included murder-for-hire schemes), the comment that got my attention was one stating that the only reason Silk Road was shut down and Ulbricht indicted is that the government didn’t like a market over which it had no control. Maybe this is the rant of one naive kid who represents a tiny population of naive kids, but I have to wonder, because the spirit of that comment is just a slight variation on the theme of the Web as Wild West, or on the entrepreneurial zeitgeist of an industry with its imperative to “disrupt everything.” It all smacks just a bit of embracing anarchy, which is often confused with freedom but is in fact the fertile ground of feudalism or totalitarianism.
“The mob is the mother of tyrants.” – Diogenes
Again, I strongly believe safe harbors are essential, and I am not qualified to suggest any workable revisions, but I do think the larger notion of accountability may be losing value the more we live in a hybrid society between the real and the digital. As we connect to one another, we also diffuse responsibility for certain actions, which works against the very purpose of those connections, because that diffusion actually places greater distance between our actions and the people who may be harmed by them. And because the Web spreads responsibility across thousands or millions of people, it often feeds behaviors exactly opposite to the ones we presume will manifest in a self-governed environment.
We saw this with Reddit’s initial refusal and then reluctant agreement to shut down threads devoted to exploiting leaked nude photos of celebrities. The apparent logic among Reddit’s management and users was that because they did not steal the photos and the photos are now “out there,” nobody is really responsible for their distribution or exploitation; and since the images are tied to a news story, the photos are kinda like freedom of the press, right? In a non-web context, like TV news, we’d probably say that showing the photos would be a non-journalistic exploitation of these individuals in order to drive ratings. I’m not saying TV never does this, only that we seem to recognize it for what it is via that medium. Yet, when the medium is a web platform like the boards of Reddit, and the editorial decision to “broadcast” certain material is crowdsourced, somehow the moral assessment of that exploitative decision is skewed because the mob is now responsible, which means nobody is responsible.
And one question we should ask is what happens when illegal or harmful activities become more automated, when accountability is even further removed from individuals to whom laws and judgments may apply? Think about it. Illegal or tortious actions can be committed by bots the same way junk email is delivered. Or maybe you go on vacation for a few days only to find out that your “smart devices” ordered up a few bottles of OxyContin, some assault rifles, and a fake ID. Sound absurd? Maybe, but . . .
Check out this story about a pair of Swiss programmers who created an art installation called The Darknet: From Memes to Onionland, which offered a display of items that were purchased by a bot shopping autonomously on a dark web marketplace akin to Silk Road. The concept was to give the bot a weekly allowance, see what it purchased of its own accord, and then display the items in a gallery setting. Interpret the statement as you will. I would personally defend the actions of these programmers as an artistic expression, although the article cited does raise interesting questions as to who might be responsible for the illegal items such as ecstasy and a counterfeit passport that were purchased by the bot. In fact, the artist/programmers stated that they take full responsibility for this contraband, which is refreshing, and I certainly don’t think they should face any criminal penalties for possession. But their experiment suggests to me that even before we answer some of the tricky challenges posed by safe harbor provisions, the IDK defense is about to gain a new phrase: “Wasn’t me. The bot did it!”
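For readers wondering how little machinery such an experiment actually requires, here is a minimal, purely hypothetical sketch of the logic described above: a fixed weekly allowance and a random pick from whatever a marketplace happens to list. Nothing here is the artists’ actual code; the names (WEEKLY_BUDGET_USD, weekly_purchase) and the stand-in listings are my own illustration. The point is simply that once someone presses run, every subsequent purchase happens without a human decision, which is exactly where “the bot did it” begins.

```python
import random

# A purely hypothetical sketch of the autonomous-shopper idea described above.
# None of this is the artists' actual code; the names and the stand-in
# listings below are illustrative only.

WEEKLY_BUDGET_USD = 100  # illustrative weekly allowance

# Stand-in for whatever a marketplace might list; a real bot would scrape or
# query a live marketplace instead of reading a hard-coded list.
SAMPLE_LISTINGS = [
    {"title": "novelty sunglasses", "price_usd": 15},
    {"title": "used paperback", "price_usd": 8},
    {"title": "baseball cap", "price_usd": 22},
    {"title": "sticker pack", "price_usd": 5},
]


def weekly_purchase(listings, budget):
    """Pick one random listing the budget can cover; no human reviews the choice."""
    affordable = [item for item in listings if item["price_usd"] <= budget]
    if not affordable:
        return None
    return random.choice(affordable)


if __name__ == "__main__":
    choice = weekly_purchase(SAMPLE_LISTINGS, WEEKLY_BUDGET_USD)
    if choice is not None:
        print(f"Bot ordered: {choice['title']} (${choice['price_usd']})")
    else:
        print("Nothing affordable this week.")
```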
*Changed from original publication, which erroneously referred to liability and criminal activity. Thanks to a friend for correcting the mistake.
The problem is that, like William Gibson’s future, accountability is unevenly distributed, and the burden of it falls mostly on those who can afford it least. Think of how you have to take a drug test to work at Wal-Mart but not at Bear Stearns. Or how the “losers” who defaulted on their mortgages were vilified more than the predatory loan companies.
Another thing I’ve noticed is that people seem to be held more accountable for their ideas than their actions. The meme “freedom of speech doesn’t mean freedom from consequences” basically seems to mean “if I’m not in the government, I can target someone’s speech as much as I want.” The extreme version of this was, of course, Charlie Hebdo, where free speech was not threatened by government action at all.
I’m bemused by the trend of calling out people who make racist tweets. Yes, these people are saying vile things, but many are young people whose opinions can change at the drop of a hat. Being held forever accountable for something one said at 15 is a pretty bad idea, IMHO. As well, getting some busboy fired for tweeting something about black people does nothing to stop systemic racism, and may even encourage resentment.
I dunno that accountability is losing value… it’s losing meaning.
À la Google, all you need to do is say you’re going to change (just to deflect legal and public pressure and buy yourself another year of profiting off slimy behavior), or just blame it on an algorithm (skipping over the fact that they very carefully write said algorithms to produce a very specific outcome).
Or Wal-Mart, which destroys communities and can actually COST nearly a half to a billion dollars (per store, per year) in welfare costs and other public assistance, just so it can sell 5 cents lower than a competitor. As part of the orientation built into their corporate structure, they actually hand out public assistance forms to new employees and instruct them how to fill them out and apply… just because of their horrid wages, a “full time” employee qualifies for welfare… that is them being “accountable.”
Or certain Congressmen allowing large corporations to LITERALLY write legislation… I usually balk at some of my “southern” relatives chiming “we need to take our country back” (as I’m positive they have an entirely different meaning for their chorus), but there is definitely something to be said for the phrase…
Please, David. Apply your principles evenly, and most especially to those who owe creators a proper royalty accounting and a fair distribution of pooled proceeds. Only one of the reasons yours is such an uphill battle is that those who own music, movies, books and the like refuse to apply a fiduciary’s care to the proceeds of art, culture and knowledge.
Huh?
In the future it will become increasingly difficult to blame some meat puppet for the actions of a machine that takes actions based on learned behavior. Take self-driving cars for instance. Any sort of attribution will be increasingly difficult too. Is your assigned identity really part of you? Consider identity theft is mostly a modern issue.
So given your cynical substitution of the term “meat puppet” for “person,” can we assume that none of the contrarian ideas you’ve posed on this blog are your own? Or more properly, that they are the ideas of a human? What’s the difference between an anonymous author who appears to embrace or accept the end of human relevance and a bot doing a bit of PR for its species?
David wrote:

Or more properly, that they are the ideas of a human? What’s the difference between an anonymous author who appears to embrace or accept the end of human relevance and a bot doing a bit of PR for its species?
You can’t know.
M wrote:
it will become increasingly difficult to blame some meat puppet for the actions of a machine that takes actions based on learned behavior. Take self-driving cars for instance. Any sort of attribution will be increasingly difficult too.
I assume the liability will be on the manufacturer of whatever system failed. It’s not as if specific people are blamed for recalls now, or held personally liable for deaths stemming from those failures. I don’t see things being much different with self-driving cars.
Consider identity theft is mostly a modern issue.
Identity theft has been around for centuries. For instance, before photo IDs you simply killed a person and assumed his identity. It’s a classic plot device in theatre and opera.
In Genesis, Jacob assumes the identity of his brother. It’s not a new crime, it simply takes on a different character in the information age.
David wrote:
What’s the difference between an anonymous author who appears to embrace or accept the end of human relevance and a bot doing a bit of PR for its species?
Damn! He’s on to us! Scramble, bots! Scramble! Regroup in the Matrix!
[youtube http://www.youtube.com/watch?v=-bYAQ-ZZtEU&w=595&h=365]