Some Good Copyright News From Down Under

G’Day! Since there’s so much gloomy news here in the States, I thought I’d take a moment to note that Australia did a couple of pretty cool things recently.  They legalized same-sex marriage, so good on them for that.  And on the 6th of this month, they introduced a safe harbor provision to their copyright law that would exclude platform providers like Google and Facebook.  The new copyright liability shield would extend to carriage service providers, academic and cultural institutions, and organizations for the disabled.  If the bill passes as is, the legislation could prove instructive if and when the U.S. resumes debate on revision of its own safe harbor provisions.

The internet giants have strenuously lobbied the Australian government to adopt blanket safe harbors akin to those in Section 512 of the U.S. DMCA. Naturally, they’ve repeated the standard arguments that the liability shield against copyright infringement, established in this country in 1998, remains essential for free speech and the continued innovation of web platforms.  (What other argument do they ever make?)

Rights holders in the U.S., perhaps most prominently independent musicians, have tried for years to describe how the DMCA’s safe harbor provisions have had the unintended consequence of enabling the major—otherwise legal—platforms to profit from mass copyright infringement.  As described in detail in several posts, the most obvious example is YouTube, which grew to its monopsony position partly on the backs of creators whose works were constantly uploaded to the platform without license.

Because copyrighted works are uploaded by users, a platform like YouTube remains shielded from liability but still free to reap the rewards of traffic driven by the high volume of infringement. The fundamental flaw in the policy should be obvious:  where a corporation has both financial incentive and zero liability, it’s probably going to make some effort to profit from whatever conduct was supposed to be mitigated by the policy.  Both the harm done to creators and the untouchable market dominance of YouTube are unintended results of the safe harbor provisions in the DMCA.

Although presently overshadowed by more serious policy issues (and the circus), the subject gained momentum when Congress first took up review of the Copyright Act in 2013 and the Copyright Office began its review of the DMCA last year; since then, many independent artists and rights groups have amplified the message that revision of Section 512 is long overdue.  In response, the internet industry and the familiar network of “digital rights” groups began promoting the counter-message that the status quo of the safe harbor is essential to (say it with me) free speech and innovation. And this is on top of the widely-promoted fallacy that the DMCA has predominantly been abused by rights holders to stifle speech, even where no infringement exists.

The Australian bill may yet change, but if its more narrowly tailored safe harbor provision becomes law, it could be instructive to the American creative community if and when we resume discussion about Section 512 of the DMCA.  At the moment, it feels quaintly optimistic to imagine standing on the other side of so much political chaos with a still-extant republic in which to debate copyright law, but one must keep hope alive I suppose. In the meantime, what’s intriguing about Australia’s legislative process on this matter is that their bill reflects an effort to balance the intent of safe harbors with effective copyright protection—but with a contemporary understanding of online infringement that simply did not exist in the U.S. in the 1990s.  At the very least, it’s encouraging to see legislators draw a sharp distinction between public-serving, cultural institutions and the world’s largest, for-profit tech giants.  For far too long, we in the States have allowed Google & Friends to blur those lines.

Industry Voices Stick to Playbook Talking DMCA

Remember when I posted A Guide to Critiquing Copyright in the Digital Age?  Quite a few people read it and seemed to enjoy it, which is cool.  And most recently, it seems that Joshua Lamel, executive director at Re:Create, wrote an article for the Huffington Post about prospective revision to the DMCA, in which he appears to have followed this guide fairly closely.  So, let’s see how he did based on the recommended guidelines …

1. Remind readers how cool it was when we killed SOPA.

Check! Lamel scores 100% when he writes in his lead:

“Defeating the Stop Online Piracy Act (SOPA) and PROTECT IP Act (PIPA) – products of the entertainment industry’s intense and well-financed lobbying campaign – was a watershed victory for consumers, free speech and technology innovation. But the fight is not yet over.”

2. Remind readers that all remedies to infringement are basically SOPA.

Kudos!  Lamel is into his third paragraph of an article that is supposedly about potential revision to the DMCA, but notice that he is still aligning the discussion with SOPA when he writes:

“After failing to persuade Congress to pass SOPA and PIPA, they are now targeting different entities and state legislatures, government agencies, the courts, ICANN, the European Union and international treaties – these are just some of their chosen venues. But neither their misguided demands nor the potentially disastrous consequences have changed.”

Lamel gets high marks here for remaining entirely detached from reality, glossing over basic truths, like the fact that rights holders are just beginning to ask lawmakers to review DMCA safe harbor provisions. Or the fact that safe harbors have been consistently abused by many of the largest ISPs and platforms, which have gotten away with tens of millions of monetized infringements. Lamel successfully avoids any acknowledgement that the outdated DMCA problem is substantial; that the conversation is in early stages; that there are no specific remedy proposals on the table; and—this is really why his grade is so high—as an attorney, Lamel knows full well that the mechanisms in the DMCA have nothing to do with SOPA.  So, he scores another 100% for meeting the goals of Guideline #2.

3. Remind readers that the copyright industries hate the future.

Remember:  the purpose of this guideline is to help the reader avoid considering specifics and reduce the conversation to good guys and bad guys. Lamel’s work here is solid, but I feel it could have been stronger.  While his article does contain implications that the entertainment industry is willing to stifle the potential of the Internet, I’m going to have to give him an 85% for failing to suggest that the entertainment industry is eager to stifle the potential of the Internet.  This isn’t bad …

“The implementation of excessive and over-broad intellectual property protection measures would strangle the freedom and innovation essential to growth of the Internet.”

But after four years of repeating the theme “strangling the growth of the Internet”, it just doesn’t make the blood boil like it did in 2012.  Readers may be less susceptible to this kind of vague, scare-mongering gibberish. In particular, anyone who might be following the real story may even notice that many of the remedies people keep saying will stifle speech and innovation have actually been applied repeatedly in the U.S. and abroad without stifling speech or innovation.  Lamel did good work here, but it’s a little mailed in.

4. Make some crazy shit up.

In scoring Lamel on this guideline, however, I have to give extra credit.  It’s honestly hard to pick which crazy shit to highlight as his best work.  It could be conflating the US, Europe, and the DMCA in the same paragraph despite the fact that Europe doesn’t have the DMCA.  It might be the specifically burdensome mechanisms he alludes to despite the absence of even a single word’s worth of proposed revision to the existing law.  But I think the most impressive made up crazy shit is this:

“Imagine a world where just the mere allegation of infringement would permanently keep that content down. This would have huge implications for everyone when it comes to sharing a video on Facebook or quoting song lyrics. That’s because social media networks would be forced to suppress user generated content, as they would not know if it was licensed or not. Parents can forget posting videos of their kids dancing to music and candidates would not be able to post campaign speeches because of the music that plays in the background. Remix culture and fan fiction would likely disappear from our creative discourse. Live video streaming sites would cease to exist. Notice and staydown might seem innocuous, but in reality it is content filtering without due process.”

High marks indeed. Not only does Lamel cite a whole range of ordinary, social media activity that would be entirely unaffected by a prospective tweak to Section 512 of the DMCA, not only does he ignore the fact that the entertainment industry continues to forge new deals to support remix culture and fan fiction, etc., but he leaps all the way over the candlestick to insist that a revision to the DMCA (which has not even been substantively discussed) will automatically remove due process from the law as it currently stands. In particular, this assertion can help the reader ignore the fact that the requirements in question in the DMCA are more akin to voluntary conditions that are either met or not met prior to actual legal proceedings.  So, this is some excellent made up crazy shit, and I give Lamel an extra fifty points for a score of 150% on meeting the goals of this guideline.

5. Write a misleading headline.

This is a tough one to score. Technically, both the headline and the article are grossly misleading, and the headline will garner Likes and shares and retweets by users who won’t bother to read the article.  So, all of those are points in Lamel’s favor.  On the other hand, the promise of the headline is fulfilled by the article, which is a departure from the guideline’s recommendation to be even more misleading.  It feels like an 80%, giving Lamel a final grade of 103% for adhering to A Guide to Critiquing Copyright in the Digital Age.


But in all seriousness …

Both the takedown procedures and the safe harbor provisions in the DMCA are mechanisms of great importance to ISPs, copyright holders, and general users.  And there is no straight line dividing the needs and concerns of the various parties into Side A or Side B.  Many copyright holders have a strong interest in safe harbors, yet the tens of millions of takedown notices sent monthly by some rights holders–in the chronic game of Whack-a-Mole over large-scale infringement–were never envisioned when the DMCA was passed in 1998.

Any proposed revision to the DMCA will seek the same balance as was initially sought in the law; it will contain language that can be debated and discussed; and proposed remedies may or may not include the kind of algorithmic “filtering” alluded to by Lamel and others.  When we look at cases like BMG v. Cox, the Grooveshark case, and the recently announced suit by photographer Jen Reilly against Twitter, we see that a chronic point of contention among rights holders is that ISPs push the limits of good faith with regard to the safe-harbor conditions as written in the DMCA today.  Hence, it is feasible that these behaviors could be remedied without requiring any new technological paradigm. To say otherwise is to jump to very early conclusions while ignoring the real problem.

If and when DMCA revision becomes truly active, we can expect this same kind of editorial from the same voices; but at this point in the discussion, the fact that Re:Create and EFF are already leading with straw man arguments is typical of the kind of “cooperation” rights holders are used to from many of the companies these organizations represent.

Is accountability losing value?

“I don’t know how that bong got in my sock drawer! And what’s a bong anyway?”

When was the last time you used the I Didn’t Know defense; or if you’re a parent, when was the last time you were confronted with the I Didn’t Know defense? Did it work? Not so much, right? But it often feels lately as though the soul of the IDK defense is gaining social clout as some of the darker realities of Web 2.0 collide with some perfectly good laws written in the era of Web 1.0. As such, does the technology we expect to provide transparency simultaneously diminish the value of accountability?

Provisions in U.S. laws such as the Digital Millennium Copyright Act of 1998 and the Communications Decency Act of 1996 provide important safe harbors that protect site owners against civil litigation* for actions performed by third parties while using their sites. But when a site is a large enterprise with millions of pages and tens of millions of users around the world, and if it is ad-supported so that all traffic has a profit motive, the safe harbor becomes a rather complicated point of contention between site owners and any party harmed by activity on a site.  The scale of the site, the number of users, the volume of UGC (user generated content), and the safe harbor provisions all give a site owner considerable leverage when applying the IDK defense, even in instances where we might reasonably intuit that the owner knows precisely what’s happening on his site and more or less what certain activity is worth to him financially.  While safe harbors are vital protections — I wouldn’t want to publish without them — the practical reality is that an owner can theoretically have it both ways;  he can profit from illicit traffic by doing nothing to stop it even if he knows it’s there, and also claim to be “shocked gambling is going on in his establishment” if he is named in a suit or indictment.

The fundamental argument of ignorance is consistently the basis of defense for site owners ranging from piracy sites to Google to salacious gossip sites to the now-busted dark web drug-trafficking site Silk Road, whose alleged operator is on trial in New York. Prosecutors are charging founder Ross Ulbricht with being both mastermind and manager of a $1.2 billion criminal enterprise, and Ulbricht’s primary defenses appear to rest on I didn’t know and It wasn’t me, meaning that he denies being the man behind the avatar Dread Pirate Roberts, known to be the operator of the site. I won’t presume to comment on the particulars of the case other than to guess that with an alleged $80 million in Ulbricht’s account at the time of his arrest, if he’s not the guy, he’s probably got some ‘splainin to do.

But it was actually the comments I read below one story about the Silk Road trial that spawned this post. Unconcerned with the gravity of unchecked criminal activity occurring on that anonymous marketplace — activity that reportedly included murder-for-hire schemes — the comment that got my attention was one which stated that the only reason Silk Road was shut down and Ulbricht indicted is that the government didn’t like that it was a market over which it had no control. Maybe this is the rant of one naive kid who represents a tiny population of naive kids, but I have to wonder because the spirit of that comment is just a slight variation on the themes of the Web as Wild West or the entrepreneurial zeitgeist of the industry with its imperative disrupt everything. It all smacks just a bit of embracing anarchy, which is often confused with freedom, though it is in fact the fertile ground of feudalism or totalitarianism.

“The mob is the mother of tyrants.” – Diogenes

Again, I strongly believe safe harbors are essential, and I am not qualified to suggest any workable revisions, but I do think the larger notion of accountability may be losing value the more we live in a hybrid society between the real and the digital. As we connect to one another, we also diffuse responsibility for certain actions, which has the opposite of the intended effect of building those connections, because that diffusion actually places greater distance between our actions and those who may be harmed by them. Because the Web spreads responsibility for actions across thousands or millions of people, it often feeds the exact opposite of the behaviors presumed to manifest in a self-governed environment.

We saw this with Reddit’s initial refusal and then reluctant agreement to shut down threads devoted to exploiting leaked nude photos of celebrities. The apparent logic among Reddit’s management and users was that because they did not steal the photos and the photos are now “out there,” nobody is really responsible for their distribution or exploitation; and since the images are tied to a news story, the photos are kinda like freedom of the press, right? In a non-web context, like TV news, we’d probably say that showing the photos would be a non-journalistic exploitation of these individuals in order to drive ratings. I’m not saying TV never does this, only that we seem to recognize it for what it is via that medium. Yet, when the medium is a web platform like the boards of Reddit, and the editorial decision to “broadcast” certain material is crowdsourced, somehow the moral assessment of that exploitative decision is skewed because the mob is now responsible, which means nobody is responsible.

And one question we should ask is what happens when illegal or harmful activities become more automated, when accountability is even further removed from individuals to whom laws and judgments may apply? Think about it. Illegal or tortious actions can be committed by bots the same way junk email is delivered. Or maybe you go on vacation for a few days only to find out that your “smart devices” ordered up a few bottles of oxycontin, some assault rifles, and a fake ID. Sound absurd? Maybe, but . . .

Check out this story about a pair of Swiss programmers who created an art installation called The Darknet: From Memes to Onionland, which offered a display of items that were purchased by a bot shopping autonomously on a dark web marketplace akin to Silk Road. The concept was to give the bot a weekly allowance, see what it purchased of its own accord, and then display the items in a gallery setting. Interpret the statement as you will. I would personally defend the actions of these programmers as an artistic expression, although the article cited does raise interesting questions as to who might be responsible for the illegal items such as ecstasy and a counterfeit passport that were purchased by the bot. In fact, the artist/programmers stated that they take full responsibility for this contraband, which is refreshing, and I certainly don’t think they should face any criminal penalties for possession. But their experiment suggests to me that even before we answer some of the tricky challenges posed by safe harbor provisions, the IDK defense is about to gain a new phrase: “Wasn’t me. The bot did it!”

*Changed from original publication, which erroneously referred to liability and criminal activity.  Thanks to a friend for correcting the mistake.