DMCA Review Begins. Watch the Red Flag.

Early last week, the Senate Judiciary Committee held the first in what will be a year-long series of hearings (roughly one per month) to review the Digital Millennium Copyright Act.  Almost as old as the publicly-available internet itself, the 1998 DMCA expressed the best efforts of Congress to predict how the digital market might evolve and, therefore, to strike a balance between the interests of internet service providers (ISPs) and copyright owners.

Over the intervening twenty-two years, much—MUCH—has been written, debated, shouted, flung, wailed, opined, and scorned about the DMCA, specifically Titles I and II of the five-title statute.  If we ask the tech-centric/copyright-skeptics, they are likely to say that Title I (§1201) is a disaster and that Title II (§512) is working just fine; while the creator/copyright proponent will tell us exactly the opposite. I cannot condense the number of issues raised in this first hearing alone into a single post—especially when §1201 and §512 address very different legal regimes—and it is far too early in the review process to respond to any specific proposals being made. 

What I will reiterate in this post is that the greatest concern to creators of every size is the conditional liability shield (“safe harbor”) provided to web platforms by §512.  It is the foundation of the oft-described “whack-a-mole” problem whereby the independent author attempts to remove infringing uses of her works one-by-one, only to have them reappear on the same platform(s) faster than she can prepare new notices.  (And “whack-a-mole” can be just as big a problem for a small business like an apparel maker as it is for a traditional artist like a musician.)  

In response to this futile battle with online infringement, authors often give up enforcement via the DMCA takedown process (resigned to donating even more revenue to billion-dollar corporations) while they ask as a community why the major platforms in particular cannot do a better job of preventing protected works from being chronically re-uploaded without license.  This second question is where we step into a BIG policy kerfuffle with regard to §512, and I imagine it is a topic about which we are going to hear a lot of ideas and a lot of noise.  

This week’s hearing hosted two panels of witnesses, the first of which provided an overview as to how the DMCA came to be; while the second panel, comprising IP academics, provided some insight as to where the DMCA debate may be heading.  In the interest of keeping this post containable, I will focus on the testimonies of Professor Sandra Aistars of the George Mason School of Law and Professor Rebecca Tushnet of Harvard Law School, and the subject of “red flag” knowledge under the DMCA.  

What is “Red Flag” Knowledge?

Unfortunately, you will get different answers depending on whom you ask, including a court split on the matter if you ask either the Second or the Ninth Circuit Court of Appeals.  But in everyday life, “red flag” knowledge is a reasonable, common-sense inference that one can draw from a modest amount of empirical evidence and experience.  If you enter the house to find trash strewn across the floor and a chagrined puppy in the corner, you will not need training in forensic science to have “red flag” knowledge that either the dog has committed a misdemeanor, or he has been artfully framed by the cat.  

That roughly describes the degree of analysis Congress intended ISPs to perform when encountering evidence of copyright infringement on their platforms.  As Professor Aistars noted, “Although Congress did not obligate service providers to actively seek out infringements, it did require them to act expeditiously to remove infringing materials once they have knowledge or awareness of infringing activity on their networks.” (See companion Appendix describing basic ISP Conditions.)

For example, let us imagine that the users of a web platform we’ll call Vimeo are making videos using some famous music we’ll call Beatles songs.  Any ordinary observer can reasonably assume that these users probably did not license these sound recordings; yet in the case Capitol Records v. Vimeo,* the Second Circuit held, on the issue of “red flag” knowledge, that the platform’s operators would have needed either legal or music-industry expertise in order to discover infringement.

Keeping in mind that voluntary removal of material based on “red flag” knowledge of infringement is a condition of an ISP’s “safe harbor,” decisions like Vimeo do more than erase this part of the statute—they exacerbate a culture of infringement through court-sanctioned willful blindness.  And as Aistars added in her testimony, “Pointedly, this occurred in a case where discovery had revealed emails from managers to employees winkingly encouraging infringement.”  Thus, Aistars is among those who would advocate clarifying the meaning of “red flag” to restore the intent of §512.

The Vimeo emails Aistars mentions are typical of the shoulder shrugs and middle fingers creators are used to receiving from many platform operators, and application of the DMCA to date has unquestionably fostered cultural attitudes anathema to the kind of cooperation between ISPs and rightsholders Congress specifically intended to promote two decades ago.  Further, unintended endorsement of this culture among site operators may be exacerbating a persistent misunderstanding among individual and commercial users that the internet is a realm of automatic immunity.  As I have described in several posts, this misconception can cause unnecessary trouble for both creators and users of protected works.

Responses to Fixing “Red Flag” 

Anticipating the likelihood that, if there is to be any revision to §512 at all, “red flag” will be a major point of debate, Professor Tushnet warned against what she and others see as throwing out the proverbial baby with the bathwater.  “If there is one message I would ask the members of the Committee to take away today,” she stated in her opening testimony, “it is that most beneficiaries of §512 are not Google or Facebook.”  Tushnet cautions that if we were to amend §512 solely as a response to the challenges creators face on very large, commercial platforms like YouTube, we risk simultaneously putting compliant, smaller platforms out of operation and facilitating even greater monopolization by the largest entities.

As a statistical matter, Tushnet is making a “few bad apples” argument, except for the fact that some of the baddest apples in the bunch happen to be the most powerful, wealthiest internet companies in the world.  So, even if we take her premise and data at face value (i.e. that millions of compliant sites rely on §512 to exist), this does not recommend ignoring the catalog of evidence that application of the DMCA has promoted willful blindness among the operators of major ISPs.  Simply put, if twenty million sites operate without harm while one site does harm to twenty million creators, we still have a problem if the law shields that one site from liability.  So, the status quo cannot be the final answer.  

As a practical consideration, Tushnet’s argument is based on the assumption that a more clearly defined restoration of the intent of “red flag” knowledge can only be implemented by technological measures, which only the largest ISPs can afford.  Hence, her argument that this will result in entrenching, for instance, YouTube’s monopoly position, notably glossing over the fact that there are other forces entrenching online monopolies.  While this technology-investment argument is worthy of discussion, the aforementioned Vimeo case is just one example in which the principle of “red flag” knowledge was obliterated in a purely human paradigm (i.e. human managers choosing not to see what was right in front of them).

Post Hoc Ergo Propter Hoc? (Or, Not All Good Things Come From §512)

As Tushnet testified, her own Organization for Transformative Works site hosts over “four-million works” yielding 1.2 billion page views per month, while the site receives takedown notices at a rate of less than one per month, most of which are invalid.  Assuming these data are correct, the site to which she refers seems barely relevant as an example. It is a large fanfic platform with what appears to be a vast amount of material—mainly short works of written text—that is highly unlikely to infringe.  No sound recordings.  No photographs.  No film clips.  At most, some fanfic writer could maybe—and I mean maybe—run afoul of a derivative works right. 

From a cursory review of OTW, it is not at all evident that adopting a clearer, statutory definition of “red flag” (in order to hold the majors accountable) would force a site like this one to invest in prohibitively expensive technology in order to remain compliant.  If the platform is indeed receiving takedown notices at a rate of less than one valid notice per month, this is most likely evidence that the site hosts little to no infringing material—and that when notices are received, human review is sufficient to the task.  Further, the fact that the site hosts “fandoms” for a long list of works owned by major motion picture studios indicates that infringement must be very low, if not near zero, if it has not invited the attention of an industry with the resources to send notices in volume.  

As is often the case, defenders of the status quo (the same is true for Section 230 of the CDA) will say “look at all the benefits this law has yielded” and then point to examples that, under scrutiny, do not necessarily rely on the liability shield so substantially as may be asserted.  In this vein, Tushnet’s testimony includes several references to all manner of good news about the creative industries—more movies, TV, music, etc. than ever before—but it would be a logical stretch to assert that, for instance, Billie Eilish’s YouTube-to-Grammy-Awards success story owes much at all to §512—let alone the collapse of the “red flag” principle. 

As Chairman Tillis noted, “this is a very wonky subject,” and that last description of mine was very wonky indeed; but DMCA review will be a devil-in-the-details story to watch.  Despite the hyperbole that will inevitably seep onto social media about these hearings, it is neither practical nor desirable for rightsholders to seek obliteration of the safe harbor altogether—that is not the goal.  But at the same time, it cannot be acceptable that a statute designed to mitigate copyright infringement and incentivize cooperation has served to reward infringement and position ISPs and rightsholders at permanent loggerheads.  


*This case is further complicated by a conflict between state and federal law over the use of sound recordings made prior to 1972, but that’s a whole other bowl of noodles. 

Photo by Robertobinetti70

Senate Hearings:  A Sea Change for Social Media Companies & Users

Yesterday afternoon the Senate Judiciary Committee held a hearing entitled:  “Extremist Content and Russian Disinformation Online:  Working with Tech to Find Solutions.” Representing the social media companies were Colin Stretch, General Counsel at Facebook; Sean Edgett, Acting General Counsel at Twitter; and Richard Salgado, Director of Law Enforcement And Information Security at Google.

The news to come out of this hearing will not compete with the blockbuster revelations produced the day before by Special Prosecutor Robert Mueller; but in the long run, it may prove to be more important.  Because regardless of who in the current administration may yet be implicated in Russia’s disinformation campaign aimed at the United States, the matters of greatest significance are that it happened, the ways in which it happened, and that it is still happening.  And it’s not all about Russia.

Some Background

Shorthand terms like “Russian hacking” do not properly describe the nature of what’s going on; and the significance of what’s going on should be understood as separate from any collusion that may or may not have existed between Russia’s agents and the Trump campaign.  In a nutshell, what the Russian-based Internet Research Agency engaged in had less to do with backing a particular candidate and far more to do with spreading mass disinformation to exacerbate divisiveness among the American electorate.  And there is no better way to achieve this disruption than by using social media platforms.

The reported estimates state that 126 million Americans were exposed to paid, targeted messaging used to spread false and emotionally-charged rumors, some of which favored one candidate or another, but all of which were designed to foment political discord and volatility. As the opening testimony of Clint Watts of the Foreign Policy Research Institute stated in Part II of these Committee hearings, “Terrorists’ social media use has been acute and violent, but now authoritarians have taken it to the next level using social media more subtly to do something far more dangerous – destroy our democracy from the inside out through information campaigns designed to pit Americans against each other.”

That theme—pitting Americans against each other—cannot be overstated in this story, and I’ll return to it shortly.

A Taste of the Hearing

Coming to terms with the negative effects the “information age” can have on democracy is a reckoning long overdue, and yesterday was the first time in my experience that representatives of Silicon Valley were compelled to stifle their utopian rhetoric and admit that their products yield unintended and poisonous consequences.  In fact, early in the hearing, Senator Sheldon Whitehouse (D-RI) directly asked the three witnesses if they were going to drop the “we’re just a neutral platform” posturing and accept that they have an active role to play in addressing the matters before the Committee.  All three answered the senator in the affirmative.

That in itself is big news.  The Committee’s unwillingness to accept the shrug of “neutrality” from these companies has implications for cyberlaw that go beyond addressing the immediate issue of foreign powers meddling in US elections.  For instance, it is worth remembering that while Mr. Salgado was promising that Google will take affirmative action and not hide behind a veil of neutrality regarding issues addressed in this hearing, parent company Alphabet’s juggernaut of lobbyists and PR outlets is presently trying to kill the anti-sex-trafficking bill SESTA on the grounds that it would weaken the neutral position of their platform.

There were a few awkward moments between the Committee and the witnesses regarding broader questions about the capabilities of the platforms.  Those of us who advocate certain legal boundaries online (like copyright enforcement) are used to the shell game in which the platforms boast about their capabilities to advertisers one moment (e.g. the ability to perform granular-level, targeted marketing) and in the next moment, state contradictorily that they cannot weed out toxic or illegal content because they “can’t police the internet.”

Among the highlights on this theme was Senator Al Franken’s (D-MN) entertaining inquiry directed at Facebook’s Mr. Stretch in which he asked how, with the company’s extraordinary computing capacity, it failed to “connect two dots” and consider that “American” political ads paid for with rubles might be a reason to doubt the nature of the advertiser.  In a related exchange with Mr. Salgado on the subject of Google’s capacity to weed out foreign-based political ads, Sen. Franken felt the response was too internal-policy focused and reminded the witness, “You know it’s illegal for any foreign money to be spent in our election process, right?”

These hearings mark the first time that I can remember any representative of the major platforms stating with so little equivocation that they can, will, and should implement steps to mitigate harmful content on their platforms.  Doubts were raised, however, by some members. Senator John Kennedy (R-LA) told Mr. Stretch pointedly that he simply doesn’t believe Facebook can effectively vet over five million ads per month; and Senator Patrick Leahy (D-VT) accused all three platforms of responding too slowly, of missing opportunities and warning signs, and of hosting toxic content that is still online right now.

Although some Committee members raised concerns about First Amendment protections—in fact, Senator Ted Cruz (R-TX) cited incidents of alleged censorship of conservative views by the platforms themselves—no Committee member of either party, and none of the three witnesses, reiterated the generalization that removing illegal or harmful content from the platforms is fundamentally incompatible with the protection of free speech.  To the contrary, there seemed to be a very clear consensus that the manner in which these tools have been—and may continue to be—manipulated by bad actors is so harmful to democracy itself that, in context, free speech becomes a weapon of self-destruction. And that brings us back to that underlying theme and the real significance of what the Russian “hackers” did:  pit Americans against each other.

A New Kind of Literacy

I opined in a recent post that a new kind of media literacy is needed for the digital age.  Because no matter what Congress can legislate, and no matter what actions the platforms may take, people themselves are going to have to be more vigilant about the content they believe to be true, let alone share.  The mistaken expectation that the internet would be a kind of turbo-booster for democratic values comes from a reasonable, if somewhat elitist, assumption.  The theory was that if people have access to information, unfiltered by the influence of manipulators and monied gatekeepers, the collective wisdom of a fundamentally benevolent society would galvanize core democratic principles.  The manipulators would be powerless in such a fact-rich environment.

These assumptions completely overlooked some fundamental realities:  1) the platforms themselves can be used by a wide range of manipulators to manufacture false information; 2) false information that jibes with preconceived bias is almost impossible to recognize as false because 3) people are driven more by emotion than by information.  The fault of the technologists, whose expertise is data, was to assume that information builds community—or at least to sell that idea.  But the truth is almost always just the opposite, even without propagandists hijacking reality.

The Opposite of Social Media

As an example of the limits of social media, I think about the story of Daryl Davis, who was featured by several news organizations shortly after the riots in Charlottesville.  Davis, a black blues musician, is responsible for over 200 men quitting the Ku Klux Klan—a journey that began, humorously enough, when a white man in a bar complimented him by saying he’d never heard “a black guy who could play piano like Jerry Lee Lewis.”  Davis’s explanation that Lewis learned everything he knew from black musicians led to a cordial conversation and the discovery that the white guy was a member of the KKK.  Thus began Davis’s decision to travel the country, meet other Klan members, and write a book about his experiences.  Along the way, many of the friends he made abandoned the organization and gave Davis their robes as penitential offerings.

Now, imagine if Davis’s first encounter with that first Klan member had been through the cold portals of social media rather than in person and through the shared experience of quintessentially American music. Add to that the trove of “information” the white guy could link to “proving” the genetic inferiority of Davis’s race. Can anyone doubt that the most likely outcome of this online exchange would be a hardening of the white guy’s racism (and perhaps a hardening of Davis’s feelings as well)?

Nothing in such an exchange would have to qualify as hate speech or any other content that would even get on the radar of the issues discussed in yesterday’s Senate hearing. It’s the kind of exchange that happens all the time—just two Americans being driven further apart by the mechanisms of a platform whose corporate mission is to “build community.”  That paradox is something the folks at Facebook et al—and those of us who use these platforms—are going to have to reconcile no matter what Congress does.


Image by alexlmx