Reports of DMCA Abuse Likely Exaggerated

In the last week of March, you might have seen a headline or two announcing that 30% of DMCA takedown requests are questionable.  And since we don’t always read beyond headlines these days, these declarations were conveniently timed for the internet industry as the April 1 deadline approached for submitting public comments to the Copyright Office regarding potential revision to Section 512 of the DMCA.  This section of the law contains the provisions for rights holders to request takedowns of infringing uses of their works online; the provisions for restoring material due to error on the notice sender’s part; and the conditions by which online service providers (OSPs) may be shielded from liability for infringements committed by their users.

The eye-catching 30% number came from a new study entitled Notice and Takedown in Everyday Practice conducted by researchers at Berkeley and Columbia; and the handful of articles I saw provided little insight into the contents of the 160-page report, which I finally had a chance to review.  The authors, Jennifer M. Urban, Joe Karaganis, and Brianna L. Schofield, cite both qualitative and quantitative data from respondent rights holders and service providers; and the big story that their report produced—the one that will stick in people’s minds—is that rights holders and OSPs have increasingly adopted automated systems (bots) to process and analyze DMCA notices, which naturally leads to a higher error rate.  Thus the narrative that will be repeated is one in which major rights holders are using tools that cannot help but chill expression through error, especially when bots can’t do things like account for fair use.  But this isn’t exactly what the report tells us, and the authors themselves acknowledge that rights holders have only increased their use of automated notice sending in response to unabated growth in large-scale online infringement.

Having reviewed the report, my big-picture observations are as follows: a) it does not justify headlines suggesting that 30% of all DMCA takedown requests are “questionable”; and b) the report especially does not support the larger bias that the types of errors it identifies are tantamount to chilling expression online.  It also should be noted that the authors do acknowledge that the majority of DMCA notices, the supposed 70% which are not flawed, are predominantly filed on behalf of major entertainment industry corporations targeting the “most obvious infringing sites.”  This does not mean errors don’t exist among these notices, but people should not read the 30% number and jump to the typical conclusion that it’s all that damn MPAA’s fault. (In fact, the MPAA provided no data for this study.)  Instead, the report seems broadly to identify some predictable inconsistencies among third-party rights enforcement organizations (REOs), which file automated notices on behalf of rights holders of varying sizes.  While it is of course desirable for all parties that REOs achieve the greatest possible accuracy and maintain best practices, including human oversight, let’s look at some of the “questionable” notices identified by the quantitative section of the report.

The study surveyed just over 108 million takedown requests filed with the Lumen (formerly Chilling Effects) database, and the authors state that 99.8% of these notices were sent to Google Search, which implies a data set different from the takedown scenario most critics tend to cite (e.g. a user-generated work appearing on a platform like YouTube). The quantitative section states that 15.4% of the request notices err because the Alleged Infringing Material (AIM) does not match the Alleged Infringed Work (AIW). In some cases, keyword searches matched material that merely shared terms with the wrong works (e.g. House of Usher confused with the artist Usher), while a few other examples of mismatch are a little harder to fathom.

Regardless, while this type of flawed notice may represent inefficiency and waste for the rights holders, it does not get anywhere near the concerns users might have about stifling expression online.  This is because even the erroneous notices exclusively target obvious infringement by criminal websites, and the report seems to bear this out.  Even if a percentage of notices contain these types of errors, when they are sent to links on sites that host 99% infringing material, each notice still targets an infringing link.  If an REO sends a takedown for Infringing File A when it ought to have sent one for Infringing File B, this may be an indication that the REO needs to improve its game, but it is not a mistake that affects anyone’s expression in any context whatsoever. It’s also not the kind of mistake that tells us much about the DMCA beyond the fact that rights holders have to send out far too many notices against a constant blitz of infringements.  The outnumbered zombie-fighter may be less accurate with a shotgun, but if everything he hits is a zombie, no harm, no foul.

So, assuming I’m reading the data correctly, that’s more than half the 30% of “questionable” notices accounted for, since the 30% is actually rounded up from 28.4%.  So, are mistakes being made? Of course. Are all, or even most, of these mistakes affecting anyone other than rather large rights holders and really large OSPs? It doesn’t look like it.  And let me pause in this regard to remind readers that when Congress passed the DMCA in 1998, it was their expectation that OSPs would cooperate with the major rights holders to develop Standard Technical Measures to address online infringement while protecting these platforms from liability.  The OSPs continue to enjoy that protection while rights holders are still waiting for the cooperation on the infringement thing.
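As a quick back-of-envelope check on that claim, here is a sketch in Python using only the figures cited in this post (108 million requests, 28.4% “questionable,” 15.4% mismatched AIM/AIW, 7.3% flagged for possible fair use); the report’s own figures and rounding may differ slightly.

```python
# Back-of-envelope check of the percentages discussed above.
# All figures are as cited in this post, not recomputed from the report itself.
total_requests = 108_000_000      # takedown requests surveyed via Lumen
questionable = 0.284              # "questionable" share, rounded up to 30% in headlines
mismatch = 0.154                  # AIM does not match AIW
fair_use_flagged = 0.073          # characteristics weighing toward fair use

# Share of the "questionable" pool explained by simple mismatch errors:
mismatch_share = mismatch / questionable
print(f"{mismatch_share:.0%} of questionable notices are mismatches")  # → 54%, i.e. more than half

# Approximate count implied by the fair-use percentage:
print(f"{total_requests * fair_use_flagged:,.0f} notices flagged for possible fair use")
```

Since 15.4 of the 28.4 percentage points are mere mismatch errors, the “more than half accounted for” arithmetic holds, and the 7.3% fair-use figure works out to the “nearly 8 million” notices discussed below.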

As mentioned, one of the tempting bullet points highlighted by a few reporters after the Berkeley/Columbia study went public is that, of course, bots cannot adequately analyze fair use.  This is generally true and could theoretically pose a threat to expression online, but it’s hard to tell what we actually learn on this matter from the study.  The authors state that 7.3% of the notices reviewed were flagged as “questionable” due to “characteristics that weigh favorably toward fair use.”  This does not mean, however, that nearly 8 million notices were analyzed as possible fair uses. That would be impossible—and really boring—and the report clearly states that this was not done.  To arrive at a manageable data set, the report states the following:

“Sampling from and coding a pool of 108 million takedown requests required building a custom database and “coding engine” that allowed us to enter and query inputs about any one takedown request. These tools allowed in-depth investigation of the notices and their component parts by combining available structured data from the form-based submissions with manual coding of characteristics of the sender, target, and claim. We also designed a customized randomization function that supports both sampling across the entire dataset and building randomized “tranches” of more targeted subsets while maintaining overall randomness.” 

The percentage of “questionable” notices is based on a random sampling of 1,826 notices that were manually reviewed, and I leave it to experts in copyright law and/or statistical analysis to comment on the methodology. *[see note below]* With regard to fair use, the report states, “Flagged requests predominantly targeted such potential fair uses as mashups, remixes, or covers, and/or a link to a search results page that included mashups, remixes, and/or covers.” It also flagged ringtones and cases in which the “AIM used only a small portion of the AIW” or uses in which the AIM appeared to be made for “educational purposes.”

Because no single factor is dispositive in a fair use analysis—and none of the criteria identified by the report is automatically a fair use—what the study presents is nearly 8 million notices that could be candidates for a proper fair use analysis but which might not provide so much as a single fair use defense that would hold up in court. If that seems unlikely, keep in mind that 8 million is a tiny number when we’re talking about the internet. It’s important to maintain perspective when these kinds of reports generate buzz that we’re seeing a trend toward “censorship” in a universe that comprises trillions of daily expressions, including millions of infringements that for various reasons do not even trigger a DMCA takedown request.  Are there fair uses taken down?  It would be absurd to expect otherwise.  But neither this report nor any prior study or testimony of which I am aware demonstrates that this problem is widespread.  And as I pointed out in detail in this post, the user of a work online has the final say (absent litigation) by means of the counter notice procedure in the DMCA.

The Berkeley/Columbia report notes a relatively low rate of counter notice filings, suggesting that users either don’t know they have a right to make fair uses of works or are afraid to assert that right via counter notice because the rights holder might be a big media company with big attorneys wielding big statutory penalties.  This assessment comes entirely from the qualitative section of the report, which comprises interviews with (mostly anonymized) respondent OSPs and rights holders.  The report does not include interviews with users and it does not appear to consider the possibility that the low rate of counter notices might correspond with the high rate of indefensible infringements.

The authors state, “In one OSP’s view, the prospect of sending users up against media company attorneys backed by statutory copyright penalties ‘eviscerated the whole idea of counter notice.’”  But including this statement from an unnamed OSP representative contradicts other anecdotal evidence published in the report, like this observation by the authors: “Several respondents said that the most consistent predictor of a low-quality notice was whether it came from a first-time, one-off, or low-volume sender.” In other words, the most likely senders of “questionable” notices seem to be parties other than the big media companies with their scary attorneys, including entities that have no business using DMCA at all because copyright infringement is not the issue.

Based on conversations I have had with pro-copyright experts, the report is fair in suggesting that the language in the DMCA, which contains words like “under penalty of perjury,” can frighten people away from using counter notices, particularly if a takedown request comes from even a mid-size business and the recipient is an individual. In these cases, it is reasonable to imagine the target of a notice might be apprehensive about asserting his/her right to use a counter notice without consulting legal counsel.  This is a valid point for consideration, and surely, well-intended individuals making creative or expressive uses of works should not be frightened into silence by virtue of their financial status.  But it is important to maintain perspective with regard to which segment of the market we’re looking at and what type of players are involved in a potential conflict.  In many cases cited by critics of DMCA takedown procedures, the purposely abusive notices tend to be anomalies: they often occur in foreign markets with weaker civil liberties than ours, or they are remedied without litigation.

Meanwhile, individual rights holders of limited financial means face their own apprehensions and challenges in asserting their right to protect their works. As rights holders of all sizes have demonstrated repeatedly—and this report even addresses the problem—the ability for multiple, random users to file counter notices and restore clearly infringing material—and for OSPs to monetize those uses with impunity—puts rights holders at a tremendous disadvantage. It should also be recognized that none of these uses (e.g. a whole TV show or unlicensed song uploaded to YouTube) could rationally be defined as UGC (User Generated Content) when the uploaders have not generated anything at all. Hence, even the original intent of DMCA is not being fulfilled when the safe harbor shield continues to sustain these types of infringements.

It would take many more pages to fully delve into the details of the Berkeley/Columbia report, and the authors do fairly cite several challenges faced by rights holders in applying DMCA. Although the study was partly funded by Google, that alone does not disqualify its contents for me.  I cite reports funded by the MPAA and other rights-holding entities and think a study should stand or fall on its own merits. This one reveals some valuable insight; but it does not seem to adequately support those big headlines about DMCA abuse, which will surely be repeated in comment threads, blogs, and future articles.


*NOTE:  This has been altered from original publication based on comments (see below) from one of the report’s authors, Jennifer Urban. Originally, I stated that the team had used an algorithm to identify notices that may implicate fair use, and this was an error on my part.

David Newhoff
David is an author, communications professional, and copyright advocate. After more than 20 years providing creative services and consulting in corporate communications, he shifted his attention to law and policy, beginning with advocacy of copyright and the value of creative professionals to America’s economy, core principles, and culture.
