Google Books & The Semantic Maze of Fair Use


This week the Supreme Court declined to consider the Authors Guild v Google case, letting stand the Second Circuit ruling that Google’s use of scanned published works for its search tool Google Books constitutes a fair use.  Various pundits and advocates have hailed this as a victory for the fair use principle.  In fact, I saw a headline the other day on Facebook that began with the words “Fair Use Wins …”, and although the decision is unquestionably a win for Google, the fair use principle actually remains mired in a semantic confusion about which the high court might at least have provided some clarity.  It’s all about the word transformativeness.

The fair use doctrine was added to the Copyright Law as part of the 1976 Act, and its original intent was to protect various types of expression—commentary, parody, education, artistic remixes, reportage, etc.—that by necessity make limited and conditional uses of copyrighted works.  I’ve written longer posts about fair use doctrine in general and won’t repeat all that here, but readers will remember that there are four interrelated factors to be considered* in assessing whether a use constitutes a fair use.  But in 1994, in a landmark Supreme Court case called Campbell v Acuff-Rose Music, the fair use doctrine grew a new appendage called “transformativeness”—one that, in the age of the internet, has not only become something of a fifth factor that seems to override the other four but has also never been clearly defined as a term of art in legal practice.

As I continue to learn from my attorney friends, some of the words we use in everyday language become terms of art in the legal world, which generally means that court rulings have shaped, narrowed, or expanded the dictionary definition of key terms.  For instance, based on the current ruling by a federal court, the word articles can only mean “physical objects” with regard to the International Trade Commission’s authority to prohibit the importation of illegal goods.  So, if Congress wants to grant that body the authority to restrict the importation of digital data for illegal purposes, they’re probably going to have to rewrite the law.  (More about that another time, perhaps.)

The concept of “transformativeness” in fair use parlance was introduced by Judge Pierre Leval in his paper “Toward a Fair Use Standard,” published in the Harvard Law Review in 1990, and coincidentally it was Leval who wrote the Second Circuit’s opinion in Authors Guild v Google.  But even though the “father of transformativeness” himself has ruled in this case, there is still much confusion about the term and what it means in a fair use analysis. As Thomas Sydnor of the Center for Internet, Communications and Technology Policy at the American Enterprise Institute writes about the situation:

“As cases applying this judge-made “transformativeness”-based approach to fair use accumulate, that term becomes increasingly incoherent, inconsistent, and counterintuitive. Collectively, its incoherence(s) now threaten to turn what was once a productively flexible multi-factor balancing test into little more than a perfunctory recitation of factors ending in judicial ipse dixit – “because I said so.” Under such circumstances, rule of law cannot persist.”

Sydnor further points out that the word transform already exists in the 1976 Copyright Act in reference to the preparation of “derivative works,” which is another term of art to describe works such as spin-offs or adaptations into other media. These rights belong exclusively to the copyright owner of the original work and should not be confused with the more casual way we might use the word derivative to describe, or even criticize, a work that is mimicking some other work.  For instance, the above-mentioned Campbell case involves a work of parody that we might describe in common language as derivative, but not so in the context of copyright law.

Campbell v Acuff-Rose Music involved a new, expressive work, specifically 2 Live Crew’s raunchy parody of the song “Oh, Pretty Woman” co-written and originally performed by Roy Orbison.  The court held in Campbell that “the more transformative the new work, the less will be the significance of other factors.”  In this case, the court is referring to the extent to which 2 Live Crew “transformed” the original song to make a new song.  By contrast, though, Google does not “transform” any of the original works to create new expressions but instead uses the contents of the works to create a new search service called Google Books.

So, with these two rulings, we are looking at two significantly distinct definitions of the word transformativeness.  The first refers to modification of an expressive work in order to make a new expressive work.  The second implicitly refers to transformation of the external world (society) by the introduction of some new capacity (i.e. function) it did not have before.  This is particularly relevant because the language used by SCOTUS, asserting that “transformativeness” should “lessen the significance of the other factors,” can only rationally be applied—if the spirit of fair use doctrine is to be kept intact—to the first definition, in which an original work is “transformed” to create a new, expressive work.  In the second usage of the word, in which the external world is assumed to be transformed by some new functional use, “transformativeness” becomes too heavily weighted against the other factors, giving (for instance) a giant, wealthy service provider extraordinary latitude to define just about anything it does as socially “transformative.”

If the courts are going to apply this second definition of “transformativeness,” then it seems the consideration ought not to carry any more weight than the other factors, because the second definition provides a basis for large-scale, corporate-funded uses of millions of works in a way that the first definition does not.  In other words, Google Books may be deemed a fair use in the end, but it is not sensible to apply the Campbell standard of “transformativeness” in reaching that conclusion.  As it stands, the courts appear to be giving the same weight to “transformativeness” while using two very different definitions of the word.

Semantically speaking, I would argue that transformative is not exactly the right word to use when one specifically wants to describe some measure of modification to an existing thing like a creative expression.  The term is problematic because it invites exactly the confusion we now have in the courts—transformative more properly describes the effects of an invention or expression on the external world (e.g. electricity was transformative in that it made modern society possible). While it would not be wrong in common parlance to describe, for instance, Jeff Buckley’s rendition of Leonard Cohen’s “Hallelujah” as “transformative,” even this usage would generally tend to convey that both song and listener are in some way transformed.  But in law, this is too vague.  This is why attorneys refer to a term of art: a definition established within the language of the law that may or may not conform to everyday usage.  Sydnor points out that Leval himself provides little guidance in this regard when he quotes the judge thus:

“The word “transformative” cannot be taken too literally as a sufficient key to understanding the elements of fair use. It is rather a suggestive symbol for a complex thought….”

 “[T]he word “transformative,” if interpreted too broadly, can also seem to authorize copying that should fall within the scope of an author’s derivative rights. Attempts to find a circumspect shorthand for a complex concept are best understood as suggestive of a general direction, rather than as definitive descriptions.”

Right. I’m no legal scholar, but I think the concept “transformative” is a troublemaker.

Because the precedent SCOTUS ruling in Campbell is based on the use of “transformativeness” to describe the modification of an expressive work, it would make sense to settle upon this definition and to seek another term for considering functional uses akin to Google Books. As Copyright Alliance CEO Keith Kupferschmid writes in a post on the organization’s website:

“The fair use doctrine is an equitable doctrine, but in functional use cases it hasn’t worked that way because the transformative use test is ill equipped to effectively balance the competing interests at stake in these cases.  Fair use analysis should take into account not only the interests of owners and users but also the underlying policy objectives of the copyright law.  To account for these factors in a reasonable and balanced way, it is time for the courts to begin using a functional use test.”

Unfortunately for rights holders, the confusion about “transformativeness” that leaks into general consciousness results in a casual logic, which assumes that simply changing the context of a work, like placing a photograph on one’s Facebook page, is “transformative” enough to make a use fair.  Google Books is a misstep in that direction, and if this becomes the application of fair use, then that’s the ballgame.  There are no copyrights left. I can take your songs or images, put them on this blog, call it “transformative”, and get away with it.  That may be an attractive proposal to the internet industry, but it is far from the original intent of fair use doctrine in the copyright law, which was to protect expression, and it would have disastrous effects on the professional creative industry as we know it.


*Changed from original publication, which stated that the factors are considered by a three-judge panel.  As pointed out by Anonymous commenter, this is only true in an appellate court. A mistake I made in haste owing to the fact that many famous fair use cases are famous because they’ve gone to higher courts.

Reports of DMCA Abuse Likely Exaggerated

In the last week of March, you might have seen a headline or two announcing that 30% of DMCA takedown requests are questionable.  And since we don’t always read beyond headlines these days, the declarations happened to be conveniently timed for the internet industry as the April 1 deadline approached for submitting public comments to the Copyright Office regarding potential revision of Section 512 of the DMCA.  This section of the law contains the provisions for rights holders to request takedowns of infringing uses of their works online; the provisions for restoring material removed due to error on the notice sender’s part; and the conditions by which online service providers (OSPs) may be shielded from liability for infringements committed by their users.

The eye-catching 30% number came from a new study entitled Notice and Takedown in Everyday Practice conducted by researchers at Berkeley and Columbia; and the handful of articles I saw provided little insight into the contents of the 160-page report, which I finally had a chance to review.  The authors, Jennifer M. Urban, Joe Karaganis, and Brianna L. Schofield, cite both qualitative and quantitative data from respondent rights holders and service providers; and the big story that their report produced—the one that will stick in people’s minds—is that rights holders and OSPs have increasingly adopted automated systems (bots) to process and analyze DMCA notices, which naturally leads to a higher error rate.  Thus the narrative that will be repeated is one in which major rights holders are using tools that cannot help but chill expression through error, especially when bots can’t do things like account for fair use.  But this isn’t exactly what the report tells us, and the authors themselves acknowledge that rights holders have only increased their use of automated notice sending in response to unabated growth in large-scale online infringement.

Having reviewed the report, my big-picture observations are as follows: a) it does not justify headlines suggesting that 30% of all DMCA takedown requests are “questionable”; and b) it especially does not support the larger bias that the types of errors it identifies are tantamount to chilling expression online.  It should also be noted that the authors do acknowledge that the majority of DMCA notices, the supposed 70% that are not flawed, are predominantly filed on behalf of major entertainment industry corporations targeting the “most obvious infringing sites.”  This does not mean errors don’t exist among these notices, but people should not read the 30% number and jump to the typical conclusion that it’s all that damn MPAA’s fault. (In fact, the MPAA provided no data for this study.)  Instead, the report seems broadly to identify some predictable inconsistencies among third-party rights enforcement organizations (REOs), which file automated notices on behalf of rights holders of varying sizes.  While it is of course desirable for all parties that REOs achieve the greatest possible accuracy and maintain best practices, including human oversight, let’s look at some of the “questionable” notices identified by the quantitative section of the report.

The study surveyed just over 108 million takedown requests archived in the Lumen (formerly Chilling Effects) database, and the authors state that 99.8% of these notices were sent to Google Search, which automatically implies a data set different from the takedown scenario most critics tend to cite (e.g. a user-generated work appearing on a platform like YouTube). The quantitative section states that 15.4% of the request notices err because the Alleged Infringing Material (AIM) does not match the Alleged Infringed Work (AIW). In some cases, keyword searches matched material that shared like terms with the wrong works (e.g. House of Usher confused with the artist Usher), while a few other examples of mismatch are a little harder to fathom.
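To see how that kind of mismatch happens, here is a minimal, purely hypothetical sketch of the naive keyword matching an automated enforcement bot might use; the matching rule and titles are my own illustration, not drawn from the report:

```python
# Illustrative only: a crude substring filter of the sort that could
# underlie an automated takedown bot. Real enforcement systems are
# more sophisticated, but the failure mode is the same in kind.

def naive_match(protected_term: str, page_title: str) -> bool:
    # Flag any page whose title contains the protected term, ignoring case.
    return protected_term.lower() in page_title.lower()

# A bot protecting the artist "Usher" would wrongly flag Poe's story:
assert naive_match("Usher", "The Fall of the House of Usher")  # false positive
assert naive_match("Usher", "Usher - Yeah! (unauthorized upload)")  # true positive
```

The false positive is exactly the AIM/AIW mismatch the report describes: the notice targets real material, but not the work the sender meant to protect.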

Regardless, while this type of flawed notice may represent inefficiency and waste for the rights holders, it does not get anywhere near the concerns users might have about stifling expression online.  This is because even the errors are exclusively targeting obvious infringement by criminal websites, and the report seems to bear this out.  Even if a percentage of notices contain these types of errors but are sent to links targeting sites that host 99% infringing material, each notice is still targeting an infringing link.  If an REO sends a takedown for Infringing File A when it ought to have sent one for Infringing File B, this may be an indication that the REO needs to improve its game, but it is not a mistake that affects anyone’s expression in any context whatsoever. It’s also not the kind of mistake that tells us much about DMCA beyond the fact that rights holders have to send out far too many notices against a constant blitz of infringements.  The outnumbered zombie-fighter may be less accurate with a shotgun, but if everything he hits is a zombie, no harm no foul.

So, assuming I’m reading the data correctly, that’s more than half the 30% of “questionable” notices accounted for, since the 30% is actually rounded up from 28.4%.  So, are mistakes being made? Of course. Are all, or even most, of these mistakes affecting anyone other than rather large rights holders and really large OSPs? It doesn’t look like it.  And let me pause in this regard to remind readers that when Congress passed the DMCA in 1998, it was their expectation that OSPs would cooperate with the major rights holders to develop Standard Technical Measures to address online infringement while protecting these platforms from liability.  The OSPs continue to enjoy that protection while rights holders are still waiting for the cooperation on the infringement thing.

As mentioned, one of the tempting bullet points to be highlighted by a few reporters after the Berkeley/Columbia study went public is that, of course, bots cannot adequately analyze fair use.  This is generally true and could theoretically pose a threat to expression online, but it’s hard to tell what we actually learn on this matter from the study.  The authors state that 7.3% of the notices reviewed were flagged as “questionable” due to “characteristics that weigh favorably toward fair use.”  This does not mean, however, that nearly 8 million notices were analyzed as possible fair uses. That would be impossible–and really boring–and the report clearly states that this was not done.  To arrive at a manageable data set the report states the following:

“Sampling from and coding a pool of 108 million takedown requests required building a custom database and “coding engine” that allowed us to enter and query inputs about any one takedown request. These tools allowed in-depth investigation of the notices and their component parts by combining available structured data from the form-based submissions with manual coding of characteristics of the sender, target, and claim. We also designed a customized randomization function that supports both sampling across the entire dataset and building randomized “tranches” of more targeted subsets while maintaining overall randomness.” 

The percentage of “questionable” notices is based on a random sampling of 1826 notices that were manually reviewed, and I leave it to experts in copyright law and/or statistical analysis to comment on the methodology. *[see note below]* With regard to fair use, the report states, “Flagged requests predominantly targeted such potential fair uses as mashups, remixes, or covers, and/or a link to a search results page that included mashups, remixes, and/or covers.” It also flagged ringtones and cases in which the “AIM used only a small portion of the AIW” or uses in which the AIM appeared to be made for “educational purposes.”
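For perspective, the headline numbers follow from simple arithmetic. A back-of-envelope sketch (my own calculation, treating the 1826 reviewed notices as a simple random sample, which the report’s custom “tranche” design only approximates):

```python
import math

TOTAL_REQUESTS = 108_000_000   # approximate pool surveyed by the study
SAMPLE_SIZE = 1826             # notices manually reviewed
P_QUESTIONABLE = 0.284         # share flagged "questionable" (rounded up to 30%)
P_FAIR_USE = 0.073             # share flagged for fair-use-like characteristics

# Extrapolating the fair-use flag rate to the whole pool yields the
# "nearly 8 million" figure discussed below:
flagged_estimate = P_FAIR_USE * TOTAL_REQUESTS   # ~7.9 million

# Rough 95% margin of error for the 28.4% estimate under the
# simple-random-sample assumption:
se = math.sqrt(P_QUESTIONABLE * (1 - P_QUESTIONABLE) / SAMPLE_SIZE)
margin = 1.96 * se   # roughly plus or minus 2 percentage points
```

In other words, the sample is large enough that the 28.4% figure is statistically stable; the real questions are about what “questionable” means, not the counting.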

Because no single factor is dispositive in a fair use analysis—and none of the criteria identified by the report is automatically a fair use—what the study presents is nearly 8 million notices that could be candidates for a proper fair use analysis but which might not yield so much as a single fair use defense that would hold up in court. If that seems unlikely, keep in mind that 8 million is a tiny number when we’re talking about the internet. It’s important to maintain perspective when these kinds of reports generate buzz that we’re seeing a trend toward “censorship” in a universe that comprises trillions of daily expressions, including millions of infringements that for various reasons do not even trigger a DMCA takedown request.  Are there fair uses taken down?  It would be absurd to expect otherwise.  But neither this report nor any prior study or testimony of which I am aware demonstrates that this problem is widespread.  And as I pointed out in detail in this post, the user of a work online has the final say (absent litigation) by means of the counter notice procedure in the DMCA.

The Berkeley/Columbia report notes a relatively low rate of counter notice filings, suggesting that users either don’t know they have a right to make fair uses of works or are afraid to assert that right via counter notice because the rights holder might be a big media company with big attorneys wielding big statutory penalties.  This assessment comes entirely from the qualitative section of the report, which comprises interviews with (mostly anonymized) respondent OSPs and rights holders.  The report does not include interviews with users and it does not appear to consider the possibility that the low rate of counter notices might correspond with the high rate of indefensible infringements.

The authors state, “In one OSP’s view, the prospect of sending users up against media company attorneys backed by statutory copyright penalties ‘eviscerated the whole idea of counter notice.’”  But including this statement from an unnamed OSP representative contradicts other anecdotal evidence published in the report, like this observation by the authors: “Several respondents said that the most consistent predictor of a low-quality notice was whether it came from a first-time, one-off, or low-volume sender.” In other words, the most likely senders of “questionable” notices seem to be parties other than the big media companies with their scary attorneys, including entities that have no business using DMCA at all because copyright infringement is not the issue.

Based on conversations I have had with pro-copyright experts, the report is fair in suggesting that the language in the DMCA, which contains words like “under penalty of perjury,” can frighten people away from using counter notices, particularly if a takedown request comes from even a mid-size business and the recipient is an individual. In these cases, it is reasonable to imagine the target of a notice might be apprehensive about asserting his/her right to use a counter notice without consulting legal counsel.  This is a valid point for consideration, and surely, well-intended individuals making creative or expressive uses of works should not be frightened into silence by virtue of their financial status.  But it is important to maintain perspective with regard to which segment of the market we’re looking at and what type of players are involved in a potential conflict.  In many cases cited by critics of DMCA takedown procedures, the purposely abusive notices tend to be anomalies: they often occur in foreign markets with weaker civil liberties than ours, or they are remedied without litigation.

Meanwhile, individual rights holders of limited financial means face their own apprehensions and challenges in asserting their right to protect their works. As rights holders of all sizes have demonstrated repeatedly—and this report even addresses the problem—the ability for multiple, random users to file counter notices and restore clearly infringing material—and for OSPs to monetize those uses with impunity—puts rights holders at a tremendous disadvantage. It should also be recognized that none of these uses (e.g. a whole TV show or unlicensed song uploaded to YouTube) could rationally be defined as UGC (User Generated Content) when the uploaders have not generated anything at all. Hence, even the original intent of DMCA is not being fulfilled when the safe harbor shield continues to sustain these types of infringements.

It would take many more pages to fully delve into the details of the Berkeley/Columbia report, and the authors do fairly cite several challenges faced by rights holders in applying DMCA. Although the study is partly funded by Google, that alone does not disqualify its contents for me.  I cite reports funded by MPAA and other rights holding entities and think a study should stand or fall on its own merits. This one reveals some valuable insight; but it does not seem to adequately support those big headlines about DMCA abuse, which will surely be repeated in comment threads, blogs, and future articles.


*NOTE:  This has been altered from original publication based on comments (see below) from one of the report’s authors, Jennifer Urban. Originally, I stated that the team had used an algorithm to identify notices that may implicate fair use, and this was an error on my part.

Paywalls, vinyl, and other dead issues.

It’s been a longstanding bias of mine that the generation we call digital natives—the kids who’ve grown up practically hard-wired to the network—will steadily gravitate toward classic, analog, and tangible media and experiences, not merely as a faddish expression of hipsterism, but as a natural result of maturing tastes and dwindling leisure time.  One of the first posts I wrote for this blog, What I’d tell my own kids about piracy. Why scarcity is a good thing. made a case for the value of limiting one’s choices rather than indulging in a kind of media gluttony implicit in the presumed need to seek out illegal channels as though the legal ones had nothing to offer.  People shared that post a fair bit, homing in on the assertion that whatever is worth your time is also worth your money.

We are, of course, seeing some trends toward “old” experiences, like a renewed interest in vinyl records, which will not likely replace streaming and digital downloads but may indicate that fans are discovering (or rediscovering) that there can be more to enjoying recorded music than just hearing it.  Even the process of browsing in a store for LPs is one that I always considered a satisfying sensory experience prior to the invention of the CD: like turning pages in a large picture book, with each album displaying about 160 square inches of cover art in contrast to the squinty 25 afforded by a CD jewel case.  I always liked that flipping through albums was a mostly silent activity other than faint woofs of air as one leaned each record forward. By contrast, the grating clack-clack of sorting through small plastic cases always sounded and felt to me more like work.

Once home with a new vinyl album, one must perform a few steps in collaboration with a mechanical object, motions that beg a gentle touch and imbue the preparation with an almost ritualistic quality, complementing the sense of time set aside to listen actively to new music.  For all the convenience of digital access, it doesn’t always satisfy the human need to experience life beyond the perfunctory.  Fast food is convenient and cheap, too; but there’s a reason it doesn’t replace fine dining, just as there is a reason a fine meal assumes a certain presentation and atmosphere to complement the food.  And for experiences—yes, even content—that are truly desirable, people are willing to pay when that is the only way to have them.

Certainly, The New Yorker magazine is fine dining as publications go, and it turns out that its readers are very much willing to pay for it—even online.  According to Jeffrey A. Trachtenberg at the Wall Street Journal, when The New Yorker began experimenting with a paywall that would go up once a visitor had accessed a limit of six free stories in a single month, readership increased rather than declined.  “Instead of deterring readers, the number of unique visitors rose to 9.7 million in October 2015 from 5.5 million a year earlier, the month before the paywall was implemented, …” reports Trachtenberg.

I can’t say I’m surprised that, despite the conventional free-culture “wisdom” that’s been shouted at the market for nearly two decades, we find evidence that consumers are not only capable of recognizing the qualities they want in “content” but are even willing to pay for it.  Granted, the readership of The New Yorker is a devout audience that has been cultivated for more than a century, and it is currently the only property in the Condé Nast portfolio to experiment with a paywall so far. But for the same reasons a new vinyl store opened in my local mall while other retail is shuttering, the market may yet prove that there is no one new, digital model that entirely disrupts and replaces all that has come before.  Just maybe the producers and consumers of high-value journalism, music, film, TV, etc. will be best served by various combinations of new and old that are a little more complex than just putting stuff out there, signing up for a digital ad service account, and selling merch on the side.

In contrast to The New Yorker, the equally venerable publication The Atlantic was the first to “go digital”, according to this 2011 article by Lauren Indvik for Mashable.  In January 2008, The Atlantic dropped its paywall and developed a holistic, digital strategy for both publication and advertising.  As Indvik describes, The Atlantic’s history as a platform for editorial made it a natural for the web, but the road to profitability involved a comprehensive and creative strategy to develop advertising “experiences” for premium brands across print, digital, events, and mobile.  “Digital has proved tough terrain for many traditional advertisers, who have been forced to compete against highly targeted search and display networks, such as Google’s,” writes Indvik.

Of course, the success of both The New Yorker and The Atlantic is entirely dependent upon the quality of the work on the page, even if the two entities commoditize distribution through different models.  And the only way to maintain that quality is either a sustainable high-value ad strategy or direct sales to consumers, or some combination of the two.  This was true before the free-culture rhetoric disrupted common sense, and it’s still true.

As New Yorker editor David Remnick says in the WSJ article, “Information doesn’t want to be free, it wants to get around freely.”  Or, as may be inferred from the renewed interest in the vinyl experience, maybe the creative and informative experiences consumers value cannot be described so homogeneously as “information” the way many tech-utopians chose to interpret part of Stewart Brand’s famous quote in order to justify devaluation of the work itself. Maybe consumers don’t demand that everything be free, just that it be good.


In a related story (as reported in The New Yorker of course), Kodak drew considerable crowds at this year’s Consumer Electronics Show in Las Vegas with the introduction of a contemporary version of the Super 8 camera.  Amid a bevy of entrepreneurs offering “smart” devices that consumers may prefer to leave “dumb”, Kodak’s debut of a new way to make old home movies on celluloid is an unexpected move that may actually work. Read the full story here.