The EFF Campaign Against DMCA Section 1201 Perishes in the DC Circuit


The First Amendment protects the right to read books but not the right to break into a bookstore for the purpose of reading—not even if the goal is to quote a passage from a book in a manner that would be fair use under copyright law. The hypothetical, lawful use of the book’s contents to produce protected expression does not make the law prohibiting trespassing into the store a violation of the speech right. Most reasonable people can understand this distinction, but for about 18 years, the Electronic Frontier Foundation (EFF) has tried to prove that common sense is wrong.

Ever since literary works, sound recordings, audiovisual works, etc. went digital, the concept of “digital locks” used to protect lawful access to these materials has vexed the EFF, which launched a campaign and lawsuit in 2016 to prove that the law against breaking said locks is unconstitutional. Filing suit on behalf of researcher Matthew Green and product developer Andrew Huang, the EFF has argued that Section 1201 of the Digital Millennium Copyright Act (DMCA) violates the speech right because circumvention of technological protection measures (TPMs) may sometimes be done to achieve forms of protected expression that would be defensible under the fair use exception.

TPMs generally consist of code used to enforce lawful access to digitally distributed works like eBooks or streaming services, and §1201 prohibits circumventing TPMs and/or trafficking in devices primarily designed for circumvention. By law, the Librarian of Congress (really the Copyright Office) conducts a triennial rulemaking proceeding to consider applications for, and grant exemptions to, §1201 for purposes such as research and certain educational or journalistic uses of the encrypted works. You can read posts here and here for background on the EFF’s case, but the bottom line is that the appellate court last week soundly rejected the claim that §1201’s “legitimate sweep” functions as a “speech licensing” law.

Among the court’s determinations, it held that the government’s interest in preventing “digital trespass” properly restricts a wide range of conduct that has no expressive purpose; the First Amendment does not guarantee unfettered access to expressive works; the named plaintiffs make no showing that their protected expression is being chilled; and the various hypothetical examples presented by the EFF are answered by legal forms of access to works that do not require circumvention of TPMs. More than a few of the court’s responses demonstrate why the EFF has tried inaptly to portray an anti-trespass law as a speech law. For example…

A trespass law undoubtedly affects some expressive conduct, as when political protestors trespass to stage a demonstration where it might have maximal impact. Similarly, the DMCA’s anticircumvention provision might preclude a student from circumventing technological measures to cut a high-quality clip of a copyrighted feature film to use in his class presentation. But trespassing is not “necessarily associated with speech,” because laws prohibiting trespass also “apply to strollers, loiterers, drug dealers, roller skaters, bird watchers, soccer players, and others not engaged in constitutionally protected conduct.”

As the court explains, §1201 likewise applies to a range of parties with an interest in circumvention for both lawful and unlawful purposes, but expression is not the basis on which the law operates. Going back to the bookstore, it is simply illegal to break in at all, regardless of whether the intent is to read, cite a book, or ransack the place. The fact that the vandal will face charges not attributable to the reader has no bearing on the trespass violation they both committed.

I also want to highlight the court’s response to the allegation that the §1201 rulemaking process is itself an unconstitutional prior restraint on speech as indicative of EFF’s chronic misstatements about fair use. The court writes, “An irony of appellants’ challenge to the DMCA is that the triennial rulemaking exemption scheme—which identifies in advance and immunizes categories of likely fair uses—may be less chilling of the fair uses to which it applies than the after-the-fact operation of the fair use defense itself.”

In other words, arguing a fair use defense requires litigation and uncertainty in contrast to a rule by the Librarian that a given use has been granted an exemption. The Library has granted a broad range of exemptions to §1201, and as this opinion notes, an exemption granted to a single petitioner (e.g., a documentary filmmaker or teacher) applies to all parties in that class with the same interest in circumventing TPM.

Finally, the court concludes that the rulemaking proceeding is not above judicial review—that a petitioner who believes the Library has made a content- or viewpoint-based decision may still bring a First Amendment complaint to the courts on that basis, but this does not alter the finding that the law itself withstands constitutional scrutiny. Never say never, I guess, but I predict this alleged controversy is now a settled matter—that EFF has wrung all the value it can from this campaign and will need to find a new anti-copyright windmill on which to break its lances.

The Campaign to Defend Generative AI


I have not written steadily about AI and copyright because, frankly, it’s exhausting. Not quite as exhausting as watching the state of the Republic overall, but almost as relentlessly incoherent and repetitive. For instance, Winston Cho for the Hollywood Reporter describes a PR and lobbying campaign by the tech coalition Chamber of Progress to defend the importance of generative AI (GAI). The article quotes founder and CEO Adam Kovacevich thus:  “Gen AI is a net plus for creativity overall. It’s expanding access to creative tools for more and more people and bypassing a lot of the traditional gatekeepers.”

That GAI may yield some beneficial tools for creators is plausible, but the whole “access” and “gatekeepers” rhetoric is a misguided anachronism from a group calling itself the Chamber of Progress. Perhaps “Confederacy of Tech Overlords” was too on the nose, but the generalized argument that GAI represents a “democratic” shift away from gatekeepers stands on the rubble of experiments that have already failed. I doubt there is a professional creator left who hasn’t figured out that Big Tech’s promise to liberate them from traditional gatekeepers is like a human trafficker promising his next victim a job in a foreign country. Whatever was imperfect about the old models, the new models are more exploitative and hazardous for the average creator.

More precisely, while the alleged “liberation” from older distribution channels might have seemed attractive, GAI is about production, and I am confused as to who the “gatekeepers” would be on the production side of the equation. To the extent, say, Midjourney might enable me to illustrate or paint without any drafting or painting skills, the “gatekeeper” is who exactly? Nature failing to gift me with those skills? Or if we think big, and I can make a whole motion picture without ever turning on a camera, I still fail to see who the “gatekeeper” is in the overreaching promise from the tech industry.

Despite how cutting-edge and “essential” GAI is supposed to be, Big Tech has nothing fresh to say in its advocacy. The theme of “democratization” is the same weather-beaten argument they’ve been flogging for years, one that has proven disastrous for information and the state of real democracy—and which GAI can only make worse. Nevertheless, the Chamber of Progress campaign, as reported by Cho, seeks to promote a sweeping policy that AI developers should be broadly shielded from liability, including copyright infringement claims.

The question of copyright infringement for ingesting works for machine learning (ML) is currently at the heart of several lawsuits. I’ve lost track of them all, but arguably the most solid claim to date is New York Times v. OpenAI et al. because the evidence of copying (i.e., that what went into the model came out of the model) is so compelling. On the other hand, it is worth watching those cases where “reproduction” is less evident and, therefore, where the question may be more thoroughly addressed as to whether ML is a purpose that favors fair use of protected works.

As we have seen in defense of social platforms, Big Tech will spray the blogosphere with the term “fair use,” and copyright antagonists (mainly in academia) will echo the broad claim that of course ML is fair use. Notwithstanding the bugaboo that the fair use doctrine rejects the notion of a general exemption, I would argue that the case law points the other way, including the Supreme Court decision in Andy Warhol Foundation v. Lynn Goldsmith. To the limited extent that opinion addresses the ML question at all, its reining in of the “transformativeness” test is more likely to disfavor the AI developers. Big Tech’s claim is that GAI is broadly “transformative” as a technological accomplishment, but Warhol and other decisions reject such a sweeping interpretation of that aspect of fair use factor one.

Further, as argued in this post, I remain unconvinced that GAI necessarily advances the purpose of copyright to promote new authorship as a matter of doctrine. For instance, if a given work created by GAI cannot be protected by copyright, then the material is, by definition, not a work of “authorship.” As such, this purpose should doom a fair use defense, in my view. Regardless, Big Tech will not be satisfied with the outcomes of any lawsuits, even if the developers win some. What they want is blanket immunity for infringement liability and an affirmation that GAI is truly as important as they say it is. That’s why this paragraph in the Hollywood Reporter story caught my attention:

In comments to the Copyright Office, which has been exploring questions surrounding the intersection of intellectual property and AI, Chamber of Progress argued that Section 230 – Big Tech’s favorite legal shield – should be expanded to immunize AI companies from some infringement claims.

Why highlight that? Because the absence of legal foundation is telling. Not only does Title 47 Section 230 have nothing to do with copyright infringement, but both that law and its copyright cousin, Title 17 Section 512, address the subject of users uploading material to platforms. Neither law says anything about scraping the web to feed material into an AI model for the purpose of ML. Nevertheless, it is clear from reading the actual comments by Chamber of Progress to the Copyright Office that Big Tech recommends policymakers take lessons from both statutes to carve out new liability shields to support the advancement of AI.

Despite the fact that neither §512 nor §230 has proven effective in limiting copyright infringement or dangerously harmful material online, the Chamber of Progress comments reprise Big Tech’s unfounded talking points regarding both statutes. Written by counsel Jess Miers, the comments repeat the false allegation that §512 fosters rampant, erroneous takedowns and also argue that because of §230, “most UGC services go to great lengths to proactively clean-up awful content and provide a safe and trustworthy environment for their users.” Not only will my friends and colleagues fighting Image-Based Sexual Abuse, online hate, and scams be very surprised to learn that, but so will Congress.

One of the scant points of agreement on Capitol Hill these days is that lawmakers have grown weary of liability shields for Big Tech, which has done a poor job of mitigating the worst harms facilitated by their platforms. Section 230 is so ripe for amendment that I’m surprised the Chamber of Progress invoked it, let alone in comments to the Copyright Office, which only deals with, y’know, copyright law. More broadly, though, when GAI implies myriad harms beyond copyright infringement, the last thing Congress should do is grant Big Tech more latitude to do whatever it wants in the name of “progress.” We tried that approach. It sucks.

Fraud in Music Streaming on Legit Platforms

By now, many people who pay attention to artists’ rights have read the David Segal New York Times story published on January 13 about the amateur folk duo Bad Dog discovering their songs on major streaming platforms, but with different titles and attributed to a different creator. In what should be a surprise to nobody, it is easy to game the music streaming system and siphon from the revenue pool, even if you’ve never composed or recorded a song in your life. It’s a classic case of The Internet Giveth, and The Internet Taketh Away—because many DIY tools promoted to help new artists launch careers can be used by bad actors engaging in fraud.

“David Post and Craig Blackwell have been devoted amateurs for decades, and they’re long past dreams of tours and limos,” Segal begins. Post and Blackwell are, oddly enough, both D.C. attorneys—and cyberlaw and copyright law attorneys to boot. Although they were more interested in regaining control of their music than the revenue, their difficulties point to the fact that control is everything, especially if the artist does care about revenue.

The problem lies in the fact that anybody can create an account with a music publisher/distributor, rename and reattribute a bunch of pirated tracks, and then upload the songs to multiple platforms to hijack the revenue that belongs to the real artists. Based on the Segal article, it seems that not all publisher/distributors are equal when it comes to verifying authorship or ownership of the tracks. The article cites a service called Level as the one used in the Bad Dog incident, though I cannot comment on the specific anti-fraud efforts of these services.

Naturally, this scam won’t work with mega hits for a variety of reasons, but for a niche indie like Bad Dog—or more critically for the new artist who is trying to start a music career—songs that are relatively obscure make a perfect target. The scammer won’t earn much from a small heist of songs, but at scale, the dividends can obviously provide sufficient passive income to make the “effort” worthwhile. I recommend the NYT article for a full account, but two segments struck me as deserving of comment—one editorial, and one semi-mercenary.

Citing these segments out of order, first is a comment by David Post, referring to the evolution of the Digital Millennium Copyright Act (DMCA) and the appearance of Napster. “In 1997, I don’t think people were thinking about this automated operation that just sucks up unprotected material, rejiggers it to make it unfindable and uploads to platforms where they can start monetizing it. That wasn’t on anybody’s radar.” For context, Post alludes earlier in the article to his own copyright skepticism, which echoes views many would describe as “copyleft.” But this was on nobody’s radar?

Okay, maybe this exact method of scamming was not envisioned in 1997, but it hardly takes a leap of imagination to see the progress from illegal P2P to legal downloads in tandem with illegal downloads, followed by legal streaming in tandem with illegal streaming. In fact, it doesn’t take any imagination because copyright piracy for profit has been a reality for about 30 years, and there are few systems on the internet that cannot be gamed—especially if the legit platform operator lacks an incentive (i.e., is shielded from liability) to remove scammers. Also, the nature of Bad Dog’s problem did not suddenly appear in late 2023. For instance, I wrote about fake Bob Seger and music tracks owned by Spotify in 2017, which can be seen as a prelude to the kind of scam at work in this instance.

So, if Post is suggesting the DMCA needs overhaul to address workarounds to notice-and-takedown, I welcome him to the cause because professional creators have been shouting into a hurricane for at least 20 years about the near uselessness of the provision as a viable remedy to piracy. Given Post and Blackwell’s day jobs, I do wonder whether they only just discovered the issue the moment it affected their music, but in any case, the DMCA brings me to the other quote from the article that I want to highlight:

To retrieve their songs, Mr. Post and Mr. Blackwell sent out what are called takedown notices, or formal requests to remove pirated music, to a bunch of different sites. The band members used their SoundCloud page to demonstrate that their recordings predated all the uploads on the streaming platforms.

As stated, the DMCA takedown provision is middling at best. Segal reports that Amazon and YouTube removed the pirated tracks quickly, but Apple and Spotify did not. What struck me about the above paragraph, though, is the duo’s use of their SoundCloud page to prove priority and ownership of the work, which is kind of a digital-age version of mailing a copy to oneself—a.k.a. the “poor man’s copyright,” which is meaningless as a mode of legal protection. That brings me to the slightly mercenary point I wanted to make: the musical artist in this same position would find it both easier, and possibly more effective, to send the Copyright Office registration numbers associated with the works that should be removed by DMCA takedown.

One aspect of a registration is that, by operation of law, it is prima facie evidence of ownership. Walk into federal court with those registration certificates, and the burden is on the opposing party to prove that you’re not the owner of the work. In fact, without registration, you can’t walk into a court with an infringement claim, but with regard to a DMCA takedown—especially one sent to a major platform—the registration is literally a government seal establishing ownership of the work. It doesn’t guarantee that every platform will expeditiously comply with a takedown request, but it does give them a good reason to do so.

Further (and this is the mercenary part), because I am a passionate advocate for the rights of independent creators, I highlight this incident as the co-founder of a software business called RightsClick. A suite of tools designed to make copyright management easy for the entrepreneurial creator, the app facilitates fast, simple registration that simultaneously builds a database of titles with their associated registration numbers. Thus, the indie musician in the same position as Bad Dog could look up those numbers in about two minutes and include them in a DMCA notice. Again, not a guarantee of compliance by the platform, but a stronger incentive. Including registration numbers is, after all, what the attorneys prefer to do when they send takedown notices.

I hope readers will forgive the plug for RightsClick in this instance. I generally keep IOM commentary and that venture separate, but this story seemed like a good moment to don both hats. Regardless, the point worth emphasizing is that indie artists should register their work with the Copyright Office. No creator should ever be required to prove they own the work requested for takedown—the provision is already subject to penalties of perjury—but to the extent the platforms stall or play games in this regard, a registration number is a lot better than any other evidence one might otherwise provide.

UPDATE/CORRECTION: Thanks to a representative of Bad Dog, who wrote to tell me that the duo did file a registration application for the album The Jukebox of Regret very soon after discovering the music had been pirated. This same source also states that the music was pirated within one week of publication to SoundCloud, hence the immediate use of that information to show priority and ownership. Based on this information, I wish to correct any implication that Post and Blackwell completely ignored copyright registration, though I would still encourage indie artists to register before distributing work to the market. This story proves how quickly your work is likely to be pirated.