Thinking About an Old Copyright Case and Generative AI

The first copyright case decided at the U.S. Supreme Court was Wheaton v. Peters in 1834. There were six justices at the time, including the oft-quoted Joseph Story, and in a 4-2 decision, the Court made what I believe was a textual and, therefore, doctrinal error. The allegedly infringed works at issue were published reports of the Court, and there was neither disagreement nor error in finding that the opinions of the Court themselves were not a subject of protection. Instead, the important question—a philosophical debate inherited from England’s 18th century copyright battles—was whether Article I of the Constitution empowered Congress to create rights or to protect rights that naturally existed at common law.

In finding the former, the Court erred in my view because its opinion turned on misinterpreting the word securing from the intellectual property clause in Article I, which states that Congress is empowered, “To promote the progress of science and useful arts, by securing for limited times to authors and inventors the exclusive right to their respective writings and discoveries.” The Court held that securing was a word of “origination,” establishing the doctrinal principle that copyright rights are “creatures of statute.”

The precedent in Wheaton has often been highlighted by anti-copyright scholars because it limits the notion that copyright rights are in any sense natural rights. This, in turn, supports the skeptical (I would say cynical) view that copyright is a devil’s bargain with authors, begrudgingly granting a temporary “monopoly” in exchange for production and distribution of their works. But aside from the fact that the Court of 1834 stated that the longstanding question remained “by no means free from doubt,” its textual interpretation of the word securing was simply unfounded.

As I discuss briefly in my book, there are at least two strong arguments against the Court’s finding that secure was a word of origination, and the first of these is the preamble to the Constitution. When the Framers wrote “to secure the blessings of liberty,” they can only have meant that the aim of the Constitution is to protect, ensure, or maintain that liberty which had so forcefully been articulated in ink and blood as a natural right of all people. The Framers did not mean that the Constitution creates the “blessings of liberty.”

The second argument is the dictionary. Noah Webster, who happens to be both the father of American English and the father of American copyright, was widely respected as a man of letters; as an effective voice for the natural rights of authors; and as the primary force behind the copyright law revision of 1831. Nevertheless, in defining the word securing in the Wheaton case, the Court somehow failed to harmonize its interpretation with any of seven entries in the 1828 edition of Webster’s dictionary. There, all definitions of secure express variations on the idea of “protection,” and none suggests that the word means “creation.”

Why does Wheaton matter today?

By misreading the meaning of secure, the Wheaton Court overstated a utilitarian view of copyright and understated the natural, common law (i.e., human) view of copyright. Granted, this tension dates back a few centuries, if one wishes to look that far, but it isn’t necessary to wander into the tall grass of pre-American history. There is ample rationale since 1790 to hold as self-evident that what the author creates is naturally her property, but this principle can only apply to human creators.

As mentioned, copyright skeptics, many of whom are either funded by or ideologically aligned with Big Tech, will overstate the precedent that copyright is a “creature of statute” because they like to propose that what Congress giveth, Congress can taketh away. For instance, Wheaton animated the “copyright is broken” campaign, which insists that technological progress in the digital age demands weakening protections on creative works to foster “innovation.”

This argument has taken various forms over the years, including justifying mass piracy; proposing that Congress should roll back the duration of protection; arguing the unconstitutionality of digital rights management; advocating extreme interpretations of fair use; and inventing legal theories like “controlled digital lending” for eBooks. These efforts have largely failed while Big Tech’s credibility has also diminished over the past decade. And indeed, despite the doctrinal weight of Wheaton, the legislative, judicial, and cultural record on copyright is replete with natural rights principles.

Still, although Big Tech does not enjoy the benefit of the doubt it did circa 2012, the commotion over generative artificial intelligence (GAI) reprises the familiar theme that copyright rights allegedly stand in the way of “progress.” In fact, one of the leading astroturf organizations promoting that view calls itself the Chamber of Progress, but the creative community and everyone else weighing GAI should answer that “progress” which proposes to displace or diminish human value is not progress.

“As new technologies emerge and enter such central aspects of our existence, it must be done responsibly and with respect for the irreplaceable artists, performers, and creatives who have shaped our history and will chart the next chapters of human experience.” (Human Artistry Campaign)

Big Tech surrogates like the Chamber of Progress will repeat the assertion that GAI “democratizes” creativity, which takes a lot of chutzpah coming from an industry that has done such widespread damage to democracy. By now, it should be obvious that when tech companies claim to “democratize” anything, the smokescreen disguises the fact that what they are usually doing is undermining the value of individual agency—from control of one’s likeness to copyright rights to political views. In other words, democratization has been bad for democracy.

The Wheaton Court of 1834 could not have imagined that the subject of common law copyright would be relevant 190 years later in the context of a technology that can generate creative works without creative people. But human artistry is not strictly about art per se. It reprises the philosophical question of what it means to be human, and if that answer begins with thought and knowledge, then we must recognize how democracies have been hammered by epistemic crisis since the explosion of social media.

Now that GAI is accelerating and expanding the power of misinformation, the human who encounters an AI-generated lie must decide whether to believe what he sees, let alone whether to amplify the post. This is not merely a question of critical thinking, but an existential test that guys like Peter Thiel hope we fail. As many tech critics have repeated over the last 10-15 years, the design of these technologies—and indeed the stated intent of many of their designers—is that we become their tools rather than the other way around. And GAI has the potential to fulfill that agenda by more thoroughly blurring the line between reality and illusion.

Senate Resolution asks Congress to Promise it will Keep Ignoring Musical Artists

A little-known Senate resolution called the Local Radio Freedom Act (LRFA) is a clever move by whoever thought of it. It has no force of law but instead asks Congress to sign a pledge to enshrine an unfair and unfounded policy whereby terrestrial radio broadcasters shall never pay royalties to musical artists. Why? Because that’s how it’s always been.

In copyright law, music generally entails two separately protectable works—the underlying composition and the sound recording. Sound recordings are created by performing artists, and many compositions are naturally recorded by different artists at different times. Quintessential examples include Whitney Houston’s “I Will Always Love You,” and Jeff Buckley’s “Hallelujah,” originally written and performed by Dolly Parton and Leonard Cohen respectively. But if you ever turned up the radio when one of these cover songs came on, you might not know that although Parton and Cohen received royalties, Houston and Buckley did not.

This omission in the royalty scheme has come before Congress many times over many decades, and most Members know the status quo doesn’t make sense. Public performances of sound recordings generate royalties for artists in every other commercial context, and in every democratic nation in the world, except on American terrestrial broadcast radio. But what is music radio without music?

The answer from the National Association of Broadcasters (NAB), parroted in the LRFA, is that radio “promotes music,” and it does. But that’s only half the story. The other half is that music draws listeners to radio networks, which sell billions of dollars in advertising. Members of Congress know this is the only equitable consideration, yet to watch the last hearing on the issue, one might get the idea that the IP Subcommittee is still at the investigative stage of this decades-old problem. If Congress seeks an equitable arrangement, it’s in the text of the American Music Fairness Act (AMFA), which was introduced in 2021.

For smaller stations (those earning under $1.5 million per year), the AMFA caps royalties at between $10 and $500 per year, depending on revenue and on whether the station is public or private. For larger stations and networks, rates would be set, as they are for the rest of the performance licensing market, by the Copyright Royalty Board (CRB). Under the provisions of AMFA, the CRB must consider station size and revenue when setting rates and must also consider the station’s promotional value to recording artists. It’s hard to imagine how the deal gets more fair than that.

In addition to the half-true “promotion” argument, the LRFA also echoes NAB talking points about the many free services radio stations provide to communities—from local news and emergency information to community outreach and charity. The implication is that these services would be curtailed or lost if stations had to pay performer royalties, but this claim is neither supported nor well-reasoned. The stations’ good works continue while they pay talk show hosts and news reporters—and no doubt, buy coffee and electricity, too.

Notably, when witness Eddie Harrell, Jr., representing the conglomerate Urban One, was asked at the hearing about the CRB, he did not seem to know what it is. This is not to mock Mr. Harrell, but to observe that if he was there to claim that his company cannot afford royalties yet did not know about the rate-setting court, how does he know what he can’t afford? I think the answer is not that Mr. Harrell is careless or unable to do the homework, but that he anticipated not needing to present those numbers because the NAB told him to expect that Congress will once again default to the tautological absurdity of “because that’s how it’s always been.”

Members of Congress know it is the large networks and conglomerates lobbying against AMFA and that they are not saying anything new in defense of the status quo. Because this issue has been on and off the table for about eighty years, any reference to further negotiation or study at this point is either a stall tactic or a pocket vote against AMFA. Meanwhile, signing onto LRFA is an explicit statement that, once again, the artists will be ignored right after their representatives tell them how treasured and respected a part of the American tapestry they are.


The Campaign to Defend Generative AI

I have not written steadily about AI and copyright because, frankly, it’s exhausting. Not quite as exhausting as watching the state of the Republic overall, but almost as relentlessly incoherent and repetitive. For instance, Winston Cho for the Hollywood Reporter describes a PR and lobbying campaign by the tech coalition Chamber of Progress to defend the importance of generative AI (GAI). The article quotes founder and CEO Adam Kovacevich thus:  “Gen AI is a net plus for creativity overall. It’s expanding access to creative tools for more and more people and bypassing a lot of the traditional gatekeepers.”

That GAI may yield some beneficial tools for creators is plausible, but the whole “access” and “gatekeepers” rhetoric is a misguided anachronism from a group calling itself the Chamber of Progress. Perhaps “Confederacy of Tech Overlords” was too on the nose, but the generalized argument that GAI represents a “democratic” shift away from gatekeepers stands on the rubble of experiments that have already failed. I doubt there is a professional creator left who hasn’t figured out that Big Tech’s promise to liberate them from traditional gatekeepers is like a human trafficker promising his next victim a job in a foreign country. Whatever was imperfect about the old models, the new models are more exploitative and hazardous for the average creator.

More precisely, while the alleged “liberation” from older distribution channels might have seemed attractive, GAI is about production, and I am confused as to who the “gatekeepers” would be on the production side of the equation. To the extent that, say, Midjourney might enable me to illustrate or paint without any drafting or painting skills, who exactly is the “gatekeeper”? Nature, for failing to gift me with those skills? Or if we think big, and I can make a whole motion picture without ever turning on a camera, I still fail to see who the “gatekeeper” is in the tech industry’s overreaching promise.

Despite how cutting-edge and “essential” GAI is supposed to be, Big Tech has nothing fresh to say in its advocacy. The theme of “democratization” is the same weather-beaten argument they’ve been flogging for years, one that has proven disastrous for information and the state of real democracy—and which GAI can only make worse. Nevertheless, the Chamber of Progress campaign, as reported by Cho, seeks to promote a sweeping policy that AI developers should be broadly shielded from liability, including copyright infringement claims.

The question of copyright infringement for ingesting works for machine learning (ML) is currently at the heart of several lawsuits. I’ve lost track of them all, but arguably the most solid claim to date is New York Times v. OpenAI et al. because the evidence of copying (i.e., that what went into the model came out of the model) is so compelling. On the other hand, it is worth watching those cases where “reproduction” is less evident and, therefore, where the question may be more thoroughly addressed as to whether ML is a purpose that favors fair use of protected works.

As we have seen in defense of social platforms, Big Tech will spray the blogosphere with the term “fair use,” and copyright antagonists (mainly in academia) will echo the broad claim that of course ML is fair use. Notwithstanding that the fair use doctrine rejects the notion of a general exemption, I would argue that the case law points the other way, including the Supreme Court decision in Andy Warhol Foundation v. Lynn Goldsmith. To the limited extent that opinion addresses the ML question at all, its reining in of the “transformativeness” test is more likely to disfavor the AI developers. Big Tech’s claim is that GAI is broadly “transformative” as a technological accomplishment, but Warhol and other decisions reject such a sweeping interpretation of that aspect of fair use factor one.

Further, as argued in this post, I remain unconvinced that GAI necessarily advances copyright’s purpose to promote new authorship as a matter of doctrine. For instance, if a given work created by GAI cannot be protected by copyright, then the material is, by definition, not a work of “authorship.” As such, the failure to serve that purpose should doom a fair use defense, in my view. Regardless, Big Tech will not be satisfied with the outcomes of any lawsuits, even if the developers win some. What they want is blanket immunity from infringement liability and an affirmation that GAI is truly as important as they say it is. That’s why this paragraph in the Hollywood Reporter story caught my attention:

In comments to the Copyright Office, which has been exploring questions surrounding the intersection of intellectual property and AI, Chamber of Progress argued that Section 230 – Big Tech’s favorite legal shield – should be expanded to immunize AI companies from some infringement claims.

Why highlight that? Because the absence of legal foundation is telling. Not only does Title 47 Section 230 have nothing to do with copyright infringement, but both that law and its copyright cousin, Title 17 Section 512, address the subject of users uploading material to platforms. Neither law says anything about scraping the web to feed material into an AI model for the purpose of ML. Nevertheless, it is clear from reading the actual comments by Chamber of Progress to the Copyright Office that Big Tech recommends policymakers take lessons from both statutes to carve out new liability shields to support the advancement of AI.

Despite the fact that neither §512 nor §230 has proven effective in limiting copyright infringement or dangerously harmful material online, the Chamber of Progress comments reprise Big Tech’s unfounded talking points regarding both statutes. Written by counsel Jess Miers, the comments repeat the false allegation that §512 fosters rampant, erroneous takedowns and argue that because of §230, “most UGC services go to great lengths to proactively clean-up awful content and provide a safe and trustworthy environment for their users.” Not only will my friends and colleagues fighting Image-Based Sexual Abuse, online hate, and scams be very surprised to learn that, but so will Congress.

One of the scant points of agreement on Capitol Hill these days is that lawmakers have grown weary of liability shields for Big Tech, which has done a poor job of mitigating the worst harms facilitated by its platforms. Section 230 is so ripe for amendment that I’m surprised the Chamber of Progress invoked it, let alone in comments to the Copyright Office, which deals only with, y’know, copyright law. More broadly, though, when GAI portends myriad harms beyond copyright infringement, the last thing Congress should do is grant Big Tech more latitude to do whatever it wants in the name of “progress.” We tried that approach. It sucks.