No FAKES Act Matched in House Bill to Address Gen AI Replication


On Monday, beloved actor James Earl Jones passed away at age 93, but in 2022, he signed an agreement with Lucasfilm to allow the voice of Darth Vader to live on through Gen AI replication. Jones’s permission to replicate his voice is a bittersweet prelude to today’s news from Capitol Hill, where the House of Representatives introduced its own No FAKES Act to prohibit the unlicensed replication of any person’s likeness or voice. Sponsored by Reps. Salazar, Dean, Moran, Morelle, and Wittman, the House bill is identical to the Senate No FAKES Act introduced in late July and so demonstrates a bicameral, as well as bipartisan, sense of urgency to address misuse of Gen AI for this purpose.

To recap, No FAKES establishes a new property right in the likeness of any person and prohibits unauthorized replication of a likeness, which includes voice. Historically, likeness has only been protected on a limited basis by a patchwork of state Right of Publicity (ROP) laws, typically prohibiting unauthorized use of a celebrity likeness for commercial/advertising purposes. But the unprecedented capability of Gen AI to be used by anyone to replicate the likeness of anyone—and which will exacerbate the reality-bending world of online “information”—has prompted Congress to move swiftly and, in my view, creatively.

It was July 2023 when the idea of a federal ROP law was discussed during a hearing held by the House Judiciary Committee Subcommittee on Intellectual Property. At the time, I imagined this was a prelude to years of haggling on Capitol Hill while Gen AI developers proceeded at internet speed to wreak havoc with tools to produce more advanced “deepfakes.” Instead, the introduction of No FAKES in the Senate just one year later—and now, the same bill in the House less than two months after that—reveals both seriousness and deftness in legislators’ zeal to confront the issue. Rather than approach the matter as one to be remedied by a federal ROP law, Congress, with input from various stakeholders, has responded to the novelty of the challenge with novel legislation, drawing upon principles found in ROP, trademark, and copyright law.

If passed, No FAKES would operate akin to ROP, but it automatically applies to every citizen, and unlawful replication is not limited to commercial/advertising purposes. At the same time, because many misuses of Gen AI replication have both reputational and commercial implications, No FAKES shares a kinship with trademark, which is a creature of the Commerce Clause. And finally, the new right is copyright-like as a property right which vests in the individual, may be licensed for various uses, and is descendible to heirs and assigns with certain limits and conditions unique to protecting likeness.

Opposition Is Familiar but the Battlefield Is Different

Many of the usual suspects representing Big Tech, including the newly formed (I can’t believe they called it this) Chamber of Progress, will likely raise constitutional challenges to No FAKES, leaning hard into the refrain that the new likeness right will chill protected speech. As to the merits of that argument, the text of the bill already includes well-crafted, First Amendment-based exceptions; and as a PR message, I believe Big Tech is refreshingly at a disadvantage. Concerns over abuse of Gen AI encompass a broad range of Americans—from professional creators to parents seeing how easily children can be sexually exploited—and in general, people just aren’t buying Big Tech’s “make life better” rhetoric anymore.

Examples of legitimate innovation (e.g., Jones permitting Darth Vader to continue, or Randy Travis overcoming physical voice loss) will entail permission of the person whose likeness or voice is being replicated. Yet, in response to the many harms which may be caused by unlicensed Gen AI replication, AI defenders will promote the overbroad refrain that “innovation” must be allowed to flourish — but of course, “innovation” is Big Tech’s euphemism for “profitability at any cost.” Congress is still playing catch-up to address myriad harms fostered by pre-AI social media and is, therefore, reluctant to repeat the mistakes of the late 1990s by allowing Gen AI “room to grow” without restrictions.

Interestingly, Chamber of Progress appears designed to frame the multi-billion-dollar AI gamble as socially and politically “progressive,” a strategy belied by its advocating broad liability shields for AI developers akin to Section 230 of the CDA and Section 512 of the DMCA. In fact, that view aligns perfectly with OpenAI CEO Sam Altman suggesting that it is impossible to develop AI without free use of copyrighted works, or with investor Marc Andreessen writing a smug and erroneous manifesto as a plea for continued laissez-faire policy in all things tech. If there is anything “progressive” about Gen AI, Chamber of Progress will need to produce more than worn-out rhetoric to prove it.

We’ve been here and done this, but No FAKES is a bill with a lot of political momentum. The likelihood that many citizens will oppose a prohibition on the unlicensed use of their own, or their children’s, likenesses seems low to the point of futility. We’ll see what comes, but by my lights, No FAKES is destined to become law.


Image by: nikolay100

Thinking About an Old Copyright Case and Generative AI


The first copyright case decided at the U.S. Supreme Court was Wheaton v. Peters in 1834. There were six justices at the time, including the oft-quoted Joseph Story, and in a 4-2 decision, the Court made what I believe was a textual and, therefore, doctrinal error. The allegedly infringed works at issue were published reports of the Court, and there was neither disagreement nor error in finding that the opinions of the Court themselves were not a subject of protection. Instead, the important question—a philosophical debate inherited from England’s 18th century copyright battles—was whether Article I of the Constitution empowered Congress to create rights or to protect rights that naturally existed at common law.

In finding the former, the Court erred in my view because its opinion turned on misinterpreting the word securing from the intellectual property clause in Article I, which states that Congress is empowered, “To promote the Progress of Science and useful Arts, by securing for limited Times to Authors and Inventors the exclusive Right to their respective Writings and Discoveries.” The Court held that securing was a word of “origination,” establishing the doctrinal principle that copyright rights are “creatures of statute.”

The precedent in Wheaton has often been highlighted by anti-copyright scholars because it limits the notion that copyright rights are in any sense natural rights. This, in turn, supports the skeptical (I would say cynical) view that copyright is a devil’s bargain with authors, begrudgingly granting a temporary “monopoly” in exchange for production and distribution of their works. But aside from the fact that the Court of 1834 stated that the longstanding question remained “by no means free from doubt,” its textual interpretation of the word securing was simply unfounded.

As I discuss briefly in my book, there are at least two strong arguments against the Court’s finding that secure was a word of origination, and the first of these is the preamble to the Constitution. When the Framers wrote “to secure the blessings of liberty,” they can only have meant that the aim of the Constitution is to protect, ensure, or maintain that liberty which had so forcefully been articulated in ink and blood as a natural right of all people. The Framers did not mean that the Constitution creates the “blessings of liberty.”

The second argument is the dictionary. Noah Webster, who happens to be both the father of American English and the father of American copyright, was widely respected as a man of letters; as an effective voice for the natural rights of authors; and as the primary force behind the copyright law revision of 1831. Nevertheless, in defining the word securing in the Wheaton case, the Court somehow failed to harmonize its interpretation with any of seven entries in the 1828 edition of Webster’s dictionary. There, all definitions of secure express variations on the idea of “protection,” and none suggests that the word means “creation.”

Why does Wheaton matter today?

By misreading the meaning of secure, the Wheaton Court overstated a utilitarian view of copyright and understated the natural, common law (i.e., human) view of copyright. Granted, this tension dates back a few centuries, if one wishes to look that far, but it isn’t necessary to wander into the tall grass of pre-American history. There is ample rationale since 1790 to hold as self-evident that what the author creates is naturally her property, but this principle can only apply to human creators.

As mentioned, copyright skeptics, many of whom are either funded by or ideologically aligned with Big Tech, will overstate the precedent that copyright is a “creature of statute” because they like to propose that what Congress giveth, Congress can taketh away. For instance, Wheaton animated the “copyright is broken” campaign, which insists that technological progress in the digital age demands weakening protections on creative works to foster “innovation.”

This argument has taken various forms over the years, including justifying mass piracy; proposing that Congress should roll back the duration of protection; arguing the unconstitutionality of digital rights management; advocating extreme interpretations of fair use; and inventing legal theories like “controlled digital lending” for eBooks. These efforts have largely failed while Big Tech’s credibility has also diminished over the past decade. And indeed, despite the doctrinal weight of Wheaton, the legislative, judicial, and cultural record on copyright is replete with natural rights principles.

Still, although Big Tech does not enjoy the benefit of the doubt it did circa 2012, the commotion over generative artificial intelligence (GAI) reprises the familiar theme that copyright rights allegedly stand in the way of “progress.” In fact, one of the leading astroturf organizations promoting that view calls itself the Chamber of Progress, but the creative community and beyond should respond that “progress” which proposes to displace or diminish human value is not progress.

As new technologies emerge and enter such central aspects of our existence, it must be done responsibly and with respect for the irreplaceable artists, performers, and creatives who have shaped our history and will chart the next chapters of human experience.

– Human Artistry Campaign

Big Tech surrogates like the Chamber of Progress will repeat the assertion that GAI “democratizes” creativity, which takes a lot of chutzpah coming from an industry that has done so much widespread damage to democracy. By now, it should be obvious that when tech companies claim to “democratize” anything, this smokescreen is disguising the fact that what they are usually doing is undermining the value of individual agency—from control of one’s likeness to copyright rights to political views. In other words, democratization has been bad for democracy.

The Wheaton Court of 1834 could not have imagined that the subject of common law copyright would be relevant 190 years later in the context of a technology that can generate creative works without creative people. But human artistry is not strictly about art per se. It reprises the philosophical question of what it means to be human, and if that answer begins with thought and knowledge, then we must recognize how democracies have been hammered by epistemic crisis since the explosion of social media.

Now that GAI is accelerating and expanding the power of misinformation, the human who encounters the AI-generated lie must decide whether to believe what he sees, let alone whether to amplify the post. This is not merely a question of critical thinking, but an existential test that guys like Peter Thiel hope we fail. As many tech critics have repeated over the last 10-15 years, the design of these technologies—and indeed the stated intent of many of their designers—is that we become their tools rather than the other way around. And GAI has the potential to fulfill that agenda by more thoroughly blurring the line between reality and illusion.

The Future Was Then: AI Moving Us Backwards on Carbon Emissions


As the Super Bowl approached and passed, it seemed that one faction of Americans was accusing Taylor Swift of practicing witchcraft on the NFL while another was slagging her for the carbon output of her private jet—reportedly about 8,300 tonnes of CO2e in 2022. And although it is fair to expect owners of private aircraft to fly responsibly, I must ask this: What is the environmental value of not shitposting about Taylor Swift? Or for that matter, any number of topics?

The carbon cost of a single tweet is ~0.026 g; the cost of X (née Twitter) is estimated at 8,200 tonnes per year; and the overall carbon cost of social media is estimated at 262 million tonnes of CO2e per year. So, if we use this social media carbon calculator, it tells us that 1 million people spending just 2 minutes a day on the 10 major social sites costs just over 8,300 tonnes of CO2e per year—roughly the same amount T Swift reportedly generated with her airplane in 2022.
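That arithmetic is easy to sanity-check with a back-of-envelope calculation. The per-minute figure below is my own assumption (a commonly cited ballpark), not the calculator’s published methodology, but it lands in the same range:

```python
# Back-of-envelope version of the social media vs. private jet comparison.
# ASSUMPTION: ~1.14 g CO2e per minute of use, per app -- a ballpark
# figure, not the calculator's actual methodology.
G_PER_APP_MINUTE = 1.14       # g CO2e per minute per app (assumed)
APPS = 10                     # "the 10 major social sites"
MINUTES_PER_APP_PER_DAY = 2
USERS = 1_000_000
DAYS_PER_YEAR = 365

total_g = (G_PER_APP_MINUTE * APPS * MINUTES_PER_APP_PER_DAY
           * USERS * DAYS_PER_YEAR)
total_tonnes = total_g / 1e6  # 1 tonne = 1,000,000 g

JET_TONNES_2022 = 8_300       # reported 2022 figure for the jet
print(f"Social media: {total_tonnes:,.0f} t CO2e/yr vs. jet: {JET_TONNES_2022:,} t")
# → Social media: 8,322 t CO2e/yr vs. jet: 8,300 t
```

In other words, under these assumptions, a million people idly scrolling for a few minutes a day matches the jet’s entire annual output.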


I recognize that this is comparing the carbon footprint of one individual to a million individuals, but that one individual entertains millions and generates economic activity. By contrast, the social posts of a million people at any given moment are only making pollution in every sense. Clearly, it costs metric tons of carbon to produce metric tons of useless noise. And that preamble brings us to the topic of the projected increase in electricity demand for data centers to support advancements in artificial intelligence (AI). As Bloomberg reported in late January:

Electricity consumption at US data centers alone is poised to triple from 2022 levels, to as much as 390 terawatt hours by the end of the decade, according to Boston Consulting Group. That’s equal to about 7.5% of the nation’s projected electricity demand. 
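The two figures in that quote also imply a projected national total. This is just a quick arithmetic check on the quoted numbers, not an additional BCG estimate:

```python
# Implied total from the quoted BCG figures: if 390 TWh is 7.5% of
# projected US electricity demand, the total works out to:
DATA_CENTER_TWH = 390
SHARE_OF_DEMAND = 0.075

implied_total_twh = DATA_CENTER_TWH / SHARE_OF_DEMAND
print(f"Implied projected US demand: {implied_total_twh:,.0f} TWh")
# → Implied projected US demand: 5,200 TWh
```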

In past posts about generative AI, I have opined that we do not need machines to make creative works—because we don’t—and that AI should be tasked with solving problems like curing disease or mitigating the climate crisis. On the second point, however, it seems that if an AI were asked the climate question, its only rational answer would be, “Shut me down.” If nothing else, AI could be an environmental catastrophe in the making.

“In the Kansas City area, a data center along with a factory for electric-vehicle batteries that are under construction will need so much energy the local provider put off plans to close a coal-fired power plant,” the Bloomberg article states. Because that quote cites both electric vehicles (EVs) and the data center, one must acknowledge that the environmental analysis of EVs entails a projection of carbon saved against carbon spent. But because a data center is pure carbon expenditure, that cost can only be measured against the value of the activity the center supports.

No question that data centers are infrastructure. There is no enterprise—private or public—that does not rely on networked computing, and economic activity almost always presents an environmental challenge, whether one is building a railroad or an eCommerce platform. But considering even the current energy demand, let alone the projected increase, AI pulls the issue into focus because so many of its applications are already either useless or toxic.

Useless, as stated, is the AI that generates “creative” work in lieu of the human creator, while toxic would be something like more advanced deepfakes exacerbating the disinformation crisis. Regarding the former, this flips the economic equation—i.e., carbon cost yielding lost jobs, which is arguably the opposite of economic activity. Regarding the latter, the use of AI to expand and deepen disinformation campaigns represents carbon cost in exchange for “better tools” that have already been used to weaken democracy worldwide.

In 2013, I wrote a post called Show Me the Innovation—one of many responses to the generalized argument that legal frameworks designed to protect intellectual property, privacy, information integrity, and even personal safety all stand in the way of “innovation.” The point then, as now, is that not everything produced by Big Tech is “innovative,” if we insist that word mean something. If “innovation” should improve lives and foster prosperity, isn’t it curious that social media’s carbon cost helps support anti-science agendas like climate change denial?

In a recent post about the environmental cost of data centers, Chris Castle cites Science Daily, noting that “generative AI like ChatGPT could cost 564 megawatt-hours (MWh) of electricity a day to run.” That’s more electricity than some small countries use. When coupled with the fact that data center demand is halting planned shutdowns of coal-fired plants, it starts to look a lot like AI is helping to “innovate” the U.S. backwards, reversing the gains made over the past twenty years in carbon emissions.
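Annualizing the quoted daily figure puts it in the same units as the national projections above:

```python
# Annualizing the Science Daily estimate quoted above: 564 MWh per day.
MWH_PER_DAY = 564
DAYS_PER_YEAR = 365

mwh_per_year = MWH_PER_DAY * DAYS_PER_YEAR   # 205,860 MWh
twh_per_year = mwh_per_year / 1e6            # ~0.21 TWh
print(f"{mwh_per_year:,} MWh/yr (~{twh_per_year:.2f} TWh)")
# → 205,860 MWh/yr (~0.21 TWh)
```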

Traditionally, it is possible to do a cost/benefit analysis. We burn x amount of coal to power y number of homes, or we need x amount of oil to run y amount of ground transportation. And even in the earliest days of electrification or automobiles, the benefits were self-evident. But with rapid advancements in AI, the cost is rising without clear evidence of benefit—at least not at the scale the electricity demand implies. This is because, like so many “innovations” of Big Tech, AI might be used to accomplish something extraordinary like improving medical diagnoses, but in the meantime, it will be used to make what is already bad about digital life suck faster.


Photo by: dropthepress