Big Tech Gets AI Executive Order for Christmas


Not sure what to get the tech oligarch who has (literally) everything this holiday? Why not his very own Presidential Executive Order titled Ensuring a National Policy Framework for Artificial Intelligence? It’s the latest thing in political theater coming out of the Trump White House—a plan so detrimental in principle to the American public that almost no Members of Congress in either party wanted any piece of it when the plan was proposed as a federal moratorium on state AI regulations. Instead, it looks like David Sacks and Adam Thierer got what they wanted for Christmas this year.

“This EO reads like a policy paper drafted by Sacks and Thierer in a private room and slid across the Resolute Desk,” states creators’ rights advocate and attorney Chris Castle in a recent blog post. Perhaps the inevitable legal challenges will serve Big Tech’s intent to stonewall compliance with state regulations while they continue to move fast and break more things, or perhaps Congress will act to protect seniors, children, creators, business operators, and pretty much every citizen who may be harmed by unregulated AI.

The stated rationale of the EO proclaims an intent to establish a federal, unified AI policy so that American tech companies can develop their products unburdened by a thicket of various state regulations. Of course, the first problem with both the politics and the operation of the order is that there is no federal AI policy. Thus, the provision that, for instance, the DOJ will establish an AI Litigation Task Force to go after state laws raises the question of what it can possibly litigate when there are no federal statutes on which to base a complaint. More broadly, the order is ripe for constitutional challenges—Castle discusses five implicated violations—and so, the EO presents yet another opportunity for chaos and lawsuits.

Meanwhile, most states have passed or proposed AI-related laws designed to protect citizens from a range of abuses, including scams aimed at seniors and a parade of harmful effects on children and teens. The EO claims to want to address these and other matters, stating…

My Administration must act with the Congress to ensure that there is a minimally burdensome national standard — not 50 discordant State ones.  The resulting framework must forbid State laws that conflict with the policy set forth in this order.  That framework should also ensure that children are protected, censorship is prevented, copyrights are respected, and communities are safeguarded.  A carefully crafted national framework can ensure that the United States wins the AI race, as we must.

But if past is prologue, the tech industry is counting on Congress to do no such thing as adopt a national framework that would resolve any of those concerns, least of all by establishing the one measure the industry has thus far avoided—meaningful liability. I don’t know how many headlines I’ve read, including quotes by lawmakers in both parties, articulating some variation on the theme that “we should not make the mistake with AI that we made with social media.” But that is precisely what the U.S. is doing, and with far more perilous consequences.

In the late 1990s, Congress decided to let internet service providers operate with little or no oversight because the industry was in its infancy and there was no appetite for “stifling innovation.” Fast forward to the present, and social media platforms are known to be so toxic for young people that Australia is now experimenting with an attempt at an outright ban for users under the age of sixteen.

Beginning in 2017, when journalists and citizens worldwide finally recognized that Silicon Valley is not trustworthy and that social media is not an engine of democracy, Mark Zuckerberg cited AI as the generic solution for myriad ill effects caused by Meta platforms. He was lying, of course. But does anyone of any political persuasion currently believe that AI is not already exacerbating and exceeding the worst aspects of digital life? Because state lawmakers and attorneys general seem clear-eyed on the matter.

A coalition of 42 state AGs led by New Jersey’s Matthew Platkin is demanding that tech companies put an end to harmful chatbot products. In a December 10th announcement on General Platkin’s site, he declares, “As the chief law enforcement officers in our states, we must take action to protect the public from sycophantic and delusional behavior by software that risks breaking a host of criminal and civil laws.”

The premise that America needs an unregulated AI landscape in order to “win” against adversarial China is magical thinking. Because tech giants have no integrity when it comes to self-regulation, it is clear to lawmakers that only the imposition of effective liability will motivate the industry to mitigate unlawful or dangerous design flaws and/or uses of their products. Liability requires regulatory frameworks, and so, the states have done what Congress has thus far failed to do in order to protect American citizens.

Meanwhile, the rhetoric in the EO is consistent with the PR of the industry that insists the public focus on the technology rather than the ethically challenged people behind the technology. Adam Thierer, in a recent post, takes shots at the humanist v. AI perspective, arguing that humanists hypocritically reveal a lack of faith in humans. Of course, he’s right, just not the way he intends, because damn straight millions of us have zero faith in the humans making all the decisions about the development of AI.

We don’t trust the makers of dishwashers to operate without regulations. Why the hell would we give carte blanche to the most arrogant, power-hungry, anti-democratic, and greedy boys on Earth playing with a technology that may have existential consequences? That’s not a recipe for winning anything, and we shall see whether the president’s holiday gift to Big Tech leads to anything other than needless litigation when what Americans need are proper safeguards.


Rescuing Democracy from Democratization


Over the weekend, I had the privilege of participating in the 11th annual Mosaic Conference, organized by the Institute for Intellectual Property and Social Justice (IIPSJ) and hosted by Suffolk University Law School IP Center. Founded by Professor Lateef Mtima at Howard University, IIPSJ’s mission is to “…examine intellectual property law and policy—as well as the IP regime in total—to see where full participation of disadvantaged, excluded, and marginalized groups may need redressing.”

A number of subjects were raised that will inspire some future blogs, but in the meantime, the following contains my remarks about the folly of “democratization,” slightly edited for this format:

In his posthumously published book, Cyberlibertarianism: The Right-Wing Politics of Digital Technology, Professor David Golumbia writes, “As a rule, ‘democratization’ appears to mean tearing apart institutions, regardless of their nominal functions, including institutions whose purpose is to promote or even embody democracy.”

This is a very difficult moment to talk about knitting people and nations together when the exigent forces are so obviously centrifugal. The historian Joseph Ellis uses that word centrifugal in his book The Quartet to describe the sentiments of the newly independent American states and their reluctance to form the union, and it is hard to believe that that era, when roughly 4 million farmers barely knew the world more than 30 miles beyond their homes, might be compared to our digitally and globally interconnected present. But in my view, Big Tech’s claim to want to “democratize” everything, beginning with cultural works protected by copyright, was and remains catalytic to the struggle we now face to rescue the common cause of democracy.

In the United States, as the republican foundations that even allow room for discussions about social justice are under attack, we confront an authoritarianism that we recognize from history paired with a threat of technological feudalism that is unprecedented. At the same time that civil rights gains attained decades ago must now be reclaimed, rapid technological advancements in artificial intelligence also present new potential modes of injustice, and that challenge has many IP implications.

A simple example I have used recently begins with a friend in medical law who predicts that an AI will soon be better at reading a diagnostic scan than a human radiologist. He’s probably right, and of course, such promises, like improved healthcare, animate the political rhetoric used to promote yet another era of laissez-faire tech policy in the name of undefined “innovation.” As Jaron Lanier wrote in 2010, “People will accept ideas presented in technological form that would be abhorrent in any other form.”  I think this captures why the word innovation is allowed to sweep a million sins under a million rugs.

My friend’s medical example raises critical questions about who will own that technology in a winner-take-all market that often stifles competition, and, therefore, whether the tech will improve healthcare for more people or fewer and on what terms. Alternatively, while AI diagnostic tools might improve the quality of care for the few, will AI actuarial tools be used to deny access to the many? Of course, patent law, about which I know very little, will play a substantial role in the many questions implied by the medical example.

But in a copyright context, Silicon Valley, with the help of far too many IP academics, promoted the “democratization” of access to, and use of, cultural works via the allegedly free platforms. This egalitarian rhetoric was so appealing that even many professional creators echoed the sentiment and bought into the promise of working around traditional gatekeepers and forging more “organic” connections with fans. Today, fewer professional creators fare as well as their “pre-democratized” forerunners.

In that PR campaign funded by Silicon Valley, the making available right and the derivative works right in particular were portrayed as anachronistic principles exclusively serving Big Media “landlords” controlling all culture and information. And while I might join certain criticisms of Big Media, especially consolidation of the industry, the “landlord” metaphor was and still is applied even to the independent artist who might presume to enforce her copyright rights.

More broadly, the underlying hypocrisy of this rhetoric is that “landlord,” of all words, is a far more apt description for the owners of virtual real estate, where information does not flow freely but is manipulated by algorithms designed to maximize and monetize even the most toxic forms of engagement. And of course, this includes both rampant copyright infringement and legal uploads of works that have now been harvested for the purpose of training artificial intelligence.

With generative AI, Big Tech—again with the help of many in IP academia—now promotes the alleged value of “democratizing” the production of works, finally revealing democratization as the anti-humanist and, therefore, anti-democratic term that it truly is. We have several current examples in amicus briefs, academic papers, and even one court’s opinion in the Bartz case, in which parties argue that mass production of material by machines somehow fulfills the original purpose of copyright law. For those following Thaler v. Perlmutter, Dr. Thaler’s recent petition for cert at the U.S. Supreme Court argues that the Copyright Office’s affirmation of the human authorship requirement “defies the constitutional goals from which Congress was empowered to create copyright, namely, the creation and dissemination of creative works.”

This is wrongly stated, but the attempt to undermine the human authorship doctrine is, of course, consistent with Big Tech’s ideological view that individual human agency is an outdated nuisance—a bug to program around in pursuit of a grand, tech-utopian dream. Or to put it another way, the scorn for human authorship is in harmony with Mark Zuckerberg recently proclaiming that the future of companionship is one in which we have more robot friends than human ones.

Long after the dust settles on the legality of AI model training with protected works, fundamental questions of social justice in a world with generative AI will need to be addressed. In addition to many examples in which these products are already causing social harm—most acutely adverse psychological effects among children and teens—generative AI can potentially swallow, or perhaps smother, economic opportunities for diversity of expression, perhaps even accelerating the current trend of government censorship.

In that regard, I find it astounding that the copyright skeptics in academia, generally aligned with the political left, promoted democratization by portraying copyright as a tool of censorship rather than as a mode of empowerment for authors. While the free market is not a perfect answer to all challenges, the spike in sales of Art Spiegelman’s Maus after it was banned in 2022, or even the market’s response forcing the restoration of Jimmy Kimmel’s show, are, in my view, examples of why the speech right and copyright more often act in concert as a force for democratic principles.

Notably, the IP skeptics have inveighed against strong copyright rights by arguing social justice principles, as if, for instance, the right of access without copyright’s boundaries is the moral equivalent of the right to read campaign now confronting real censorship. Moreover, social justice for the artist is often omitted by that school’s overstating a purely utilitarian foundation for copyright. Not only is that perspective belied by history, but it seems to me that for an IP regime to encompass social justice values, some natural rights principles must apply.

In fact, in this light, I think it is noteworthy that rather than pursue a federal publicity right in response to AI’s potential to replicate anyone’s likeness, the NO FAKES Act currently before the U.S. Congress borrows principles from trademark, copyright, and right of publicity to create a novel IP right in one’s voice and likeness. Perhaps this moves the U.S. one step closer to some of the moral rights principles that animate copyright law in other countries.

It is no surprise that the tech industry so aggressively attacked intellectual property rights by selling the chimera of “democratization.” IP rights, at their best, foster an expansive and diverse world of competing ideas, whereas Big Tech’s interests—and the interests of authoritarians—are best served by organizing people into bunkers of competing realities. This epistemic crisis, I firmly believe, explains the wanton destruction of so many democratic institutions. And with generative AI, of course, it is easy to see how mass automation of synthetic material, posing as creative and informative works, is likely to exacerbate this problem.

Democratization is a beguiling term that no longer describes movement toward democratic forms. It exploits the language of democracy to mask an ideological contempt for democratic institutions and individual agency. It is a centrifugal force driving people, communities, and nations apart—a path to social, economic, and political anarchy, where bullies win and justice does not exist. Consequently, I would ask those in IP academia to be vigilant about the distinction between democratization and democracy and to push back on the rhetoric of the former in the hope that we can still rescue the latter.

Finding Fair Use for GAI Training is Highly Problematic


Although I have expressed aspects of these views in several posts over the past couple of years, I will try to consolidate my opinion as to why GAI training with protected creative works is a more problematic fair use consideration than many, even the courts, seem to believe. I acknowledge that even fellow copyright advocates will disagree with some of this analysis, but here goes:

For the sake of narrowing the focus to the question of whether training generative AI (GAI) with protected works favors a fair use exception, the following assumes that the training requires unlicensed copying of protected expression. Further, even if the GAI maker limits the product’s capacity to output infringing copies, this does not alter the fact that considering fair use for this purpose is, at best, troubling and, at worst, so disturbing to case law that the AI developers are begging the courts to articulate doctrine out of whole cloth.

A GAI’s Purpose is Not Analogous to Past Fair Use Factor One Findings

The courts have largely rejected the overbroad opinion that making “something new” is a sufficient justification for unlicensed use of protected works. Thus, it is difficult to see where any court finds an authority to support the argument that making a “creator robot,” however revolutionary its developers proclaim it to be, is a transformative purpose under a factor one analysis.

Typically, a GAI’s purpose neither expresses “critical bearing” on the works used (AWF v. Goldsmith) nor provides information about the works to human readers (Authors Guild v. Google) nor fosters interoperability in computer devices (Google v. Oracle). Instead, a GAI’s most widely applied and widely promoted purpose is artificial “authorship” without authors—a purpose which forecasts myriad negative effects that may prove to dramatically overwhelm any benefits promised by the developers.

Naturally, certain GAIs (e.g., ChatGPT) can be used for various purposes, about which more below, but if the courts are distracted by the sheer novelty, scope, and hype around the “importance” of GAI and, therefore, presume transformativeness, they may be persuaded to articulate a rationale that would be tantamount to a blanket exception for GAI training. If a court adopts this carve-out in the context of fair use factor one, the result would be a reversal of the Supreme Court’s recent rejection, in Warhol, of the broad “something new” argument for transformativeness.

Notably, it is not unprecedented for the court to articulate rationales beyond the four-factor analysis. In the Google Books case, the court found that the search tool provides a “social benefit,” and a similar sentiment was articulated in Google v. Oracle regarding consumer benefit in advancing mobile products. Or looking back at the Betamax VCR case, the concept of “time shifting” the viewing schedule served the public interest by expanding flexibility in the consumption of copyrighted material that was lawfully obtained.

But if the courts look for a rationale beyond the case law (e.g., a clear social benefit of GAI), not only will they be making a wild guess, but any conclusion in favor of the developers will probably be wrong—perhaps dangerously so. While it is understandable that the courts may be reluctant to hobble technological development in principle, the available facts militate against disturbing fair use jurisprudence for the sake of nurturing GAI in general.

Put differently, if the courts are going to take a wait-and-see approach, there is ample evidence that GAIs already cause harm to individuals—from CSAM and defamation to cheating and psychological issues—to say nothing of the well-founded anxieties—social, political, economic, and environmental—associated with this multi-trillion-dollar gamble being played by the same people who unrepentantly accrued wealth and power from the darkest results of Web 2.0.

GAI as a Tool for Creators

To the extent that a given GAI product may be considered a tool for producing creative works, a fair use holding should at least find that the tool “promotes the progress” of authorship with respect to copyright’s purpose. But this is difficult because the same GAI in the hands of one skilled creator offers little insight about its ultimate purpose in the hands of 100-million unskilled users.

At the positive end of considering GAI’s purpose, my friend David Bolinsky, a medical illustrator and animator, recently made a series of eight dozen topically and stylistically distinct ten-second animations introducing speakers and segment topics for a scientific conference, a daunting assignment. GAI collapsed well over a year of work (if using his standard 3D animation tools) into a matter of weeks. He was surprised at the breadth and depth of creative latitude GAI enabled. Further, he explained that although these presentations allowed more creativity than his typical discrete medical and scientific educational animations, an amateur lacking his experience still could not have used the same GAI tools to achieve the same results. Consequently, Bolinsky sees GAI as an opportunity to do more and different kinds of work and not as a threat to his creativity or livelihood.

In this example, the technology is socially beneficial and arguably “promoting the progress” of authorship, which may favor a finding that the tool is transformative. That said, due to the human authorship requirement, we are years away from guidance as to the degree of copyright protection on those animations; and if GAI tools are used to produce millions of works that have no “authors” as a matter of law, it is contradictory to find that this “promotes progress” in regard to copyright’s purpose.

Further, the difficulty for the court in considering fair use is that Bolinsky and his colleagues who specialize in medical work are unique among professional creators, to say nothing of the many millions of non-creator customers that GAI developers need—because they are leveraged into the stratosphere—to make their products profitable. This scale implies an analysis reminiscent of Sony—i.e., a question of whether the purpose of the GAI is substantially beneficial or substantially harmful. But knowing that requires time travel.

If a court could see a few years into the future and find, for instance, that the GAI at issue will be used substantially for nonconsensual pornography, disinformation, and scams, it would presumably decline to find these purposes are social benefits that favor an expansive transformativeness finding. Instead, at the moment, the courts simply have no idea what the true “purposes” are of various GAIs, which is unprecedented in fair use jurisprudence. The VTR, Google Books, Android phones, et al. did not serve materially different purposes years after they were presented to the courts in their respective cases. By contrast, GAIs present an incomplete and dynamic set of facts; and in my view, this alone should militate against finding that factor one favors any of these products.

The Threat to Authorship Itself

As stated in other posts and in comments to the Copyright Office, one unique challenge of GAI is that it poses a potential threat to authorship (i.e., that it will shrink the number of creative workers), which is clearly destructive to the progress clause and copyright law. Although my own view is that a party who poses an existential threat to copyright’s purpose should not be allowed to invoke one of copyright law’s affirmative defenses, I recognize the difficulty in that opinion.

Under U.S. law, copyright protects authors indirectly by protecting certain exclusive rights to use their works. Consequently, there is little foundation for arguing generalized harm to authorship itself, despite the overwhelming recognition that diversity in authorship has benefitted the United States both culturally and economically for almost two centuries. In this context, GAI provokes the question as to whether U.S. policy might shift toward a “moral rights” approach akin to Europe, but that’s a discussion for a different post.

Instead, the general threat to authorship is considered, to an extent, under fair use factor four, which weighs the potential threat to the market value of the works used. The key difficulty, however, is that if the GAI does not output the song “Ordinary” but instead outputs music in the style of Alex Warren, then the output is not, strictly speaking, a threat to the market value of “Ordinary” itself. While proposals like the NO FAKES Act would prohibit unauthorized replication of Warren’s voice, copyright law does not clearly prevent a GAI that makes Warren-like music that could theoretically obviate the need for Warren himself.[1]

For now, several plaintiffs in the roughly 40 active lawsuits against GAI developers have presented evidence of outputs that are substantially similar to the works used in training, and this should disfavor fair use for the GAI developers under factor four. More broadly, plaintiffs in these cases argue that licensing works for the purpose of AI training is itself a market opportunity exclusive to the copyright owner, and therefore, the failure to license constitutes market harm under factor four.

Some courts may be reluctant to agree with the lost licensing opportunity claim, but that reluctance is unfounded—even if a developer successfully prevents its product from outputting copies of works used in training. So long as one of the exclusive copyright rights is implicated (and here, it would be the reproduction right), then a requirement to license exists. Consequently, failure to license, especially at such an extraordinary scale for an unprecedented commercial venture, is unquestionably market harm to the copyright owner.

Even where there may be a close call on factor four, because the GAI developer should lose on factor one, and because factors two and three decidedly favor creator plaintiffs, factor four should not reasonably control in many of these cases. Moreover, the courts should pay scant attention to the claim by developers that the cost of licensing is existentially prohibitive to the development of GAI. In addition to the fact that this plea is barely tolerable from parties wildly spending billions on high-risk ventures, any claim that a license is “too costly” for any venture is no defense under copyright law. The copyright owner sets the terms for the use of her work, and the prospective user can accept those terms or not before using the work. If that rule applies to the bootstrapping indie filmmaker, surely it applies to Microsoft, Meta, Google, et al.

Conclusion

Fair use is a mixed question of fact and law, and I maintain that what should be most fatal to the developers’ fair use defense is that, like the public, the courts have insufficient facts about the ultimate purpose of GAI products. Just as with Web 2.0 in the late 1990s, we are witnessing unfounded political sentiment to once again let Big Tech do what it wants, preaching to the public that this time, the technology really will “solve the world’s problems.”

Of course, there is no rational basis for that belief beyond the self-interest of the developers and the investors losing billions every year. If past is prologue, Congress will live to regret the folly of allowing AI to run amok, just as Members of both parties now rue the unconditioned immunity of Section 230. In the meantime, while licensing copyrighted works for GAI training will not address all, or most, of the potential hazards of artificial intelligence, the courts should decline to adopt strained fair use rationales in the name of assumed progress that may turn out to be a complete disaster.


[1] I believe there are cultural reasons that militate against this result, but those predictions do not influence the fair use consideration.