Big Tech Tells Trump Admin that Copyright is a Barrier to AI Development


Last week, in response to the Executive Order referred to as the “AI Action Plan,” various stakeholders submitted comments to the Office of Science and Technology Policy (OSTP). OpenAI, for its part, submitted one of the finest examples of tech-bro bombast we have seen in some time. Not even Google’s comments, which name copyright, privacy, and patents as barriers to AI development, come close to OpenAI for serving up so much high-octane, tech-utopian gibberish, including this gem in the preamble:

As our CEO Sam Altman has written, we are at the doorstep of the next leap in prosperity: the Intelligence Age. But we must ensure that people have freedom of intelligence, by which we mean the freedom to access and benefit from AGI, protected from both autocratic powers that would take people’s freedoms away, and layers of laws and bureaucracy that would prevent our realizing them.

Fewer than half of all Americans trust either the current administration or Big Tech when it comes to “freedoms” or “intelligence,” but does anyone believe that AI development inexorably leads to the kind of prosperity OpenAI projects in its comments? Like most technologies, AI can be used for good or evil. In theory, it can be used to diagnose and treat disease, but in practice, it could be used to “solve” disease by more efficiently automating denial of treatment. It can be used to enhance or improve productive work, but it might be used to shed jobs across multiple sectors without considering the implications of doing so.

“Innovation” is a meaningless word until it is defined by the values and principles of the innovators and/or the government with which the industry partners. In OpenAI’s effort to distinguish American AI development from that of the People’s Republic of China (PRC), it recommends, at least in its comments on copyright, that we should emulate the anti-democratic, piratical conduct of this adversary. It even goes so far as to allege without foundation that machine learning (ML) with unlicensed copyrighted works is a matter of national security.

Under the heading “Freedom to Learn,” OpenAI’s comments about copyright—especially the emphasis on fair use doctrine—are incoherent to the point that one wonders whom the company is addressing. But before speculating about that question, here are a few quotes with responses:

American copyright law, including the longstanding fair use doctrine, protects the transformative uses of existing works, ensuring that innovators have a balanced and predictable framework for experimentation and entrepreneurship.

The judge-made fair use doctrine applies a four-factor test, of which one part of the first factor considers whether a “transformative use” has been made of a protected work. There is no direct precedent applicable to mass copying of creative works for the purpose of ML to build artificial intelligence, which is why about thirty active lawsuits present this novel question to the courts. Further, because fair use is a case-by-case, affirmative defense to a claim of infringement, it defies the “predictable framework” for which OpenAI claims to be asking.

This approach has underpinned American success through earlier phases of technological progress and is even more critical to continued American leadership on AI in the wake of recent events in the PRC.

This says, “American innovation is great, but the Chinese kicked our asses with DeepSeek, and we’re grumpy about it.” Kudos to OpenAI for playing to the audience, but it is incoherent as a statement about the fair use defense “underpinning American success.” The core copyright industries account for an estimated 7.66% of U.S. GDP, and this proven prosperity should not be radically disturbed for the sake of undefined “innovation,” some of which will inevitably flop.

As for history, American copyright law has typically adapted to technological change by ensuring the protection of authors’ rights from the exigencies of technology developers. In the best cases, this fosters a symbiotic relationship between new technology and creators, but that is not what OpenAI advocates here. Instead, OpenAI says, “American creators be damned. AI is too important to worry about their rights.”

OpenAI’s models are trained to not replicate works for consumption by the public. Instead, they learn from the works and extract patterns, linguistic structures, and contextual insights. This means our AI model training aligns with the core objectives of copyright and the fair use doctrine, using existing works to create something wholly new and different without eroding the commercial value of those existing works.

This attempt to litigate questions of fact and law in comments to the OSTP is as contradictory as it is misplaced. First, it asserts that OpenAI’s ML process does not violate any copyright rights and is, therefore, non-infringing. But that assertion conflicts with the inapt argument that their ML is exempted under factors one and four of the fair use test. Where there is no basis for a claim of infringement, there is no rationale for arguing a fair use defense.

Applying the fair use doctrine to AI is not only a matter of American competitiveness—it’s a matter of national security. Given concerted state support for critical industries and infrastructure projects, there’s little doubt that the PRC’s AI developers will enjoy unfettered access to data—including copyrighted data—that will improve their models. If the PRC’s developers have unfettered access to data and American companies are left without fair use access, the race for AI is effectively over.

Here, OpenAI argues that American policy should emulate the PRC by disregarding the rights of creators, thereby disqualifying any claim by Altman & Co. to promote democratic values. Further, OpenAI not only invents the term “fair use access” but then erroneously implies that U.S. national security operations need the “freedom to learn” from unlicensed creative works in order to do their jobs.

For Whose Eyes?

The combination of misstatements and emphasis on fair use prompts the question as to what policy OpenAI hopes to achieve. If OpenAI et al. want a statutory exception for ML, any rational petition to Congress for that change to the Copyright Act would not address fair use or suggest amendment to that part of the statute. Instead, we must assume that this message is aimed at the courts, which will decide whether and to what extent ML is exempted by fair use, including in cases where OpenAI is a defendant.

Presumably, one hope is to say the words “national security” enough times that a) some party in the administration echoes this talking point; and/or b) the courts feel reluctant to rule against AI developers on copyright infringement claims. In either case, AI is not one product. Development of security-related products or AI agents for the intelligence community does not rely upon the development of those generative AI models that are built substantially on ingesting millions of creative works without license for the purpose of producing artificial “creative” works.

More broadly, it is a tad rich to say that copyright rights are a barrier in the AI arms race while DOGE is assigned to hack its way through educational funding and shed experts in nearly every field. If America loses to China in this contest, it will most likely be attributable to our national retreat from excellence and fostering a culture where people refuse to see the difference between a Ford F-150 and a plastic piece of shit. If that’s the kind of public/private environment in which Americans are going to develop AI, don’t blame the artists and their copyright rights when it fails.


Photo by pylypchukinnastock358

David Newhoff
David is an author, communications professional, and copyright advocate. After more than 20 years providing creative services and consulting in corporate communications, he shifted his attention to law and policy, beginning with advocacy of copyright and the value of creative professionals to America’s economy, core principles, and culture.
