Big Tech Tells Trump Admin that Copyright is a Barrier to AI Development


Last week, in response to the Executive Order referred to as the “AI Action Plan,” various stakeholders submitted comments to the Office of Science and Technology Policy (OSTP). OpenAI, for its part, submitted one of the finest examples of tech-bro bombast we have seen in some time. Not even Google’s comments, which name copyright, privacy, and patents as barriers to AI development, come close to OpenAI for serving up so much high-octane, tech-utopian gibberish, including this gem in the preamble:

As our CEO Sam Altman has written, we are at the doorstep of the next leap in prosperity: the Intelligence Age. But we must ensure that people have freedom of intelligence, by which we mean the freedom to access and benefit from AGI, protected from both autocratic powers that would take people’s freedoms away, and layers of laws and bureaucracy that would prevent our realizing them.

Fewer than half of all Americans trust either the current administration or Big Tech when it comes to “freedoms” or “intelligence,” but does anyone believe that AI development inexorably leads to the kind of prosperity OpenAI projects in its comments? Like most technologies, AI can be used for good or evil. In theory, it can be used to diagnose and treat disease, but in practice, it could be used to “solve” disease by more efficiently automating denial of treatment. It can be used to enhance or improve productive work, but it might be used to shed jobs across multiple sectors without considering the implications of doing so.

“Innovation” is a meaningless word until it is defined by the values and principles of the innovators and/or the government with which the industry partners. In OpenAI’s effort to distinguish American AI development from that of the People’s Republic of China (PRC), it recommends, at least in its comments on copyright, that we should emulate the anti-democratic, piratical conduct of this adversary. It even goes so far as to allege without foundation that machine learning (ML) with unlicensed copyrighted works is a matter of national security.

Under the heading “Freedom to Learn,” OpenAI’s comments about copyright—especially the emphasis on fair use doctrine—are incoherent to the point that one wonders whom the company is addressing. But before speculating about that question, here are a few quotes with responses:

American copyright law, including the longstanding fair use doctrine, protects the transformative uses of existing works, ensuring that innovators have a balanced and predictable framework for experimentation and entrepreneurship.

The judge-made fair use doctrine applies a four-factor test, one part of the first factor being whether a “transformative use” has been made of a protected work. There is no direct precedent applicable to the mass copying of creative works for the purpose of ML to build artificial intelligence, which is why roughly thirty active lawsuits now present this novel question to the courts. Further, because fair use is a case-by-case, affirmative defense to a claim of infringement, it defies the very “predictable framework” for which OpenAI claims to be asking.

This approach has underpinned American success through earlier phases of technological progress and is even more critical to continued American leadership on AI in the wake of recent events in the PRC.

This says, “American innovation is great, but the Chinese kicked our asses with DeepSeek, and we’re grumpy about it.” Kudos to OpenAI for playing to the audience, but it is incoherent as a statement about the fair use defense “underpinning American success.” The core copyright industries account for an estimated 7.66% of U.S. GDP, and this proven prosperity should not be radically disturbed for the sake of undefined “innovation,” some of which will inevitably flop.

As for history, American copyright law has typically adapted to technological change by ensuring the protection of authors’ rights from the exigencies of technology developers. In the best cases, this fosters a symbiotic relationship between new technology and creators, but that is not what OpenAI advocates here. Instead, OpenAI says, “American creators be damned. AI is too important to worry about their rights.”

OpenAI’s models are trained to not replicate works for consumption by the public. Instead, they learn from the works and extract patterns, linguistic structures, and contextual insights. This means our AI model training aligns with the core objectives of copyright and the fair use doctrine, using existing works to create something wholly new and different without eroding the commercial value of those existing works.

This attempt to litigate questions of fact and law in comments to the OSTP is as contradictory as it is misplaced. First, it asserts that OpenAI’s ML process does not violate any copyright rights and is, therefore, non-infringing. But that assertion conflicts with the inapt argument that its ML is exempted under factors one and four of the fair use test. Where there is no basis for a claim of infringement, there is no rationale for arguing a fair use defense.

Applying the fair use doctrine to AI is not only a matter of American competitiveness—it’s a matter of national security. Given concerted state support for critical industries and infrastructure projects, there’s little doubt that the PRC’s AI developers will enjoy unfettered access to data—including copyrighted data—that will improve their models. If the PRC’s developers have unfettered access to data and American companies are left without fair use access, the race for AI is effectively over.

Here, OpenAI argues that American policy should emulate the PRC by disregarding the rights of creators, thereby disqualifying any claim by Altman & Co. to promote democratic values. Further, OpenAI not only invents the term “fair use access” but then erroneously implies that U.S. national security operations need the “freedom to learn” from unlicensed creative works in order to do their jobs.

For Whose Eyes?

The combination of misstatements and emphasis on fair use prompts the question as to what policy OpenAI hopes to achieve. If OpenAI et al. want a statutory exception for ML, any rational petition to Congress for that change to the Copyright Act would not address fair use or suggest amendment to that part of the statute. Instead, we must assume that this message is aimed at the courts, which will decide whether and to what extent ML is exempted by fair use, including in cases where OpenAI is a defendant.

Presumably, one hope is to say the words “national security” enough times that 1) some party in the administration echoes this talking point; and/or 2) the courts feel reluctant to rule against AI developers on copyright infringement claims. In either case, AI is not one product. Development of security-related products or AI agents for the intelligence community does not rely upon the development of those generative AI models that are built substantially on ingesting millions of creative works without license for the purpose of producing artificial “creative” works.

More broadly, it is a tad rich to say that copyright rights are a barrier in the AI arms race while DOGE is assigned to hack its way through educational funding and shed experts in nearly every field. If America loses to China in this contest, it will most likely be attributable to our national retreat from excellence and fostering a culture where people refuse to see the difference between a Ford F-150 and a plastic piece of shit. If that’s the kind of public/private environment in which Americans are going to develop AI, don’t blame the artists and their copyright rights when it fails.


Are Creators Aligned on Artificial Intelligence?


One of many challenges with the adoption of generative AI (GAI) tools is whether creators are willing to demonstrate a degree of solidarity on the matter—i.e., to apply the principle we generally call fair trade. If Creator A uses a GAI tool that might be harmful to Creator B in a different field, and so on, will most creators take this broader perspective in a group effort to demand ethical uses of GAI? Moreover, this question becomes intertwined with copyright because the use of GAI is a subject of evolving legal doctrine, meaning that creators who want to produce commercial content outside their core talents should be aware that the material produced may not be protectable under the law.

Two simple examples would be the self-published book author who might use an AI voice app to produce an audiobook, and the documentary filmmaker who might use an AI music generator to produce a soundtrack for a film. In both examples, creators in other fields—voice actors and composers respectively—are potentially harmed by the development and use of these AI tools. But 1) will the author and filmmaker take that consideration into account; and 2) will the sound recordings in either case be protected by copyright?

In the case of the author using AI in lieu of hiring a narrator to produce the audiobook, I predict that under current doctrine, the sound recording would not be protected by copyright law because there is no human performance captured in that recording. Thus, remedies for any piracy of the audiobook would rely solely on the protection of the underlying literary work, which is effective—but if the sound recording is also protected and registered, that would be two works infringed instead of one.

This increases the potential damages for infringement, which puts the author/owner in a stronger position if she needs to take legal action. By this example, authors’ interests may be seen as aligned with those of professional book narrators. Hiring a narrator will not only achieve better quality in the reading, but capturing the human performance is also a basis for copyright attaching to the sound recording.

Similar considerations would apply to the filmmaker with the GAI soundtrack, although there may be other factors that provide the AI music with some protection we don’t find with the AI audiobook. One factor that may become relevant is whether the filmmaker can show that he exerted sufficient creative control over the final sounds. If so, he may be able to defend a claim of copyright in the soundtrack, but we are likely several years and a few lawsuits away from clear guidance on this question.

Another consideration with the soundtrack may be the Copyright Office’s current view that material using assistive AI “within a larger work” is protected. Creators should be careful about interpreting that broad language because constituent works that stand alone—and this would apply to a soundtrack for a film—would logically not be independently protected.

Of course, there are many GAI products that allow one type of creator to avoid hiring another type of creator for a given project. Some of this is inevitable, and it is not necessarily unethical or bad for creative culture. That said, even with ethically trained and ethically used AI tools, the copyright considerations should be weighed by the individual creator (i.e., do they care about protecting what might not be protectable?), but also collectively by all creators contributing to a new ecosystem.

Since 1978, the default in the U.S. has been automatic copyright protection, even if most rights are never enforced. But as GAI is used to produce a great deal of material that is not protected, it is hard to predict what effect this might have on copyright overall. The human authorship principle, which is even older than the automatic protection established by the 1976 Act, fosters a new tension for creators who may wish to combine GAI and human-authored work. As a response to that tension, it would be a mistake in my view to overwrite the “human spark” doctrine and simply protect any material that “walks and talks” like a creative work. This isn’t just an emotional appeal to anthropocentrism but rather a conviction that copyright would become meaningless—even unconstitutional—if the incentive rationale for its existence were eroded.

Regardless of the theoretical questions addressed in this post, I believe that as a practical matter, creators should think carefully about how and when to use GAI for various projects. As an ethical consideration, perhaps if you’re opposed to “scraping” in your industry, then opposing it in others is the right view to take. But as a business consideration, if what you’re making is meant to have commercial value, AI-generated might mean not protected by copyright—and that means even if you spend money and time on it, it isn’t yours.

The Copyright War Was Never Just About Copyright


The so-called “copyright war” began years before I joined the fight, arguably in 1999, when defenders of the P2P platform Napster equated music piracy with liberty. Thus, rather than foster a rational discussion about the interdependence of creators and technology, Big Tech cultivated a syncretic foundation from which to sell the paradox that devaluing individual rights was somehow good for democracy. After all, it was easy to promote the view that copyright only mattered to wealthy rockstars and giant corporations while eliding the subtle but significant fact that it is a constitutional right of every citizen.

The “copyright war” peaked in the public eye in 2011/12 when the tech-funded campaign to defeat the anti-piracy bills SOPA and PIPA raged across the social platforms, convincing even professional creators that free speech and the “internet as we know it” were doomed if those bills became law. Outlandish lies about the perils of that legislation—mostly crafted and promoted by left-leaning organizations, journalists, and academics—masked the cyberlibertarian philosophy of tech’s most influential figures. Because for many leading tech-bros, it was never just about copyright, but rather, their barely disguised contempt for rights in general and that outdated political model we call the Republic. And what better way to hide an anti-democratic agenda than in plain sight with populist slogans like democratization?

Timothy Snyder, in a recent article making the point that destruction of the American state is the only agenda of tech oligarchs, writes: “The logic of ‘move fast and break things,’ like the logic of all coups, is to gain quick dramatic successes that deter and demoralize and create the impression of inevitability. Nothing is inevitable.” Citing Facebook’s old motto in the context of events in Washington since the start of Trump 2.0 is spot on. Yes, for many tech companies the assault on rights like copyrights was purely about siphoning wealth from creators, but for the biggest egos in the room—like the guys on stage at the inauguration—it’s about usurping power.

When I first jumped into the copyright fight in 2011, I was lectured by Mike Masnick and others that I simply didn’t understand the economic concept of creative destruction. My friends and I were accused of “clinging to buggy whips in a world of automobiles,” failing to see how new technologies opened up new opportunities for artists, even as they closed “outdated” modes of production and distribution. But in the same way those messages distorted the math—omitting evidence of destruction without creation—the “digital rights” crowd ignored or endorsed the fact that what they were really promoting was an ideological, anti-democratic agenda.

The idea that an independent photographer or songwriter might dare to remedy widespread, unlicensed uses of their works online was not just financially anathema to platform owners’ interests, it was philosophically repugnant that the “unthinking demos,” to quote Peter Thiel, should have any say whatsoever about the geniuses building our new utopia. This is the real spirit behind the “don’t stifle innovation” talking point—promoting that alleged inevitability cited by Snyder, now being used to push unfocused development of artificial intelligence as a mandate without public oversight.

From a broad perspective, the ideological assault on copyright was a powerful framework for teaching citizens to disregard the rights and dignity of other citizens through the anonymizing medium of digital technology—a “concealing paint emancipating us into savagery,” to borrow from William Goldman. Getting permission to use a creator’s work was scorned by the same rationales that Redditors applied to sharing stolen nudes of celebrities or which tech-evangelists still use to justify all manner of toxic content under the general view that it’s all just speech.

Parallel to Big Tech’s attack on the very idea of permission and respect for individual rights, the major platforms arrogated the role of oversight—and everyone fell for the trick. It was illusory oversight, of course, but both message and perception were that Facebook, Twitter, et al. offered better transparency (i.e., more truth) than any journalist or civil servant ever could. As more news and politics populated social media, the word “sunlight” was often repeated as a talismanic code, which meant that no government agent, no journalist, no expert could be trusted because the “real truth” lies somewhere in the morass of alternate realities hosted on the web.

The assault on copyright was often described by my friends and colleagues as a “canary in a coal mine” because it was easy for most observers to compartmentalize the “war” as a minor skirmish that, at worst, might deprive already rich people of a few bucks. In truth, it was one battle in a broader war that has now manifested in lawless, incompetent, and violent individuals mucking about in the federal government—including one non-American tech oligarch getting his mitts all over our public affairs without any oversight. As shocking as these events may be, they’re not surprising. Big Tech said, “Disrupt everything.” They weren’t kidding.