“Innovation” Doesn’t Mean Anything


Two headlines in the first week of this month said a lot about the United States as an “innovative” nation right now. One story announced that the first driverless semi-trucks are on the highway covering normal long-haul routes, and the second reported that the final shipments of pre-tariff goods from China were arriving at U.S. ports. Leave it to contemporary America to dispatch a new fleet of robot trucks just in time for the cargo containers to be empty. On the other hand, I guess it works out in principle because the unemployed truck driver won’t have money to buy the goods that won’t be on the shelves.

According to the tech-utopians of about a decade ago, the displaced truck driver shouldn’t worry because he now lives in a world of abundance and can, at last, spend his days painting or writing poetry or making music with all the leisure time he now enjoys. Isn’t that what happened? Didn’t technology “innovate” that Keynesian promise of a social and economic golden age? Doesn’t look like it. In fact, we’ve even got machines to write poetry and make music, so the ex-truck driver will just have to pound sand.

Big Tech historically calls everything it does “innovation,” allowing scant room for critique of a product’s pros and cons while labeling any policy that might protect some injured parties “anti-innovation.” Even where harmful results are identified and become the subjects of congressional hearings, the product makers effectively sell these “unintended” hazards as a price that must be paid for “more innovation.” And, by the way, that promised “age of abundance” will start any day now, if we are just patient and keep feeding the beast more data.

The Coalition for a Safer Web can describe in grim detail how social media and other tech platforms have “innovated” teen suicide, scams, and drug trafficking. The recent proliferation of AI “companion” apps (virtual girlfriends and boyfriends) has “innovated” new concerns among child psychologists, and these apps may also “innovate” new vectors for malware attacks. And, of course, increasingly realistic AI deepfakes may further “innovate” our fleeting grasp on reality, which has been essential to “innovating” American democracy to the edge of extinction.

Sporting the word “innovation” as a cloak for all manner of sins, the tech industry contends that the materials used to build the next generation of AI products (i.e., the works of artists and creators) are so essential for even more “innovation” that copyrights must be disregarded. Elon Musk and Jack Dorsey have even opined that the U.S. should simply abandon intellectual property rights altogether, and the industry rhetoric appealing to the current administration claims that copyrights must not hamper the national interest in “winning” the competition to build the “best” AI.

The folly of declaring an intent to “win the AI war” without defining what success looks like is consistent with U.S. tech policy for decades and with policy affecting all sectors, public and private, today. To call Trump 2.0 incoherent is too kind, as that term can imply well-meaning error when, in fact, the administration is engaged in a purposeful, multi-pronged attack on science and the arts in direct conflict with the intent of the progress clause of the Constitution.

Article I, Section 8, Clause 8, giving Congress the power to “promote the progress of science and useful arts” by establishing copyright and patent laws, was an expression of the Framers’ hope that the fledgling, agrarian nation might one day create great cultural works and inventions. But of course, IP law alone can’t do that. Quite simply, without the I, you ain’t got no P—and I is under assault in the United States. Brain drain and chaos are now the hallmarks of every federal department from healthcare to defense, and in the private sector, Trump’s goons attack universities, the motion picture industry, publishers, authors, journalists, and scientists—literally anyone smarter than they are, which includes a lot of damn people.

“Innovation,” Copyright, and AI Training

Big Tech argues that all AI training with protected works should be exempted from infringement claims by the doctrine of fair use. Ordinarily, broad claims about fair use remain in the blogosphere while specific legal questions are weighed in court. But in regard to AI training, I worry that the general perception of the technology as “innovative” may result in overbroad application of “transformativeness” under factor one, which considers the purpose of a use.

For instance, Judge Chhabria, in last week’s hearing in Kadrey et al. v. Meta, stated that Meta’s Llama is “highly transformative,” which may signal an overbroad reading that synonymizes “transformative” with “innovative” while also eliding a thorough weighing of the extensive purposes for which the use is made. Or, in a nutshell, how can a court fully consider the purpose of a use when the technology at issue is dynamic and open-ended?

As noted in an earlier post, landmark fair use cases have involved technologies that were complete models as facts presented to the courts—e.g., the VCR and the Google Books search tool. The court did not need to wonder, for instance, whether the purpose of Google Books—i.e., to provide information about books—might also be used to build an AI “psychologist” that may harm patients seeking mental healthcare. In fact, as The Guardian reports on this very issue, Mark Zuckerberg advocates “innovating” psychotherapy with AI “providers,” thus adding doctor next to historian, journalist, and constitutional scholar to the list of qualifications he lacks as he proceeds to break all things.

In this context, and with the recognition that Meta’s commercial interests entail application of its AI tools across many, if not all, initiatives in the company, what exactly is the purpose of Llama as weighed in a factor one fair use consideration? I’m not convinced the court can really know.

Beyond the Four Factors

When Congress codified fair use in the 1976 Act, it sought to convey over a century of judge-made law as statutory guidance, but beyond the four-factor test, “courts may take other considerations into account,” writes Professor Jane Ginsburg in a paper about AI and fair use. Indeed, she cites to the Google Books case, in which the court states, “the use provides a significant benefit to the public.” But with a product like Llama, where a court has reason to predict substantial crossover between socially beneficial and socially toxic purposes, how can a judge reasonably decide whether the purpose is “highly transformative” when the facts themselves are so ephemeral?

It is one matter for a court to consider the “transformativeness” of an AI built for a clearly defined purpose as presented, but it seems another matter if the technology has myriad purposes, including ones that will manifest after a case has been resolved. Whether Midjourney’s purpose to enable the production of visual works makes fair use of visual works in its training may be a sufficiently narrow consideration, but by contrast, an LLM developed by Meta is arguably an open-ended project built for purposes as yet undefined.

After all, Meta began with a college student ranking sorority girls and is now a trillion-dollar company that has altered the course of human history—and many of its “innovations” have had destructive results. In this light, the courts should decline to find “transformativeness” in the same overbroad spirit in which the tech industry wields the term “innovation.” Because without a clear definition and coherent law and policy, “innovation” is how we end up with a truck with no driver carrying a load of nothing to nobody.


Photo by Snoopydog1955

David Newhoff
David is an author, communications professional, and copyright advocate. After more than 20 years providing creative services and consulting in corporate communications, he shifted his attention to law and policy, beginning with advocacy of copyright and the value of creative professionals to America’s economy, core principles, and culture.
