Chamber of Progress Says Tariffs Are an Excuse to Infringe Copyrights


Politico reported yesterday that the astroturf organization called Chamber of Progress stated that because Trump’s tariffs will be a “gut punch” to Silicon Valley stock prices, California legislators should decline to aggravate matters by passing a law that would require transparency among AI developers using copyrighted works in model training. Granted, the tone was more circumspect, but that’s what the argument boils down to:  Tariffs are going to screw our stock values, so we need to screw creators to offset the harm.

According to Chamber of Progress economist Kaitlyn Harger, the cost of compliance with AB 412, sponsored by Assembly Member Rebecca Bauer-Kahan, would cause a dip in stock values that “…could carve $381 million out of California’s tax haul from the four tech giants, all key players in the generative AI boom,” Politico reports.

I won’t comment on the numbers, especially because they are speculative, but I will note the amount of SOP fluff being used to package this argument against the transparency bill. Adam Eisgrau, senior director of AI, creativity, and copyright policy at Chamber of Progress, states that grounding this anti-AB 412 argument in the tariff controversy is “not opportunistic,” when of course it is. He adds, “It is fair to call tariffs a tax, and I think it’s fair to call this bill an innovation tax.”

Kudos for dinging tariffs and taxes and promoting innovation in one sentence, but Eisgrau is parroting a longstanding practice of Silicon Valley, calling any price it would pay for necessary materials a “tax” on progress. While compliance with AB 412’s transparency provisions would naturally cost the tech giants something, why is that cost, let alone the effect of tariffs, a basis for ignoring the creators whose works are being mined for AI training?

Assuming tariffs will hit every sector and increase prices across multiple supply chains, that universal condition is not a rationale for tech giants getting a supply of copyrighted works for free. The creators who make those works aren’t getting their supplies for free—and most creators barely make a living wage if they’re lucky. Meanwhile, if the California Assembly is looking broadly at the state’s economy in this North v. South narrative, even a cursory review of the numbers shows that motion picture production supports more jobs than the tech giants.

“Bauer-Kahan’s proposal has the backing of Hollywood labor groups,” Politico states, “including the powerful actors’ guild SAG-AFTRA and the National Association of Voice Actors. But it’s been side-eyed by tech industry critics who say it would upend fair-use protections and turn AI training into a lawsuit in waiting.”

This “upend fair use” claim, whether it comes from Eisgrau or any other tech representative, is a standard parlor trick of that industry. First, they advocate a broad, generalized application of fair use (a doctrine that defies generalization) and then claim that any counterargument to their position would “upend” some standard that has been established. This is simply false.

AI training with protected works presents a novel set of facts to be weighed in the context of fair use case law, and, thus, a finding that training is not fair use would not “upend” precedent. On the other hand, the rhetoric used by Big Tech in this regard asks for a “fair use” application so sweeping that it would be tantamount to a statutory carve-out for all machine learning now or in the future. That is asking to upend fair use.

The consensus appears to be that Trump’s tariff tactics can only sow chaos and drive up the cost of living for all Americans—including, by the way, creators of works protected by copyright. But despite the prospect of universal economic pain, the Chamber of Progress asks California lawmakers to shield a few of the wealthiest corporations on Earth from the rights and financial interests of the creators whose works those companies are exploiting. Wow.


Photo by Beebright

Chamber of Progress: Old Rationales for a Brave New World


The Chamber of Progress launched an initiative called the “Generate and Create” campaign to “defend fair use” and “promote AI creativity.” I don’t know whether they bought this campaign used from the basement of Fight for the Future or the Electronic Frontier Foundation, but the following statement is worn-out rhetoric that sounds even weaker defending AI as a mode of production than it was defending online platforms as a mode of distribution:

To combat the growing legal and policy copyright threats against generative artificial intelligence, Chamber of Progress announced a new campaign, Generate & Create, highlighting the creative benefits of generative artificial intelligence and supporting established fair use protections for AI training and output.

The pro-creator message is a remix of a remix of Lessig’s “remix culture” argument against online copyright enforcement—a narrative which begat the “we’re all creators” argument against copyright rights. Instead of YouTube enabling creators to break free of “gatekeepers,” it is now AI enabling the same emancipation, though as discussed in this post, it’s hard to fathom who the “gatekeepers” are this time.

Meanwhile, the promise to “defend fair use” is code for “we’re funded by Big Tech to tilt at windmills while we lose legal arguments.” One does not “defend fair use” the way one defends a right to read banned books or a right to reproductive healthcare in the same states that like to ban books. There is no legislative agenda to abolish or amend Section 107 of the Copyright Act. Fair use is a balancing test courts apply in certain copyright infringement cases, and on the headline question as to whether machine learning (ML) with copyrighted works is exempted by fair use, there is nothing “established” about that answer despite CoP’s implication to the contrary.

Although fair use cannot be applied generally (i.e., it is a case-by-case consideration), it is true that all the copyright infringement claims against the various AI developers arise from the same general conduct and, therefore, invite similar or identical fair use defenses. Cutting to the final chapter, if OpenAI loses to the New York Times and Udio loses to UMG et al. in the Second Circuit, those outcomes are likely to be controlling on the fair use question of ML. Even if any of these cases reaches the Supreme Court, a reversal of an opinion out of the Second Circuit—so prolific in fair use case law—is a bet I wouldn’t make.

Nevertheless, the argument will be presented, and it goes something like this:  Gen AI breeds new creative works, in part by breaking down “barriers” for would-be creators, and because this productivity is consistent with the purpose of copyright, ML serves a transformative purpose and is, therefore, fair use. Notwithstanding the fact that a defendant can win on the transformative question and still lose on fair use overall, I suspect the AI developers may find their very expensive machines described by the courts’ precedent language as “slightly transformative.”

But AI is revolutionary! you might say. How can it be only “slightly transformative?” Answer:  for the same reason the Internet Archive’s Open Library is “slightly transformative”—because its purpose was to serve as a substitute for licensed ebooks. What is different about GAI, of course, is that it is generally a substitute as a mode of production more than as a mode of distribution, and to complicate matters, some professional creators are using AI tools and deriving benefits from those uses. So, if that sounds like the answer is “it depends,” welcome to the fact-intensive nature of the fair use defense, which cannot be broadly “defended” in the sense the CoP proclaims.

CoP et al. will promote the argument that because GAI fosters the production of more “creative works,” this predicted increase in output fulfills the purpose of copyright law. But the reason I put “creative works” in quotes is that for every 100 sound recordings to come out of an AI product like Udio, somewhere between zero and some unknown percentage of those sounds will be “creative works” as a matter of law. Copyright only protects human authorship of creative expression, and that doctrine will not—and should not—change. Meanwhile, the question as to what the human creator must do in collaboration with GAI for the human to claim copyright in the resulting work is an evolving doctrine—one that is several years, and several lawsuits, away from becoming guidance.

With a product like Udio or Suno, where the business model depends on consumers generating music with a few simple prompts, it is fair to assume that the vast majority of the music produced will not be “creative expression” as a matter of law. And because “creative works” that are not protected by copyright (i.e., are not human authorship) cannot reasonably be held to serve the purpose of copyright, fair use should be foreclosed as a defense of the generative machine.

In response, we will see CoP and defendants argue that because the product is already being used by professional creators, products like Udio or Suno serve both copyright and non-copyright purposes. While plausible, this defense is where I believe the courts may find GAI’s purpose to be only “slightly transformative.” This is because the dominant purpose—indeed the only ROI available to the developers—is one that primarily does not fulfill the purpose of copyright and which, in fact, serves as a substitute for works that do serve the purpose of copyright.

Further, the consideration of GAI as a tool for creators in furtherance of copyright’s purpose runs headlong into the nascent doctrine as to how and how much use of GAI results in a protectable work. That question is a case-by-case consideration at a granular level. One musician’s use of Udio may produce a copyrightable composition and/or sound recording, while another’s use of the same product in a slightly different manner may have the opposite result. Considering the uncertainty of these hypotheticals to come, it is hard to imagine how the courts could find today that the product at issue favors a finding of transformativeness strong enough to carry the whole fair use analysis.

Chamber of Progress et al. will flood social media with anecdotal arguments, like disabled persons empowered to create thanks to GAI, or the whimsical notion that “machines learn the way people do.” These and other rationales for GAI’s value deserve specific responses, some of which I shall write. But in general, I predict these stories, like Lessig’s “children of YouTube,” will play well with some segment of the blogosphere but then, as legal arguments, will join the pile of similar fair use defenses lying on the floors of the federal courts.



The Campaign to Defend Generative AI


I have not written steadily about AI and copyright because, frankly, it’s exhausting. Not quite as exhausting as watching the state of the Republic overall, but almost as relentlessly incoherent and repetitive. For instance, Winston Cho for the Hollywood Reporter describes a PR and lobbying campaign by the tech coalition Chamber of Progress to defend the importance of generative AI (GAI). The article quotes founder and CEO Adam Kovacevich thus:  “Gen AI is a net plus for creativity overall. It’s expanding access to creative tools for more and more people and bypassing a lot of the traditional gatekeepers.”

That GAI may yield some beneficial tools for creators is plausible, but the whole “access” and “gatekeepers” rhetoric is a misguided anachronism from a group calling itself the Chamber of Progress. Perhaps “Confederacy of Tech Overlords” was too on the nose, but the generalized argument that GAI represents a “democratic” shift away from gatekeepers stands on the rubble of experiments that have already failed. I doubt there is a professional creator left who hasn’t figured out that Big Tech’s promise to liberate them from traditional gatekeepers is like a human trafficker promising his next victim a job in a foreign country. Whatever was imperfect about the old models, the new models are more exploitative and hazardous for the average creator.

More precisely, while the alleged “liberation” from older distribution channels might have seemed attractive, GAI is about production, and I am confused as to who the “gatekeepers” would be on the production side of the equation. To the extent, say, Midjourney might enable me to illustrate or paint without any drafting or painting skills, the “gatekeeper” is who exactly? Nature failing to gift me with those skills? Or if we think big, and I can make a whole motion picture without ever turning on a camera, I still fail to see who the “gatekeeper” is in the overreaching promise from the tech industry.

Despite how cutting-edge and “essential” GAI is supposed to be, Big Tech has nothing fresh to say in its advocacy. The theme of “democratization” is the same weather-beaten argument they’ve been flogging for years, one that has proven disastrous for information and the state of real democracy—and which GAI can only make worse. Nevertheless, the Chamber of Progress campaign, as reported by Cho, seeks to promote a sweeping policy that AI developers should be broadly shielded from liability, including copyright infringement claims.

The question of copyright infringement for ingesting works for machine learning (ML) is currently at the heart of several lawsuits. I’ve lost track of them all, but arguably the most solid claim to date is New York Times v. OpenAI et al. because the evidence of copying (i.e., that what went into the model came out of the model) is so compelling. On the other hand, it is worth watching those cases where “reproduction” is less evident and, therefore, where the question may be more thoroughly addressed as to whether ML is a purpose that favors fair use of protected works.

As we have seen in defense of social platforms, Big Tech will spray the blogosphere with the term “fair use,” and copyright antagonists (mainly in academia) will echo the broad claim that of course ML is fair use. Notwithstanding the bugaboo that the fair use doctrine rejects the notion of a general exemption, I would argue that the case law points the other way, including the Supreme Court decision in Andy Warhol Foundation v. Lynn Goldsmith. To the limited extent that opinion addresses the ML question at all, its reining in of the “transformativeness” test is more likely to disfavor the AI developers. Big Tech’s claim is that GAI is broadly “transformative” as a technological accomplishment, but Warhol and other decisions reject such a sweeping interpretation of that aspect of fair use factor one.

Further, as argued in this post, I remain unconvinced that GAI necessarily advances the purpose of copyright to promote new authorship as a matter of doctrine. For instance, if a given work created by GAI cannot be protected by copyright, then the material is, by definition, not a work of “authorship.” That failure of purpose should doom a fair use defense, in my view. Regardless, Big Tech will not be satisfied with the outcomes of any lawsuits, even if the developers win some. What they want is blanket immunity from infringement liability and an affirmation that GAI is truly as important as they say it is. That’s why this paragraph in the Hollywood Reporter story caught my attention:

In comments to the Copyright Office, which has been exploring questions surrounding the intersection of intellectual property and AI, Chamber of Progress argued that Section 230 – Big Tech’s favorite legal shield – should be expanded to immunize AI companies from some infringement claims.

Why highlight that? Because the absence of legal foundation is telling. Not only does Title 47 Section 230 have nothing to do with copyright infringement, but both that law and its copyright cousin, Title 17 Section 512, address the subject of users uploading material to platforms. Neither law says anything about scraping the web to feed material into an AI model for the purpose of ML. Nevertheless, it is clear from reading the actual comments by Chamber of Progress to the Copyright Office that Big Tech recommends policymakers take lessons from both statutes to carve out new liability shields to support the advancement of AI.

Despite the fact that neither §512 nor §230 has proven effective in limiting copyright infringement or dangerously harmful material online, the Chamber of Progress comments reprise Big Tech’s unfounded talking points regarding both statutes. Written by counsel Jess Miers, the comments repeat the false allegation that §512 fosters rampant, erroneous takedowns and also argue that because of §230, “most UGC services go to great lengths to proactively clean-up awful content and provide a safe and trustworthy environment for their users.” Not only will my friends and colleagues fighting Image-Based Sexual Abuse, online hate, and scams be very surprised to learn that, but so will Congress.

One of the scant points of agreement on Capitol Hill these days is that lawmakers have grown weary of liability shields for Big Tech, which has done a poor job of mitigating the worst harms facilitated by their platforms. Section 230 is so ripe for amendment that I’m surprised the Chamber of Progress invoked it, let alone in comments to the Copyright Office, which only deals with, y’know, copyright law. More broadly, though, when GAI implies myriad harms beyond copyright infringement, the last thing Congress should do is grant Big Tech more latitude to do whatever it wants in the name of “progress.”  We tried that approach. It sucks.