Chamber of Progress Says Tariffs Are an Excuse to Infringe Copyrights


Politico reported yesterday that the astroturf organization called Chamber of Progress stated that because Trump’s tariffs will be a “gut punch” to Silicon Valley stock prices, California legislators should decline to aggravate matters by passing a law that would require transparency among AI developers using copyrighted works in model training. Granted, the tone was more circumspect, but that’s what the argument boils down to:  Tariffs are going to screw our stock values, so we need to screw creators to offset the harm.

According to Chamber of Progress economist Kaitlyn Harger, the cost of compliance with AB 412, sponsored by Assembly Member Rebecca Bauer-Kahan, would cause a dip in stock values that “…could carve $381 million out of California’s tax haul from the four tech giants, all key players in the generative AI boom,” Politico reports.

I won’t comment on the numbers, especially because they are speculative, but I will note the amount of SOP fluff being used to package this argument against the transparency bill. Adam Eisgrau, senior director of AI, creativity, and copyright policy at Chamber of Progress, states that grounding this anti-AB 412 argument in the tariff controversy is “not opportunistic,” when of course it is. He states, “It is fair to call tariffs a tax, and I think it’s fair to call this bill an innovation tax.”

Kudos for dinging tariffs and taxes and promoting innovation in one sentence, but Eisgrau is parroting a longstanding practice of Silicon Valley: calling any price it would pay for necessary materials a “tax” on progress. While compliance with AB 412’s transparency provisions would naturally cost the tech giants something, why is that cost, let alone the effect of tariffs, a basis for ignoring the creators whose works are being mined for AI training?

Assuming tariffs will hit every sector and increase prices across multiple supply chains, that universal condition is not a rationale for tech giants getting a supply of copyrighted works for free. The creators who make those works aren’t getting their supplies for free—and most creators barely make a living wage if they’re lucky. Meanwhile, if the California Assembly is looking broadly at the state’s economy in this North v. South narrative, even a cursory review of the numbers shows that motion picture production supports more jobs than the tech giants.

“Bauer-Kahan’s proposal has the backing of Hollywood labor groups,” Politico states, “including the powerful actors’ guild SAG-AFTRA and the National Association of Voice Actors. But it’s been side-eyed by tech industry critics who say it would upend fair-use protections and turn AI training into a lawsuit in waiting.”

This “upend fair use” claim, whether it comes from Eisgrau or any other tech representative, is a standard parlor trick of that industry. First, they advocate a broad, generalized application of fair use (a doctrine that defies generalization) and then claim that any counterargument to their position would “upend” some established standard. This is simply false.

AI training with protected works presents a novel set of facts to be weighed in the context of fair use case law, and, thus, a finding that training is not fair use would not “upend” precedent. On the other hand, the rhetoric used by Big Tech in this regard asks for a “fair use” application so sweeping that it would be tantamount to a statutory carve-out for all machine learning now or in the future. That is asking to upend fair use.

The consensus appears to be that Trump’s tariff tactics can only sow chaos and drive up the cost of living for all Americans—including, by the way, creators of works protected by copyright. But despite the prospect of universal economic pain, the Chamber of Progress asks California lawmakers to shield a few of the wealthiest corporations on Earth from the rights and financial interests of the creators whose works those companies are exploiting. Wow.



Copyright and AI in a World of Whiplash Public Policy


I have not added a copyright post here since March 19, when the DC Circuit Court of Appeals affirmed in Thaler v. Perlmutter that works produced autonomously by generative AI (GAI) are not protected under U.S. copyright law. Although it is good to see the human authorship doctrine in copyright left undisturbed, it is a fleeting moment of sanity within a warped national reality.

As reported earlier, Open AI appealed to the administration’s focus on China as a basis to argue that “beating China” requires ignoring the copyright claims of authors whose works are used to train AI models. Not only is that claim wrong on its face, but the conduct of the current administration vis-à-vis civil rights forces millions of Americans to ask whether China is an adversary or a role model.

One mirror in the funhouse reveals a compelling bipartisan hearing held by the Senate Judiciary Committee, Subcommittee on Crime and Counterterrorism, where Chairman Hawley and colleagues from both parties offered strong endorsements for the courageous testimony of Facebook whistleblower Sarah Wynn-Williams. Focused primarily on Meta’s engagements with the Chinese Communist Party (CCP)—and Zuckerberg’s lying to Congress about that very issue—the committee cited other abuses described in Wynn-Williams’s book, like the company intentionally targeting vulnerable teens. (More about the book Careless People in another post.)

Ordinarily, I compartmentalize copyright matters from other criticisms of Big Tech, but here, the stories overlap, even if Meta is the only target of the committee’s investigation at this time. First, throughout her testimony, Wynn-Williams repeats the theme that Meta used the “but China will win” argument to oppose Congress taking any meaningful regulatory action. This alone should cast doubt upon Open AI et al. making the same argument as a rationale for mass copyright infringement for model training. As Senator Klobuchar noted, there was no basis for prior claims that enforcing various consumer safeguards (e.g., Kids Online Safety Act) would be counter-productive to national security, and in that light, Congress should decline to believe the same story in regard to copyright infringement.

Meta may be unique—or uniquely situated—as a clandestine partner to the CCP, but it is also notable that the committee mentioned the role of Meta’s Llama AI and heard Wynn-Williams’s testimony that the product was used by the CCP for “AI weapons” and for the development of the Chinese LLM DeepSeek. Further, Wynn-Williams offers a theory about the open source versus closed model AI competition in the marketplace. “There’s a lot of money on the line,” she says. “In some ways you could say, if you want open source to prevail, it helps to have a strong threat from a Chinese model so you can say that it’s really important that America wins, and we’re the American open-source option. And I think you can see the way that strategically plays out.”

“But China will win” is pretty much what Open AI told the Office of Science and Technology Policy in its letter arguing that machine training with copyrighted works is per se fair use. But looking at Meta (which is currently being sued in the Kadrey case), consider the perspective:  in developing Llama, not only did Meta scrape the literary works of millions of authors and journalists, and not only did it source pirate libraries for that purpose, but it also deployed that same AI power in the interests of a nation that brutally kills freedom of expression. Yes, of course, I’m thinking the same thing because it’s unavoidable. The current U.S. administration has engaged in multiple First Amendment and other constitutional violations, including assaults on the free press, and thus, the policy whiplash.

Couple these optics with the volume of evidence that the real power behind the destruction of the administrative state is a small group of tech billionaires pushing an anti-democracy ideology called the neo-reactionary movement (NRx), and the idea of advocating creators’ rights seems all but futile. After all, is it remotely sane to think that an administration of semi-literate, 1A-infringing book banners will care about the rights of authors—let alone reject the tech-bros who wrote the destruction manual for the United States?

Setting aside the copyright questions raised by GAI training, Big Tech’s wanton harvest of artistic and intellectual works as lifeless raw material is perhaps the ultimate expression of the cyberlibertarian’s disdain for human beings as mere repositories of data to be exploited and manipulated. The rhetoric of Big Tech ideology—from 4Chan to the halls of academia—is the authoritarian principle that individuals must be sacrificed for the sake of the collective. All rights are a nuisance to the tech oligarch, and authors are the last people any authoritarian wants to empower.

Open AI’s claim that mass copyright infringement is necessary to “beat China” is paradoxical—either willfully or naively blind to the fact that when we treat works of authorship as mere fodder for the machine, we don’t beat the CCP; we emulate it. Further, not only is the claim overstated that GAI development is a matter of national security, but again, what does “national security” even mean at present? Concepts like American interests, values, innovation, global security, etc. are all diminished, if not wholly swallowed, by the reckless destruction of the principles and institutions that distinguish America as a leader among democratic nations. And copyright rights are in those same crosshairs.

In response to copyright’s critics, especially those in academia with Big Tech funding their work, I have argued that the diversity and scope of America’s creative output has been essential to its strength as a democracy. Whether one looks at the economic value of the core copyright industries, the cultural value of diverse creative expression, or both, the rationale for intellectual property is to incentivize useful innovation and legitimate greatness.

American authors—from historians to rockstars—are the legacy of an aspiration expressed by Noah Webster, the father of American English and of American copyright. In 1783, advocating the first state copyright law in Connecticut, Webster argued that “America must be as independent in literature as she is in politics—as famous for arts as for arms.” By contrast, the “greatness” proclaimed by Trump is tautological and brittle, just as Big Tech’s claims to “innovation” are often vague and misleading.

As proposed in my book, the inclusion of copyright in Article I was one of the more egalitarian and democratic choices made by the founders, even if they did not wholly grasp its potential. At the most basic level, copyright incentivizes creative expression by any citizen anywhere, and the American model largely fulfilled that traditional Republican principle that the market, not the government, decides what is successful.

The copyright questions presented in roughly 40 cases are difficult and novel. Moreover, the facts presented vary, and thus, the outcomes will vary, especially on questions of fair use. In the meantime, it is clear that at least some of the major AI developers are engaged in a campaign to appeal to the current administration to treat copyright rights much as it is treating other constitutional rights—as principles to trample in a march toward something very un-American.

Too Big to Care: Should Online Platforms Remain Unconditionally Immunized by Section 230?


In the current political climate, it is important to clarify that no sensible Section 230 reformer proposes abolishing the statute or endorses threats to revoke the law on the basis of inapt and inaccurate allegations of “content bias.” Section 230 is not a content neutrality law, and statements to the contrary are political theater.


Whether online platforms are too big to care is both a cultural and a legal question. Regarding the latter, “care” refers to a duty of care as applied in common law torts. When I opined on Bluesky that the “Good Samaritan” principle articulated in Section 230 of the Communications Decency Act (CDA) implies that interactive computer service providers (ICSPs) owe a duty of care to those who use their services, I drew a flurry of both civil and not-so-civil critiques about my lack of tort law knowledge. But I am grateful for those exchanges because a crash course in torts offers a practical context for considering §230 reform and that tedious, rhetorical question—What can site operators do to mitigate abuse of their platforms by users?

ICSPs can do a lot. The capacity of the major platforms to micro-target individuals with “information” and alter the course of world events is a matter of record, but even platforms smaller than Meta and Google can accomplish more than they claim while doing less than they could to mitigate harm stemming from well-known abuses of their services. The example I will use in this post, the dating site Grindr, has a market cap of about $3bn, and if it can’t do what I describe below, it’s not because it can’t afford it. More plausibly, it is because the unconditional immunity of §230 does not incentivize good-faith practices as the law intended.

Section 230’s Purpose was to Encourage Harm Reduction

Section 230 was written to address a difficulty first recognized by former Representative Christopher Cox in response to a pair of lawsuits against ICSPs in the mid-90s.[1] The problem Cox noted was that if “editorial control” of user-posted content imputes “publisher liability” to the hosting ICSP, this would disincentivize all platform moderation. With Congress particularly focused on pornography and defamation, Cox and Sen. Ron Wyden drafted §230 as part of the CDA in 1996.

The title of §230 is “Protection for private blocking of offensive material,” and the law contains two operative parts under the subtitle “Protection for ‘Good Samaritan’ blocking and screening of offensive material.” The first operative part states that an ICSP is not the “publisher” of communications posted by other parties. The second operative part states that good faith efforts to block or restrict access to “offensive”[2] material shall not result in the ICSP being treated as a “publisher” and thereby impose liability upon the platform for communications posted by other parties.

The words blocking and screening clearly indicate the actions Congress intended to encourage with §230,[3] but in the nearly three decades since the law was enacted, the courts have interpreted §230 immunity to apply unconditionally—i.e., regardless of whether the platform owner makes a good-faith effort to block or screen harmful communications. As Dr. Mary Anne Franks stated in her 2024 testimony before the House Committee on Energy and Commerce:

Put simply, a law cannot incentivize the rendering of aid if that law is interpreted to confer the same benefit upon those who render aid and those who do not. Interpreting Section 230 to shield online intermediaries from liability even when they are indifferent to or benefit from harm actively undermines Good Samaritan behavior and flouts the policy decision made by Congress.

The Tort of Negligence and Property (Premises) Owners

Common law torts, a subject taught to first-year law students (some of whom never return to it in their careers), address various civil complaints with which we non-attorneys are generally familiar. Based on my crash review of the subject, torts come in three main flavors—intentional harm, negligence, and product defects. Here, I will mainly focus on negligence in considering the contours of ICSP liability, on the theory that we should finally stop treating web platforms as undefinable realms “beyond the weary laws of man.”[4]

From the perspective of societal interest, ICSPs are private properties that should not be exempted from tort law principles simply because they happen to be virtual properties. This analogy to physical premises was the idea that prompted this post, but it turns out that Kassandra C. Cabrera, a former law student of Dr. Franks’s, wrote a paper in 2021 advocating a premises liability theory for ICSPs. She writes, “Historically, courts were disinclined to impose a duty on landowners to protect against third-party conduct because of difficulty in finding proximate causation. However, the duty under the theory of premises liability extends to landowners when the potential harm from third-party conduct is reasonably foreseeable.” [emphasis added]

As an example of what is reasonably foreseeable, we are all acquainted with the unremarkable fact that glass containers will inevitably fall off grocery store shelves and create slipping hazards. Under the tort of negligence, the factors that apply to the store owner’s duty of care are as follows:  first, we visitors to the property are called “invitees,” a term of art which means that our presence provides a benefit (typically commercial benefit) to the property owner; and second, our “invitee” status imposes a high duty of care on the property owner, which includes a duty to “inspect” the property and a duty to “make the property safe.” Mere warning of a hazard is not sufficient; expeditious steps must be taken to remedy the hazard.[5]

A property owner may or may not be exempted from his duty of care based on the conduct of invitees. For instance, if a toddler knocks a jar off a shelf, the store owner still has the duty to inspect and make the aisle safe. Alternatively, if a violent customer throws a jar at another customer, the store is unlikely to be liable for the assault, but it still has a duty to act depending on the circumstances and certainly to remedy any hazards resulting from the altercation. Under no circumstance in this context will the owner be able to say, “We’re just a grocery store. How people misuse it is not our concern.” Yet, in general, this is how ICSPs are permitted to operate thanks to §230 immunity.

The duty of care for a grocery store manager does not require special skills or knowledge, and so the owner is held to the “reasonable person” standard of care. Relatedly, the duty to inspect will be venue specific and reasonable. For instance, the owner of a pick-your-own farm has a duty to examine the property from time to time for hazardous fallen branches or debris that visitors might leave behind, but if some lunatic buries a landmine on a path, the farmer cannot be expected to anticipate such an outlandish abuse of his property.

In this regard, the reasonable expectation that an owner can foresee and inspect for certain hazards will vary based on both the nature of the property and the relevant expertise of the owner. A hospital, for instance, is held to a high duty of care due to both the nature of the property and the expertise of those who manage the property. Arguably, an ICSP, which is developed and managed with expertise in computer science, could be held to a standard somewhere between the average “reasonable person” and the medical professional. But this is a question for a court that can only be addressed when a complaint is allowed to proceed past the §230 immunity shield.

Virtual Properties Are Properties for Purposes of Liability

In premises liability, courts take into account factors that could have prevented the injury–for example, the policies in place at the premises and the characteristics of the perpetrator of the crime or intentional tort. Similarly, these factors should be used in the proximate cause analysis in the online context. – Cabrera

Consider the virtual property of the dating/hook-up site Grindr and one invitee’s abuse of the platform that placed a former invitee in grave danger. Specifically, Juan Gutierez created multiple spoof accounts impersonating his ex-boyfriend, Matthew Herrick, and used those spoofs to induce men to violently assault Herrick. “[Gutierez] used the app to arrange sex dates with over a thousand men at [Herrick’s] home and workplace, many of which were under the impression that Matthew wanted to role play rape fantasies,” writes Herrick’s attorney Carrie Goldberg.

Herrick sued and lost on the basis of product liability, but here, it is reasonable to consider the same facts while thinking of Grindr as private property—not because Herrick would have overcome the current interpretation of §230 immunity in his claim—but because premises liability provides a rationale and operational process for the kind of proactive conduct §230 was intended to incentivize. Consequently, it serves as a guide for statutory reform.

The typical PR/legal defense of Grindr (and by extension all ICSPs) is that the site owner is not responsible for the actions of Gutierez any more than the store owner would be for the customer throwing jars or the farmer would be for a psycho planting a landmine. This is fair to a point but also a misdirection because Grindr has both reason and ability to “inspect” its property for the kind of spoof accounts that were weaponized against Herrick. Further, the duty of care should have been elevated once Herrick informed Grindr of the circumstances, but Grindr’s lack of incentive to “inspect” in the first place, and its decision not to provide aid in the second, were both supported by its §230 immunity.

What ICSPs Can Do

Any claim that Grindr, or other ICSPs, cannot anticipate commonly known hazards should be viewed skeptically. Spoofing is so common that it is as foreseeable for the platform owner as falling jars are for the grocery store owner. Further, spoofing is just one example of platform abuse for which an owner could “inspect,” especially with advancements in agentic AI. Peter DeMeo, chief product officer of the confidential computing company Phoenix Technologies AG in Switzerland, describes a few ways in which AI agents could be used to combat spoofing:

AI agents can detect spoofing accounts on social media by analyzing multiple factors, starting with profile metadata and activity history, then checking for incomplete profiles, generic names, and/or inconsistencies in user information. Suspicious accounts often have recent creation dates and sudden bursts of activity, which can indicate malicious behavior. By tracking IP addresses and device fingerprints, AI can detect multiple accounts operating from the same source, revealing possible coordinated manipulation. AI can also cross-check posts against user bios to spot inconsistencies. By leveraging network analysis, AI can map relationships between accounts to detect suspicious clusters of activity.

DeMeo notes that AI agents can become attack vectors themselves, which is why his company focuses on providing secure environments for the development of agentic AI. But he also notes that anti-spoofing agents can be a “specialized industry,” meaning that imposing a duty of care to “inspect” virtual properties is, like most challenges, a business opportunity.
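To make the heuristics DeMeo lists concrete, here is a minimal sketch of a rule-based spoof-account scorer. Everything in it is hypothetical: the field names, thresholds, and weights are illustrative assumptions, not any real platform’s schema or a production detection system, which would rely on trained models and far richer signals.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from collections import defaultdict

# Hypothetical account record; field names are illustrative, not a real platform schema.
@dataclass
class Account:
    account_id: str
    display_name: str
    created_at: datetime
    bio: str
    posts_last_24h: int
    ip_address: str
    device_fingerprint: str

def spoof_risk_score(acct: Account, fingerprint_counts: dict, now: datetime) -> float:
    """Combine a few of DeMeo's signals into a crude 0-to-1 risk score."""
    score = 0.0
    # Recent creation date: throwaway spoof accounts tend to be new.
    if now - acct.created_at < timedelta(days=7):
        score += 0.3
    # Incomplete profile (e.g., empty bio) is another weak signal.
    if not acct.bio.strip():
        score += 0.2
    # A sudden burst of activity raises suspicion.
    if acct.posts_last_24h > 50:
        score += 0.3
    # Many accounts sharing one device fingerprint suggests coordination.
    if fingerprint_counts.get(acct.device_fingerprint, 0) > 3:
        score += 0.2
    return min(score, 1.0)

def flag_suspicious(accounts, now, threshold=0.5):
    """Return the IDs of accounts whose combined risk score crosses the threshold."""
    counts = defaultdict(int)
    for a in accounts:
        counts[a.device_fingerprint] += 1
    return [a.account_id for a in accounts
            if spoof_risk_score(a, counts, now) >= threshold]
```

A real agentic system would add the cross-checks DeMeo mentions (posts against bios, IP clustering, network analysis), but even this toy version shows why “we can’t possibly inspect” is a weak claim: the signals are cheap to compute at scale.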

One argument against the spoofing example from defenders of §230’s status quo might be that Grindr could inspect for 10 spoofs of one Matthew Herrick but could not inspect for any spoofs of one out of a million Joe Smiths. But at least three responses should rebut this and similar generalizations that the volume of data for a given platform is too vast to “police.”

First, DeMeo’s examples (which are not exhaustive) indicate that an AI agent can analyze a lot more than two data points like first and last name. Second, sensible §230 reformers, including Dr. Franks and her colleague Professor Danielle Citron, recommend that immunity be conditioned on “reasonable content moderation practices,” not on 100% perfect results. Third, §230 immunity itself perpetuates the opaque management of ICSPs by dismissing even meritorious complaints before they reach the discovery phase.

Sites as Properties vs. Sites as Products

I focus on negligence in the context of private property because it seems to be the most applicable and reasonable way to think about platforms where our visitation provides the essential benefit that makes platforms worth billions. Further, the duties to “inspect” and “make safe” strike me as more generalized (i.e., more likely to serve the greater societal interest) than product liability, which can be limited by those voluminous, complicated terms of service nobody reads—and which are subject to change electronically. On the other hand, product liability extends to harm done to parties who are not “invitees,” as indeed Herrick was not a user of Grindr at the time Gutierez abused the site to induce assault.

In Herrick’s case, some of his product liability claims were found to fail on the merits, regardless of §230, but with specific reference to the spoofs used by Gutierez, the opinion of the Second Circuit states:

Herrick alleges that Grindr is defectively designed and manufactured because it lacks safety features to prevent impersonating profiles and other dangerous conduct, and that Grindr wrongfully failed to remove the impersonating profiles created by his ex boyfriend… Those claims are based on information provided by another information content provider and therefore satisfy the second element of § 230 immunity…. It follows that the manufacturing and design defect claims seek to hold Grindr liable for its failure to combat or remove offensive third-party content, and are barred by §230.

Defenders of the §230 status quo may argue that the court got it right in Herrick, which only strengthens the case for statutory reform based on a sensible review of the contemporary internet in contrast to 1996. As Professor Olivier Sylvain writes in his paper Intermediary Design Duties:

Today, the largest online companies do not merely host and relay messages, uninterested in what their users say or do. They use behavioral and content data to engineer online experiences in ways that are unrelated to the charming interest in making connections. Some of the most successful companies, moreover, collect, analyze, sort, and repackage user data for publication in ancillary and secondary markets. This is how the CDA immunity doctrine, first developed by the courts two decades ago, is ill-suited to the world today.

What the opinion in Herrick exemplifies is that no principles of tort liability pass through the §230 shield when “Information [is] provided by another information content provider.” That is the crux of the §230 problem and our dysfunctional relationship with ICSPs in general. We have been conditioned to think of all communications as “information,” or worse, as “protected speech” even when the communication is arguably conduct intended to cause or induce physical, emotional, or economic harm—from doxxing women who speak their minds to exes seeking revenge to inciting political violence, etc.

Indeed, this is why Franks and Citron propose striking the over-broad word information in the statute and replacing it with the legally definable word speech. “The revision would put all parties in a Section 230 case on notice that the classification of content as speech is not a given, but a fact to be demonstrated,” their proposal states. It is challenging enough for, say, a victim of image-based sexual abuse (IBSA) to present a claim against an ICSP at all, but this one-word change in the statute would foster a more level playing field between the individual plaintiff and corporate defendant.

Statutory Revision Should Restore the Incentive to Care

If we agree generally that an ICSP should have a duty of care, experts and Congress can decide whether the general obligation is properly construed under product or property liability—or some combination of the two. But whichever path is most reasonable, ICSP owners should no longer be allowed to exploit ambiguous definitions of their platforms to perpetuate the harmful results of unconditional immunity. Further, sound §230 reform must acknowledge the most extreme cases in which ICSP owners deliberately foster or profit from harmful communications and should, therefore, be subjects of intentional tort liability claims.

Some defenders of the §230 status quo will argue that suing the Grindrs of the world for damages won’t help solve the problem for the Herricks of the world, but this is unfounded. First, one purpose of tort law is to foster better conduct, especially among commercial enterprises. It is an imperfect remedy for various reasons, but if it is wholly ineffective at altering ICSP conduct, why do they fight so hard to maintain the status quo? The answer, I believe, lies in the second, and more important, point—that §230 has barred case after case from proceeding to the discovery phase.

Even where a plaintiff may not ultimately prevail and receive damages, discovery in a lawsuit is often how the public learns whether an operation’s practices are fair, safe, lawful, or even consistent with their own warranties. “Even when a plaintiff’s case fails on the merits, judicial engagement with the details of her claim helps to frame her suffering as a legible subject of public attention and governance,” writes Professor Douglas A. Kysar. [6]

For example, when whistleblower Frances Haugen testified in 2021 that Meta makes decisions based on profit over safety, the devil’s details underlying that exemplary statement might only be revealed by adjudicating a reasonable claim brought by a plaintiff directly harmed by Meta’s decisions. Instead, Section 230 short-circuits this process, providing unconditioned immunity that is not only uniquely tailored to one industry, but bizarrely applies to the owners of virtual properties where many citizens live half their very real lives.

Despite the length of this post, there are several aspects of Section 230 not discussed, including responses to various arguments for maintaining its status quo and discussion of specific cases that would likely have been adjudicated if not for the shield. Nevertheless, various hearings in Congress have signaled bipartisan fatigue with the status quo, especially where harm to children is involved. Whether that sentiment can be harnessed into reasonable reform is a fair question—especially in the current climate—but as Carrie Goldberg reiterated in one hearing, plaintiffs, even if they might ultimately lose, are constitutionally entitled to their day in court.


[1] Cubby, Inc. v. CompuServe, Inc. and Stratton Oakmont, Inc. v. Prodigy Servs. Co.

[2] The statute states “… to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

[3] “The original purpose of this law was to help clean up the Internet, not to facilitate people doing bad things on the Internet.” NPR citing Cox.

[4] A dig at John Perry Barlow’s Declaration of the Independence of Cyberspace.

[5] A lower standard duty of care is owed to “trespassers” or “licensees” who may visit a property, though Cabrera notes that courts have often collapsed the distinction between “invitees” and “licensees.”

[6] Franks, The Free Speech Industry, pp 70-71.