Too Big to Care: Should Online Platforms Remain Unconditionally Immunized by Section 230?


In the current political climate, it is important to clarify that no sensible Section 230 reformer proposes abolishing the statute or endorses threats to revoke the law on the basis of inapt and inaccurate allegations of “content bias.” Section 230 is not a content neutrality law, and statements to the contrary are political theater.


Whether online platforms are too big to care is both a cultural and a legal question. Regarding the latter, “care” refers to a duty of care as applied in common law torts. When I opined on Bluesky that the “Good Samaritan” principle articulated in Section 230 of the Communications Decency Act (CDA) implies that interactive computer service providers (ICSPs) owe a duty of care to those who use their services, this drew a flurry of both civil and not-so-civil critiques about my lack of tort law knowledge. But I am grateful for those exchanges because a crash course in torts offers a practical context for considering §230 reform and that tedious, rhetorical question—What can site operators do to mitigate abuse of their platforms by users?

ICSPs can do a lot. The capacity of the major platforms to micro-target individuals with “information” and alter the course of world events is a matter of record, but even platforms smaller than Meta and Google can accomplish more than they claim while doing less than they could to mitigate harm stemming from well-known abuses of their services. The example I will use in this post, the dating site Grindr, has a market cap of about $3bn, and if it can’t do what I describe below, it’s not because it can’t afford it. More plausibly, it is because the unconditional immunity of §230 does not incentivize good-faith practices as the law intended.

Section 230’s Purpose was to Encourage Harm Reduction

Section 230 was written to address a difficulty first recognized by then-Representative Christopher Cox in response to a pair of lawsuits against ICSPs in the mid-90s.[1] The problem Cox noted was that if “editorial control” of user-posted content imputes “publisher liability” to the hosting ICSP, this would disincentivize all platform moderation. With Congress particularly focused on pornography and defamation, Cox and Sen. Ron Wyden drafted §230 as part of the CDA in 1996.

The title of §230 is “Protection for private blocking of offensive material,” and the law contains two operative parts under the subtitle “Protection for ‘Good Samaritan’ blocking and screening of offensive material.” The first operative part states that an ICSP is not the “publisher” of communications posted by other parties. The second operative part states that good faith efforts to block or restrict access to “offensive”[2] material shall not result in the ICSP being treated as a “publisher” and thereby impose liability upon the platform for communications posted by other parties.

The words blocking and screening clearly indicate the actions Congress intended to encourage with §230,[3] but in the nearly three decades since the law was enacted, the courts have interpreted §230 immunity to apply unconditionally—i.e., regardless of whether the platform owner makes a good-faith effort to block or screen harmful communications. As Dr. Mary Anne Franks, in her 2024 testimony before the House Committee on Energy and Commerce, stated:

Put simply, a law cannot incentivize the rendering of aid if that law is interpreted to confer the same benefit upon those who render aid and those who do not. Interpreting Section 230 to shield online intermediaries from liability even when they are indifferent to or benefit from harm actively undermines Good Samaritan behavior and flouts the policy decision made by Congress.

The Tort of Negligence and Property (Premises) Owners

Common law torts, a subject taught to first-year law students (some of whom never return to it in their careers), address various civil complaints with which we non-attorneys are generally familiar. Based on my crash review of the subject, torts come in three main flavors: intentional harm, negligence, and product defects. Here, I will focus mainly on negligence for considering the contours of ICSP liability, on the theory that we should finally stop treating web platforms as undefinable realms “beyond the weary laws of man.”[4]

From the perspective of societal interest, ICSPs are private properties that should not be exempted from tort law principles simply because they happen to be virtual properties. This analogy to physical premises was the idea that prompted this post, but it turns out that Kassandra C. Cabrera, a former law student of Dr. Franks’s, wrote a paper in 2021 advocating a premises liability theory for ICSPs. She writes, “Historically, courts were disinclined to impose a duty on landowners to protect against third-party conduct because of difficulty in finding proximate causation. However, the duty under the theory of premises liability extends to landowners when the potential harm from third-party conduct is reasonably foreseeable.” [emphasis added]

As an example of what is reasonably foreseeable, we are all acquainted with the unremarkable fact that glass containers will inevitably fall off grocery store shelves and create slipping hazards. Under the tort of negligence, the factors that apply to the store owner’s duty of care are as follows:  first, we visitors to the property are called “invitees,” a term of art which means that our presence provides a benefit (typically commercial benefit) to the property owner; and second, our “invitee” status imposes a high duty of care on the property owner, which includes a duty to “inspect” the property and a duty to “make the property safe.” Mere warning of a hazard is not sufficient; expeditious steps must be taken to remedy the hazard.[5]

A property owner may or may not be exempted from his duty of care based on the conduct of invitees. For instance, if a toddler knocks a jar off a shelf, the store owner still has the duty to inspect and make the aisle safe. Alternatively, if a violent customer throws a jar at another customer, the store is unlikely to be liable for the assault, but it still has a duty to act depending on the circumstances and certainly to remedy any hazards resulting from the altercation. Under no circumstance in this context will the owner be able to say, “We’re just a grocery store. How people misuse it is not our concern.” Yet, in general, this is how ICSPs are permitted to operate thanks to §230 immunity.

The duty of care for a grocery store manager does not require special skills or knowledge, and so the owner is held to the “reasonable person” standard of care. Relatedly, the duty to inspect will be venue specific and reasonable. For instance, the owner of a pick-your-own farm has a duty to examine the property from time to time for hazardous fallen branches or debris that visitors might leave behind, but if some lunatic buries a landmine on a path, the farmer cannot be expected to anticipate such an outlandish abuse of his property.

In this regard, the reasonable expectation that an owner can foresee and inspect for certain hazards will vary based on both the nature of the property and the relevant expertise of the owner. A hospital, for instance, is held to a high standard of duty of care due to both the nature of the property and the expertise of those who manage the property. Arguably, an ICSP, which is developed and managed with expertise in computer science, could be held to a standard somewhere between the average “reasonable person” and the medical professional. But this is a question for a court that can only be addressed where a complaint is allowed to proceed past the §230 immunity shield.

Virtual Properties Are Properties for Purposes of Liability

In premises liability, courts take into account factors that could have prevented the injury–for example, the policies in place at the premises and the characteristics of the perpetrator of the crime or intentional tort. Similarly, these factors should be used in the proximate cause analysis in the online context. – Cabrera

Consider the virtual property of the dating/hook-up site Grindr and one invitee’s abuse of the platform that placed a former invitee in grave danger. Specifically, Juan Gutierez created multiple spoof accounts impersonating his ex-boyfriend, Matthew Herrick, and used those spoofs to induce men to violently assault Herrick. “[Gutierez] used the app to arrange sex dates with over a thousand men at [Herrick’s] home and workplace, many of which were under the impression that Matthew wanted to role play rape fantasies,” writes Herrick’s attorney Carrie Goldberg.

Herrick sued and lost on the basis of product liability, but here, it is reasonable to consider the same facts while thinking of Grindr as private property—not because Herrick would have overcome the current interpretation of §230 immunity in his claim—but because premises liability provides a rationale and operational process for the kind of proactive conduct §230 was intended to incentivize. Consequently, it serves as a guide for statutory reform.

The typical PR/legal defense of Grindr (and by extension all ICSPs) is that the site owner is not responsible for the actions of Gutierez any more than the store owner would be for the customer throwing jars or the farmer would be for a psycho planting a landmine. This is fair to a point but also a misdirection because Grindr has both reason and ability to “inspect” its property for the kind of spoof accounts that were weaponized against Herrick. Further, the duty of care should have been elevated once Herrick informed Grindr of the circumstances, but Grindr’s lack of incentive to “inspect” in the first place, and its decision not to provide aid in the second, were both supported by its §230 immunity.

What ICSPs Can Do

Any claim that Grindr, or other ICSPs, cannot anticipate commonly known hazards should be viewed skeptically. Spoofing is so common that it is as foreseeable for the platform owner as falling jars are for the grocery store owner. Further, spoofing is just one example of platform abuse for which an owner could “inspect,” especially with advancements in agentic AI. Peter DeMeo, chief product officer of the Swiss confidential computing company Phoenix Technologies AG, describes a few ways in which AI agents could be used to combat spoofing:

AI agents can detect spoofing accounts on social media by analyzing multiple factors, starting with profile metadata and activity history, then checking for incomplete profiles, generic names, and/or inconsistencies in user information. Suspicious accounts often have recent creation dates and sudden bursts of activity, which can indicate malicious behavior. By tracking IP addresses and device fingerprints, AI can detect multiple accounts operating from the same source, revealing possible coordinated manipulation. AI can also cross-check posts against user bios to spot inconsistencies. By leveraging network analysis, AI can map relationships between accounts to detect suspicious clusters of activity.

DeMeo notes that AI agents can become attack vectors themselves, which is why his company focuses on providing secure environments for the development of agentic AI. But he also notes that anti-spoofing agents can be a “specialized industry,” meaning that imposing a duty of care to “inspect” virtual properties is, like most challenges, a business opportunity.
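To make DeMeo’s description concrete, the signals he lists (recent creation dates, incomplete profiles, activity bursts, shared device fingerprints) can be combined in a simple heuristic. The sketch below is illustrative only: the field names, weights, and thresholds are invented for this example, not any platform’s actual system, and a production “agent” would be far more sophisticated.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta
from collections import defaultdict

@dataclass
class Account:
    handle: str
    created: datetime
    bio: str
    device_fingerprint: str  # hash of device/browser traits (hypothetical field)
    posts_last_24h: int

def spoof_risk_score(acct: Account, now: datetime) -> float:
    """Combine a few weak signals into a 0-to-1 risk score (weights are invented)."""
    score = 0.0
    if now - acct.created < timedelta(days=7):
        score += 0.3  # very recent creation date
    if not acct.bio.strip():
        score += 0.2  # empty or generic profile
    if acct.posts_last_24h > 50:
        score += 0.3  # sudden burst of activity
    return min(score, 1.0)

def shared_source_clusters(accounts: list) -> dict:
    """Group accounts by device fingerprint; two or more accounts operating
    from one device suggest coordinated spoofing worth human review."""
    groups = defaultdict(list)
    for a in accounts:
        groups[a.device_fingerprint].append(a.handle)
    return {fp: handles for fp, handles in groups.items() if len(handles) > 1}
```

A real system would feed scores and clusters like these into review queues or a trained classifier rather than acting on them automatically, but the point stands that none of this is exotic engineering.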

One argument against the spoofing example from defenders of §230’s status quo might be that Grindr could inspect for 10 spoofs of one Matthew Herrick but could not inspect for any spoofs of one out of a million Joe Smiths. But at least three responses should rebut this and similar generalizations that the volume of data for a given platform is too vast to “police.”

First, DeMeo’s examples (which are not exhaustive) indicate that an AI agent can analyze a lot more than two data points like first and last name. Second, sensible §230 reformers, including Dr. Franks and her colleague Professor Danielle Citron, recommend that immunity be conditioned on “reasonable content moderation practices,” not on 100% perfect results. Third, §230 immunity itself perpetuates the opaque management of ICSPs by dismissing even meritorious complaints before they reach the discovery phase.
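On the first response, a toy example shows why detection need not hinge on a two-field name match. Assuming a reported victim profile and a pool of candidate accounts (all names, fields, and thresholds here are hypothetical, and SHA-256 stands in for the perceptual image hashing a production system would actually use), combining name similarity with photo reuse flags plausible impersonators:

```python
import difflib
import hashlib

def name_similarity(a: str, b: str) -> float:
    """Normalized similarity between display names (1.0 = identical)."""
    return difflib.SequenceMatcher(None, a.lower().strip(), b.lower().strip()).ratio()

def photo_hash(image_bytes: bytes) -> str:
    # Exact-match stand-in for a perceptual hash; reused photos still collide.
    return hashlib.sha256(image_bytes).hexdigest()

def likely_impersonations(victim: dict, candidates: list, name_threshold: float = 0.8) -> list:
    """Return handles of candidates that reuse the victim's photo
    or closely mimic the victim's display name."""
    victim_hash = photo_hash(victim["photo"])
    flagged = []
    for c in candidates:
        same_photo = photo_hash(c["photo"]) == victim_hash
        similar_name = name_similarity(victim["name"], c["name"]) >= name_threshold
        if same_photo or similar_name:
            flagged.append(c["handle"])
    return flagged
```

Even this crude two-signal combination distinguishes a near-duplicate name or a stolen photo from the millions of genuinely unrelated accounts, which is the work a specialized agent could do at scale.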

Sites as Properties vs. Sites as Products

I focus on negligence in the context of private property because it seems to be the most applicable and reasonable way to think about platforms where our visitation provides the essential benefit that makes them worth billions. Further, the duties to “inspect” and “make safe” strike me as more generalized (i.e., more likely to serve the greater societal interest) than product liability, which can be limited by those voluminous, complicated terms of service nobody reads—and which are subject to change electronically. On the other hand, product liability extends to harm done to parties who are not “invitees,” as indeed Herrick was not a user of Grindr at the time Gutierez abused the site to induce assault.

In Herrick’s case, some of his product liability claims were found to fail on the merits, regardless of §230, but with specific reference to the spoofs used by Gutierez, the opinion of the Second Circuit states:

Herrick alleges that Grindr is defectively designed and manufactured because it lacks safety features to prevent impersonating profiles and other dangerous conduct, and that Grindr wrongfully failed to remove the impersonating profiles created by his ex boyfriend… Those claims are based on information provided by another information content provider and therefore satisfy the second element of § 230 immunity…. It follows that the manufacturing and design defect claims seek to hold Grindr liable for its failure to combat or remove offensive third-party content, and are barred by §230.

Defenders of the §230 status quo will argue that the court got it right in Herrick. If so, that only strengthens the case for statutory reform based on a sensible review of the contemporary internet, which bears little resemblance to that of 1996. As Professor Olivier Sylvain writes in his paper Intermediary Design Duties:

Today, the largest online companies do not merely host and relay messages, uninterested in what their users say or do. They use behavioral and content data to engineer online experiences in ways that are unrelated to the charming interest in making connections. Some of the most successful companies, moreover, collect, analyze, sort, and repackage user data for publication in ancillary and secondary markets. This is how the CDA immunity doctrine, first developed by the courts two decades ago, is ill-suited to the world today.

What the opinion in Herrick exemplifies is that no principles of tort liability pass through the §230 shield when “Information [is] provided by another information content provider.” That is the crux of the §230 problem and our dysfunctional relationship with ICSPs in general. We have been conditioned to think of all communications as “information,” or worse, as “protected speech” even when the communication is arguably conduct intended to cause or induce physical, emotional, or economic harm—from doxxing women who speak their minds to exes seeking revenge to inciting political violence, etc.

Indeed, this is why Franks and Citron propose striking the over-broad word information in the statute and replacing it with the legally definable word speech. “The revision would put all parties in a Section 230 case on notice that the classification of content as speech is not a given, but a fact to be demonstrated,” their proposal states. It is challenging enough for, say, a victim of image-based sexual abuse (IBSA) to present a claim against an ICSP at all, but this one-word change in the statute would foster a more level playing field between the individual plaintiff and the corporate defendant.

Statutory Revision Should Restore the Incentive to Care

If we agree generally that an ICSP should have a duty of care, experts and Congress can decide whether the general obligation is properly construed under product or property liability—or some combination of the two. But whichever path is most reasonable, ICSP owners should no longer be allowed to exploit ambiguous definitions of their platforms to perpetuate the harmful results of unconditional immunity. Further, sound §230 reform must acknowledge the most extreme cases in which ICSP owners deliberately foster or profit from harmful communications and should, therefore, be subjects of intentional tort liability claims.

Some defenders of the §230 status quo will argue that suing the Grindrs of the world for damages won’t help solve the problem for the Herricks of the world, but this is unfounded. First, one purpose of tort law is to foster better conduct, especially among commercial enterprises. It is an imperfect remedy for various reasons, but if it is wholly ineffective for altering ICSP conduct, why do they fight so hard to maintain the status quo? The answer, I believe, lies in the second, and more important, point—that §230 has barred case after case from proceeding to the discovery phase.

Even where a plaintiff may not ultimately prevail and receive damages, discovery in a lawsuit is often how the public learns whether an operation’s practices are fair, safe, lawful, or even consistent with their own warranties. “Even when a plaintiff’s case fails on the merits, judicial engagement with the details of her claim helps to frame her suffering as a legible subject of public attention and governance,” writes Professor Douglas A. Kysar. [6]

For example, when whistleblower Frances Haugen testified in 2021 that Meta makes decisions based on profit over safety, the devil’s details underlying that exemplary statement might only be revealed by adjudicating a reasonable claim brought by a plaintiff directly harmed by Meta’s decisions. Instead, Section 230 short-circuits this process, providing unconditional immunity that is not only uniquely tailored to one industry, but bizarrely applies to the owners of virtual properties where many citizens live half their very real lives.

Despite the length of this post, there are several aspects of Section 230 not discussed, including responses to various arguments for maintaining its status quo and discussion of specific cases that would likely have been adjudicated if not for the shield. Nevertheless, various hearings in Congress have signaled bipartisan fatigue with the status quo, especially where harm to children is involved. Whether that sentiment can be harnessed into reasonable reform is a fair question—especially in the current climate—but as Carrie Goldberg reiterated in one hearing, the Constitution demands that plaintiffs, even those who might lose, get their day in court.


[1] Cubby, Inc. v. CompuServe, Inc. and Stratton Oakmont, Inc. v. Prodigy Servs. Co.

[2] The statute states “… to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

[3] “The original purpose of this law was to help clean up the Internet, not to facilitate people doing bad things on the Internet.” NPR citing Cox.

[4] A dig at John Perry Barlow’s Declaration of the Independence of Cyberspace.

[5] A lower standard duty of care is owed to “trespassers” or “licensees” who may visit a property, though Cabrera notes that courts have often collapsed the distinction between “invitees” and “licensees.”

[6] Franks, The Free Speech Industry, pp. 70–71.

TikTok Inspired Child Suicide Prompts a Sound Reading of Section 230


Last week, the Third Circuit Court of Appeals issued an opinion regarding Section 230 of the Communications Decency Act. It may be the strongest affirmation to date that the statute does not provide a blanket liability shield for all social platforms regardless of their conduct. Specifically, §230(c)(1) only immunizes platforms for liability that may arise from other parties’ speech, not from the platform’s own speech. And although the platforms have sought to argue that their “recommendation” algorithms, which push content to users, do not constitute speech, the courts aren’t buying it.

In Anderson v. TikTok, the appeals court reversed the lower court’s finding that the platform was automatically immunized against a liability claim involving the death of a child who attempted one of the many dangerous “challenges” that appear on social media. In this case, Nylah Anderson, age 10, died by accidentally hanging herself when she tried the “Blackout Challenge,” which dared people to asphyxiate themselves until they passed out. At issue for TikTok is not the challenge itself, started by an unknown third party, but the “For You Page” algorithm that “recommended” the challenge to Anderson. Judge Matey, in a strident concurrence with the circuit court opinion, writes the following:

TikTok reads § 230…to permit casual indifference to the death of a ten-year-old girl. It is a position that has become popular among a host of purveyors of pornography, self-mutilation, and exploitation, one that smuggles constitutional conceptions of a “free trade in ideas” into a digital “cauldron of illicit loves” that leap and boil with no oversight, no accountability, no remedy.

Though the reference to St. Augustine implies a religious moralizing I might omit, Judge Matey’s accusation that social platforms host a “cauldron” of dangerous, illegal, and depraved material behind a veil of social good and constitutional rhetoric is indisputable. As a legal matter, had Anderson found the video challenge on her own (e.g., via search), TikTok would likely be immunized by §230; but because a “recommendation” algorithm pushed the challenge to the child, resulting in her death, the distinction is an important one that could mark a shift in judicial review of the statute and, we should hope, an overdue change in platform governance.

As Judge Matey further states in his concurrence, TikTok’s presumed immunity under §230 in this case is “…a view that has found support in a surprising number of judicial opinions dating from the early days of dial-up to the modern era of algorithms, advertising, and apps.” That view is properly dimming now, and by my reckoning, the Supreme Court will go where the Third Circuit went last week. In a pair of nearly identical cases, Gonzalez v. Google and Twitter v. Taamneh (2023), the plaintiffs, on behalf of victims of two ISIS-related terror attacks, sought to hold the platforms accountable for “recommending” ISIS recruiting videos. But because those claims relied substantially on meeting the “aiding and abetting” standard of the Anti-Terrorism Act, the Court found little plausible claim for relief and, therefore, declined to address the question of §230 immunity.

But if Anderson (or a similar case) goes to the Supreme Court, I believe the justices will have little difficulty finding that a “recommendation” algorithm promoting a video challenge that led to a child’s death is a foundation for a liability case to proceed. As the Court stated in Taamneh, “When there is a direct nexus between the defendant’s acts and the tort, courts may more easily infer such culpable assistance.” In Anderson, with no other party acting as the direct cause of the child’s death, the facts are even simpler, revealing a clear nexus between the video challenge “recommended” by the platform and the accidental suicide. Further, this July, the Court held in the unanimous Moody v. NetChoice decision that social platforms “shape other parties’ expression into their own curated speech products.”[1] Under that rule, the Third Circuit finds that TikTok’s “recommendation” of the Blackout Challenge to Nylah Anderson plausibly constitutes the platform’s own speech, for which it may be held liable.

The reason I keep putting “recommended” in quotes is that at the time SCOTUS granted cert in the Taamneh and Gonzalez cases, I wrote a post opining that the courts, policymakers, et al. should take a jaundiced view of this too-friendly term for an insidious function of social media. It is no longer controversial to say that platform operators manipulate what users see and hear, or that this manipulation can lead to disastrous results, from disinformation campaigns in the political arena to drug-related deaths to the suicides of little girls.

It is a familiar refrain that it takes a tragedy, or many tragedies, to change policy, and with the story of Nylah Anderson, and the many young victims she represents, we may finally see Big Tech’s hypocrisy on speech collapse under the weight of its own absurdity. The major platforms have played games with the First Amendment and §230 for nearly 20 years—conflating their business interests with users’ speech rights or asserting their own speech rights when necessary or asserting that nothing they do is their own speech—all depending on which potential liability the company seeks to avoid. Further, that confusion has not been helped in recent years by certain politicians who misstate the operation of the speech right to create political theater around allegations of bias.

Out of all that mess, it is notable that Justice Thomas, since at least 2020,[2] has restated the observation that online platforms will avail themselves of constitutional protection to engage in conduct like algorithmic “recommendation” but then invert the argument to shroud themselves in the §230 shield. And then the courts will stop a liability claim from even proceeding. As Congress, the Supreme Court, and now the Third Circuit have all reiterated, no other industry in the country enjoys that kind of immunity, and perhaps this claim against TikTok will be the case that finally ends this unfounded and unreasonable privilege for online platforms.


[1] On a side note, this is reminiscent of the “selection and arrangement” doctrine in copyright law, which finds “expression” in the choices made by the author who engages in that conduct. All copyrightable expression is a form of speech.

[2] See dissent on the grant of certiorari in Malwarebytes v. Enigma.


The Campaign to Defend Generative AI


I have not written steadily about AI and copyright because, frankly, it’s exhausting. Not quite as exhausting as watching the state of the Republic overall, but almost as relentlessly incoherent and repetitive. For instance, Winston Cho for the Hollywood Reporter describes a PR and lobbying campaign by the tech coalition Chamber of Progress to defend the importance of generative AI (GAI). The article quotes founder and CEO Adam Kovacevich thus:  “Gen AI is a net plus for creativity overall. It’s expanding access to creative tools for more and more people and bypassing a lot of the traditional gatekeepers.”

That GAI may yield some beneficial tools for creators is plausible, but the whole “access” and “gatekeepers” rhetoric is a misguided anachronism from a group calling itself the Chamber of Progress. Perhaps “Confederacy of Tech Overlords” was too on the nose, but the generalized argument that GAI represents a “democratic” shift away from gatekeepers stands on the rubble of experiments that have already failed. I doubt there is a professional creator left who hasn’t figured out that Big Tech’s promise to liberate them from traditional gatekeepers is like a human trafficker promising his next victim a job in a foreign country. Whatever was imperfect about the old models, the new models are more exploitative and hazardous for the average creator.

More precisely, while the alleged “liberation” from older distribution channels might have seemed attractive, GAI is about production, and I am confused as to who the “gatekeepers” would be on the production side of the equation. To the extent, say, Midjourney might enable me to illustrate or paint without any drafting or painting skills, the “gatekeeper” is who exactly? Nature failing to gift me with those skills? Or if we think big, and I can make a whole motion picture without ever turning on a camera, I still fail to see who the “gatekeeper” is in the overreaching promise from the tech industry.

Despite how cutting-edge and “essential” GAI is supposed to be, Big Tech has nothing fresh to say in its advocacy. The theme of “democratization” is the same weather-beaten argument they’ve been flogging for years, one that has proven disastrous for information and the state of real democracy—and which GAI can only make worse. Nevertheless, the Chamber of Progress campaign, as reported by Cho, seeks to promote a sweeping policy that AI developers should be broadly shielded from liability, including copyright infringement claims.

The question of copyright infringement for ingesting works for machine learning (ML) is currently at the heart of several lawsuits. I’ve lost track of them all, but arguably the most solid claim to date is New York Times v. OpenAI et al. because the evidence of copying (i.e., that what went into the model came out of the model) is so compelling. On the other hand, it is worth watching those cases where “reproduction” is less evident and, therefore, where the question may be more thoroughly addressed as to whether ML is a purpose that favors fair use of protected works.

As we have seen in defense of social platforms, Big Tech will spray the blogosphere with the term “fair use,” and copyright antagonists (mainly in academia) will echo the broad claim that of course ML is fair use. Setting aside that the fair use doctrine rejects the notion of a categorical exemption, I would argue that the case law points the other way, including the Supreme Court decision in Andy Warhol Foundation v. Lynn Goldsmith. To the limited extent that opinion addresses the ML question at all, its reining in of the “transformativeness” test is more likely to disfavor the AI developers. Big Tech’s claim is that GAI is broadly “transformative” as a technological accomplishment, but Warhol and other decisions reject such a sweeping interpretation of that aspect of fair use factor one.

Further, as argued in this post, I remain unconvinced that GAI necessarily advances the purpose of copyright to promote new authorship as a matter of doctrine. For instance, if a given work created by GAI cannot be protected by copyright, then the material is, by definition, not a work of “authorship.” As such, this purpose should doom a fair use defense, in my view. Regardless, Big Tech will not be satisfied with the outcomes of any lawsuits, even if the developers win some. What they want is blanket immunity for infringement liability and an affirmation that GAI is truly as important as they say it is. That’s why this paragraph in the Hollywood Reporter story caught my attention:

In comments to the Copyright Office, which has been exploring questions surrounding the intersection of intellectual property and AI, Chamber of Progress argued that Section 230 – Big Tech’s favorite legal shield – should be expanded to immunize AI companies from some infringement claims.

Why highlight that? Because the absence of legal foundation is telling. Not only does Title 47 Section 230 have nothing to do with copyright infringement, but both that law and its copyright cousin, Title 17 Section 512, address the subject of users uploading material to platforms. Neither law says anything about scraping the web to feed material into an AI model for the purpose of ML. Nevertheless, it is clear from reading the actual comments by Chamber of Progress to the Copyright Office that Big Tech recommends policymakers take lessons from both statutes to carve out new liability shields to support the advancement of AI.

Despite the fact that neither §512 nor §230 has proven effective in limiting copyright infringement or dangerously harmful material online, the Chamber of Progress comments reprise Big Tech’s unfounded talking points regarding both statutes. Written by counsel Jess Miers, the comments repeat the false allegation that §512 fosters rampant, erroneous takedowns and also argue that because of §230, “most UGC services go to great lengths to proactively clean-up awful content and provide a safe and trustworthy environment for their users.” Not only will my friends and colleagues fighting Image-Based Sexual Abuse, online hate, and scams be very surprised to learn that, but so will Congress.

One of the scant points of agreement on Capitol Hill these days is that lawmakers have grown weary of liability shields for Big Tech, which has done a poor job of mitigating the worst harms facilitated by its platforms. Section 230 is so ripe for amendment that I’m surprised the Chamber of Progress invoked it, let alone in comments to the Copyright Office, which deals only with, y’know, copyright law. More broadly, though, when GAI implies myriad harms beyond copyright infringement, the last thing Congress should do is grant Big Tech more latitude to do whatever it wants in the name of “progress.” We tried that approach. It sucks.