Committee Talks Sunsetting Section 230 to Prompt Action by Big Tech


Yesterday, the House Energy & Commerce Committee held a hearing to discuss draft legislation that would sunset Section 230 of the Communications Decency Act on December 31, 2025. If passed, the law would start a countdown toward abolishing Section 230, though the real intent is to force Big Tech to cooperate on meaningful reform. That reform would seek to mitigate the worst harms facilitated by misapplication of the law as a blanket liability shield for online service providers (OSPs) that host user-generated content (UGC).

The Committee heard from witnesses Carrie Goldberg, a prominent attorney for victims of cybercrime; Marc Berkman, CEO of the Organization for Social Media Safety; and Kate Tummarello, Executive Director of Engine, an advocate for “pro-startup and pro-innovation policy.” Goldberg and Berkman testified in favor of the sunset proposal, while Tummarello testified against the bill, though she stated more than once that her constituents are not opposed to Section 230 reform.

Thus far, most congressional hearings about “holding online platforms accountable” have been political theater, and this hearing was no different, other than the fact that the sunset proposal is overtly theatrical. Committee members acknowledged that the goal is not actually to abolish Section 230, or at least not the protections it was originally intended to provide, but we shall see whether the sunset bill becomes law and, if so, whether it compels the tech giants to negotiate in good faith.

In the meantime, it would help if Congress would stop echoing Big Tech’s main talking point—namely, that Section 230 is about free speech, let alone speech neutrality. While most Committee Members reflected an understanding of the serious harms facilitated by the erroneous application of Section 230, a few Members made parenthetical comments about protecting speech, and Rep. Harshbarger (R-TN) opined that “liberal sites like Facebook” censor “conservative” content.

Aside from the fact that this recurring allegation is unfounded, Section 230 has nothing to do with the speech right or with viewpoint neutrality. Indeed, if it did—if Congress wrote a law mandating content neutrality—THAT would be a violation of the First Amendment. As Goldberg stated during questioning, “The platforms are free to moderate however they want.” So, every time Congress mentions speech in the context of Section 230, it only amplifies Big Tech’s big lie that their platforms are an “engine of free expression,” which is unhelpful to sensible amendment of the law.

To clarify that point, yes, platforms host a lot of expression, but the First Amendment does not bind the OSPs to content neutrality, and they practice no such thing in the operation of their sites. It is a matter of record that the social platforms adjust their algorithms to push or demote content based on user behavior in a constant, dynamic interplay between platform and user. The goal of these operational decisions has nothing to do with the speech right—indeed, one can argue they stifle speech in several ways—and everything to do with maximizing profitability for the platform.
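
To make that concrete, the sketch below is a toy illustration only; the fields, weights, and scoring terms are invented for this post and do not describe any real platform’s system. It simply shows the shape of engagement-driven ranking: content is pushed or demoted on business metrics, and no term anywhere weighs viewpoint or “neutrality.”

```python
# Toy illustration only: the fields and weights are hypothetical and
# do not describe any real platform's ranking system.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # model's estimate of clicks/watch time
    ad_adjacency_value: float    # how well ads perform next to this post

def rank_feed(posts: list[Post], demoted_ids: set[str]) -> list[Post]:
    """Order a feed to maximize time-on-site and ad revenue.

    Nothing in this scoring weighs viewpoint or "neutrality"; items
    are pushed or demoted on business metrics alone, which is the
    point made above: these are volitional, profit-driven choices.
    """
    def score(p: Post) -> float:
        if p.post_id in demoted_ids:
            return 0.0
        return 0.7 * p.predicted_engagement + 0.3 * p.ad_adjacency_value

    return sorted(posts, key=score, reverse=True)
```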

Next, it would be great if Congress could keep its eye on the ball and remember that Section 230 reform is not about creating new direct liability for all online platforms for harm done by users. To put it bluntly, Section 230 reform is about instructing the courts to stop tossing out every claim and every prayer for injunctive relief solely on the basis that the statute requires this result at summary judgment. That was never the intent of the law, but the courts’ conclusions to the contrary demand that Congress act.

Nevertheless, during Q&A with the witnesses, some Members seemed to mischaracterize 230 reform either as new regulation or as opening the door to a flood of direct liability claims. Here, Tummarello, as a representative for startups, stressed that small companies cannot match the giants’ capacity to moderate every post and comment on their sites. Frankly, the tech giants can’t achieve this goal either, but this is part of the theater because the concern is only relevant if Section 230 is indeed abolished.

By contrast, reforming the law need not oblige every platform to catch every potentially harmful bit of content. Indeed, sensible and workable reforms have been proposed, for instance by Danielle Keats Citron, who recommends small but significant changes to the statutory language. The goal is to retain the original intent to shield “Good Samaritans” against wanton lawsuits while directing the courts to recognize that relevant facts can void the liability shield.

For example, one of Carrie Goldberg’s high-profile cases involved a man named Juan Gutierrez, who used the dating app Grindr to target Matthew Herrick for harassment, abuse, and physical violence. Gutierrez created a fake account pretending to be Herrick and invited random men to find him and fulfill his “rape fantasies.” Section 230 has nothing to do with the conduct of Gutierrez, who was convicted for his crimes, but the law shielded Grindr from even having to answer the claims in court, despite Goldberg presenting evidence that volitional conduct by the platform caused and exacerbated the harm to Herrick. In short, Goldberg et al. are simply asking Congress to instruct the courts to allow meritorious claims against OSPs to be litigated—just as with any other defendant operating any other type of business.

Equally frustrating in this regard is the neglect of injunctive relief, which I was surprised not to hear come up during the hearing. Amid all the talk about Section 230 “fostering innovation” by shielding startups from a flurry of lawsuits, people lose sight of the fact that a platform need not be directly liable, or even a named party to a suit, to simply do the right thing and remove harmful material upon request. Unfortunately, the culture and profit motives of the OSPs too often resist removing any material ever, and Section 230 has prevented courts from ordering those removals to mitigate harm to victims.

Presumably, there will be some wailing and gnashing of teeth from the usual suspects who defend the status quo of the “internet as we know it.” The EFF already groused about the sunset proposal ahead of the hearing, and we’ll see who else joins that peanut gallery. Either way, it is frustrating to know that meaningful reform could be achieved by changing a few key words in the statute—words that would maintain the original intent of Section 230 but would stop protecting platforms over people. As Carrie Goldberg testified, the Seventh Amendment demands that victims of sexual abuse, trafficking, drug-related scams, harassment, and other devastating harms all have their day in court.


Image source: Budi49673

In hearing with Big Tech, senators make headlines, but can they make headway?

On Wednesday, January 31, the Senate Judiciary Committee presided over a dramatic hearing titled Big Tech and the Online Child Sexual Exploitation Crisis. The gallery was filled with family members representing young victims of sexual exploitation, drug-related deaths, and adverse mental health effects of social media that can lead to chronic illness and suicide. The witnesses who provided testimony and faced often tense grilling by senators included Mark Zuckerberg, CEO of Meta; Linda Yaccarino, CEO of X Corp; Shou Chew, CEO of TikTok; Evan Spiegel, CEO of Snap Inc.; and Jason Citron, CEO of Discord Inc.

By now, many highlights have been published in the press and on social media, including Senator Graham’s opening salvo telling the witnesses they “have blood on their hands.” There was also Sen. Hawley rhetorically grilling Zuckerberg, asking whether he had personally created a fund out of his billions to compensate any of the families. And then there was Sen. Whitehouse, who stated quite simply, “We’re here because your platforms really suck at policing themselves,” thereby summarizing a bipartisan sentiment that has produced five bills passed by this committee alone.

Dramatic moments aside, though, what, if anything, will get done this year? As committee members themselves noted throughout the hearing, this is a road much traveled, and little has been accomplished, either through legislation or through voluntary measures by the platforms, to address the kinds of harms at issue. Big Tech’s “tobacco moment” was supposed to come in 2021, when key witnesses and whistleblowers testified that, yes, social media platforms can cause harm to users, that they are designed to be addictive, and that industry executives put revenue ahead of safety.

Notwithstanding Senator Cruz and other Republicans blasting Mr. Chew over the valid but separate matter of TikTok’s alleged obligations to censor content and/or provide information to the Chinese Communist Party, nearly every senator reiterated a theme of rare unanimity on the central issues before the committee. There is, of course, no political downside for either party when the issues involve children, sexual exploitation, suicide, and fentanyl, and the target is Big Tech. There should be no doubt that the intent to legislate is real, but several senators alluded to the platforms’ lack of cooperation and their lobbying power to avoid federal intervention.

For instance, among the bills cited (and not wholly supported by online platforms), the SHIELD Act would criminalize the nonconsensual distribution of intimate visual depictions of persons—a subject that has been on the Hill since Rep. Speier first introduced a bill in 2015. Now, with advancements in AI tools that can generate synthetic sexual material using the likeness of a real person (e.g., what happened to Taylor Swift), the issue is more complicated. And by my count, there are at least two House bills responding to AI as a means to achieve potentially more harmful results than the distribution of existing recorded material.[1]

Presumably, Congress will need to harmonize these legislative efforts, since several bills overlap in their intent to mitigate harm while differing on the nature of the material at issue and/or the means of its production and distribution. Moreover, the various issues raised in the hearing imply distinct forms of accountability (e.g., the design of a platform potentially harming mental health; the handling of material uploaded by users; or platforms being more transparent about negative effects).

In a future post, I will try to summarize all the proposed legislation designed to address specific harms caused or exacerbated by social media platforms. But one subject raised on Wednesday, and one that must come first, is revision of Section 230 of the Communications Decency Act. As discussed here many times, Section 230 has been improperly read by the courts as blanket immunity from civil litigation for online service providers, regardless of how irresponsibly the operators address harmful material uploaded by a user of the platform.

Section 230 Front and Center

Sen. Graham declared that it’s time to repeal Section 230, while other senators were more measured, alluding to revision of the law. Regardless, there should be little doubt that Congress supports the premise that online platforms must be subject to litigation in order to incentivize more effective cooperation in addressing various harms. Most immediately, revision of 230 must make clear that platforms are not exempt from court orders to remove material that is harmful to the aggrieved party.

One of the most infuriating aspects of the misapplication of 230 to date is not simply that the platform is never held liable for the harm (it may indeed not be liable), but that a platform can avoid complying with injunctive relief—often little more than having the basic decency to remove material that is shown to be harmful. As Sen. Whitehouse made clear, the court is the venue for determining liability and remedies, and several of his colleagues noted that it is simply absurd that one multi-billion-dollar industry is automatically excused from those procedures.

Thus, as a foundational matter, it seems essential that Section 230 be substantially revised to ensure that people, like the families represented at the hearing, can pursue legal action without having the court automatically dismiss the claim. Of course, sound reform of 230 must reject the rhetoric of some lawmakers, including Sen. Cruz, who have muddied the waters with unfounded and unhelpful allegations of platform political bias. If nothing else, alleged viewpoint bias is not a subject of Section 230, and if lawmakers really want to help the kids, they must remain focused on ensuring that a family can have its day in court.

So, as stated, we’ve been here before. Wednesday’s hearing provided a pretty good highlights reel, but let’s see whether, this year, it can finally lead to tangible solutions.


[1] The Preventing Deepfakes of Intimate Images Act and the No AI FRAUD Act.

AI, Search, & Section 230

On May 18, the Supreme Court delivered opinions in Gonzalez v. Google and Twitter v. Taamneh, a pair of interrelated cases in which both plaintiffs sought to hold online platforms liable for hosting material meant to inspire acts of terrorism. Because the Court unanimously found in Taamneh that there was no basis in anti-terrorism law for liability (and therefore no claim for relief), it declined to address the Section 230 question in Gonzalez, which was whether Google’s “recommendation algorithm” is a sufficient basis for contributory liability for the inciteful material being recommended.

Properly read, Section 230 shields OSPs from “publisher liability” but not from “distributor liability.” A distributor of allegedly harmful material may be liable when it knows, or has reason to know, the nature of the material and either affirmatively chooses to distribute it or willfully turns a blind eye to the potential harm and does nothing to stop it. Unfortunately, ever since 230 became law in 1996, the courts have generally read the law as a blanket shield for any OSP distributing any kind of material as long as it was uploaded by a user of the site and not by the site operators.
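
Rendered schematically (a simplification for clarity only, not a statement of legal doctrine; the function and parameter names are mine), the distinction between the two standards has roughly this structure:

```python
# Schematic only: a simplification of the publisher/distributor
# distinction described above, not a statement of legal doctrine.

def distributor_may_be_liable(knows_or_should_know: bool,
                              affirmatively_distributes: bool,
                              willfully_blind: bool) -> bool:
    """Distributor liability turns on knowledge plus conduct: the party
    knows (or has reason to know) the nature of the material AND either
    chooses to distribute it or turns a blind eye to the harm."""
    return knows_or_should_know and (affirmatively_distributes or willfully_blind)

def blanket_reading_of_230(uploaded_by_user: bool) -> bool:
    """The courts' prevailing (and, per the argument above, erroneous)
    reading: immunity follows from the single fact that a user uploaded
    the material, and the knowledge test above is never reached."""
    return uploaded_by_user
```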

The Gonzalez plaintiffs alleged that Google’s “recommendation” algorithm, designed to promote content based on the system’s interpretations of user behavior, played a crucial role in pushing ISIS propaganda toward the parties who eventually committed the mass shooting in Paris that resulted in the death of Nohemi Gonzalez. They argued that “targeted recommendations” are not properly shielded by Section 230, and to the extent one can read the tea leaves in oral arguments, justices as ideologically opposite as Thomas and Jackson may be sympathetic to this view.

In the “strange bedfellows” department, the amicus brief in Gonzalez filed by Senator Hawley echoes many of the same legal arguments as the brief filed by the Cyber Civil Rights Initiative. Also, Senators Hawley and Blumenthal are, at least publicly, in sync on the need to correct the errors in Section 230. “Reform is coming,” Sen. Blumenthal declared in March. All of which is to say that there appears to be both bipartisan and multi-stakeholder consensus building around the idea that platforms can and should be held accountable for promoting harmful material.

Does AI-Enhanced Search Imply Liability?

Notably, one prong of Google’s defense in Gonzalez was that “recommendation” is analogous to search and that delivering search results cannot rise to the level of contributory liability. Whether the Court would agree with this comparison under full examination in a viable case remains an open question. But assuming the Court would not have sided with Google, what might it make of Google’s new Search Generative Experience (SGE)? Still in a trial phase for users who choose to enable it, the AI-driven SGE could become the new mode of search or (if it totally sucks) could tank Google’s core business. As James Vincent writes for The Verge:

… it’s the dynamics of AI — producing cheap content based on others’ work — that is underwriting this change, and if Google goes ahead with its current AI search experience, the effects would be difficult to predict. Potentially, it would damage whole swathes of the web that most of us find useful — from product reviews to recipe blogs, hobbyist homepages, news outlets, and wikis. Sites could protect themselves by locking down entry and charging for access, but this would also be a huge reordering of the web’s economy. In the end, Google might kill the ecosystem that created its value, or change it so irrevocably that its own existence is threatened. 

Hard to predict for sure, and I will not make the attempt. There are, of course, many potential hazards with AI-enhanced search, not the least being more virulent mutations of garbage results (as if misinformation needs any help). But in a Section 230 context, would the deployment of SGE as Google’s new search model increase the likelihood of its liability under the same legal arguments presented in Gonzalez? The “recommendation” algorithm is a form of AI, and if that level of platform influence could be sufficient to find liability, then presumably a more robust use of AI could result in a stronger allegation of liability.
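
To illustrate why that might follow (a hypothetical sketch; these functions are stand-ins of my own and do not describe Google’s actual architecture), consider the difference in degree of platform influence between classic retrieval and generative search:

```python
# Hypothetical sketch of two degrees of platform influence.
# Neither function describes Google's actual systems.

def classic_search(query: str, index: dict[str, str]) -> list[str]:
    """Return links to third-party pages matching the query.

    The platform selects and orders the results, but the responsive
    material remains someone else's expression at someone else's URL.
    """
    return [url for url, text in index.items() if query.lower() in text.lower()]

def generative_search(query: str, index: dict[str, str]) -> str:
    """Compose a single synthesized answer from indexed material.

    Here the platform is no longer merely pointing to content; it is
    producing the very text the user reads, i.e., the "more robust use
    of AI" that could invite a stronger allegation of liability.
    """
    matches = [text for text in index.values() if query.lower() in text.lower()]
    return "Summary: " + " ".join(matches[:3])  # stand-in for an LLM call
```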

On June 14, Senators Hawley and Blumenthal introduced a two-page bill that would make Section 230 immunity unavailable for service providers “if the conduct underlying the claim or charge involves the use or provision of generative artificial intelligence by the interactive computer service.” Presumably, this bill can be seen as performative, alongside other announcements from Congress that AI has its attention, with various Members promising not to be fooled again into allowing Big Tech to regulate itself. There’s a lot of “We’re on it” messaging coming from the Hill about AI, and we’ll see what comes of it.

In the meantime, perhaps there is something to the Hawley bill in light of the considerations in Gonzalez and the imminent release of SGE. At first, I sneered at the amendment because generative AI is primarily a tool of production, and Section 230 immunity has little or nothing to do with production. It doesn’t matter whether the harmful material at issue is produced with Midjourney or a box of crayons. But if a generative AI serves as the engine for a new mode of search (i.e., recommendation), then the language in the Hawley/Blumenthal amendment would seem to obviate the need to litigate the question presented in Gonzalez. Congress would be declaring that Google is not automatically shielded from liability.

Considering that we are far from resolving the damage done by the “democratization of information,” it’s tough to feel sanguine about the prospect of AI making search better rather than making it suck faster. On the other hand, if the adoption of AI in certain core functions of online platforms is a basis for Congress resetting the terms of liability, then perhaps service providers will discover a renewed interest in the original intent of Section 230—an incentive to remove harmful material, not to keep it online and monetize it.


Photo source: sinenkiy