In hearing with Big Tech, senators make headlines, but can they make headway?

On Wednesday, January 31, the Senate Judiciary Committee held a dramatic hearing titled "Big Tech and the Online Child Sexual Exploitation Crisis." The gallery was filled with family members representing young victims of sexual exploitation, drug-related deaths, and the adverse mental health effects of social media that can lead to chronic illness and suicide. The witnesses who provided testimony and faced often tense grilling by senators included Mark Zuckerberg, CEO of Meta; Linda Yaccarino, CEO of X Corp; Shou Chew, CEO of TikTok; Evan Spiegel, CEO of Snap Inc.; and Jason Citron, CEO of Discord Inc.

By now, many highlights have been published in the press and on social media, including Sen. Graham’s opening salvo telling the witnesses they “have blood on their hands.” There was also Sen. Hawley’s pointed grilling of Zuckerberg, asking whether he had personally created a fund out of his billions to compensate any of the families. And then there was Sen. Whitehouse, who stated quite simply, “We’re here because your platforms really suck at policing themselves,” thereby summarizing a bipartisan sentiment that has produced five bills passed by this committee alone.

Dramatic moments aside, though, what, if anything, will get done this year? As committee members themselves noted throughout the hearing, this is a road much traveled, and little has been accomplished, either through legislation or through voluntary measures by the platforms, to address the kinds of harms at issue. Big Tech’s “tobacco moment” was supposed to be in 2021, when key witnesses and whistleblowers testified that, yes, social media platforms can cause harm to users, that they are designed to be addictive, and that industry executives put revenue ahead of safety.

Notwithstanding Sen. Cruz and other Republicans blasting Mr. Chew over the valid but separate matter of TikTok’s alleged obligations to censor content and/or provide information to the Chinese Communist Party, nearly every senator reiterated a theme of rare unanimity on the central issues before the committee. There is, of course, no political downside for either party when the issues involve children, sexual exploitation, suicide, and fentanyl, and the target is Big Tech. There should be no doubt that the intent to legislate is real, but several senators alluded to the platforms’ lack of cooperation and their lobbying power to avoid federal intervention.

For instance, among the bills cited and not wholly supported by online platforms, the SHIELD Act would criminalize the nonconsensual distribution of intimate visual depictions of persons—a subject that has been on the Hill since Rep. Speier first introduced a bill in 2015. Now, with advancements in AI tools that can be used to generate synthetic sexual material using the likeness of a real person (e.g., what happened to Taylor Swift), the issue is more complicated. And by my count, there are at least two House bills responding to AI as a method to achieve potentially more harmful results than the distribution of existing recorded material.[1]

Presumably, Congress will need to harmonize these legislative efforts where they overlap, since several bills aim to mitigate harm based on the nature of certain material and/or the means by which that material is produced and distributed. Moreover, the various issues raised in the hearing imply distinct forms of accountability (e.g., the design of a platform potentially harming mental health; the handling of material uploaded by users; or platforms being more transparent about negative effects).

In a future post, I will try to summarize all the proposed legislation designed to address specific harms caused or exacerbated by social media platforms. But one subject raised on Wednesday, and one that must come first, is revision of Section 230 of the Communications Decency Act. As discussed here many times, Section 230 has been improperly read by the courts as a blanket immunity from civil litigation for online service providers, regardless of how irresponsibly the operators may address harmful material uploaded by a user of the platform.

Section 230 Front and Center

Sen. Graham declared that it’s time to repeal Section 230, while other senators were more measured, alluding to revision of the law. Regardless, there should be little doubt that Congress supports the premise that online platforms must be subject to litigation in order to incentivize more effective cooperation in addressing various harms. Most immediately, revision of 230 must make clear that platforms are not exempt from court orders to remove material that is harmful to the aggrieved party.

One of the most infuriating aspects of the misapplication of 230 to date is not simply that a platform is never held liable for the harm (because it may not be liable), but that a platform can avoid complying with injunctive relief—often little more than having the basic decency to remove material that is shown to be harmful. As Sen. Whitehouse made clear, the court is the venue for determining liability and remedies, and several of his colleagues noted that it is simply absurd that one multi-billion-dollar industry is automatically excused from those procedures.

Thus, as a foundational matter, it seems essential that Section 230 be substantially revised to ensure that people, like the families represented at the hearing, can pursue legal action without having the court automatically dismiss the claim. Of course, sound reform of 230 must reject the rhetoric of some lawmakers, including Sen. Cruz, who have muddied the waters with unfounded and unhelpful allegations of platform political bias. If nothing else, alleged viewpoint bias is not a subject of Section 230, and if lawmakers really want to help the kids, they must remain focused on ensuring that a family can have its day in court.

So, as stated, we’ve been here before. Wednesday’s hearing provided a pretty good highlights reel, but let’s see whether, this year, it can finally lead to tangible solutions.


[1] The Preventing Deepfakes of Intimate Images Act and the No AI FRAUD Act.

David Newhoff
David is an author, communications professional, and copyright advocate. After more than 20 years providing creative services and consulting in corporate communications, he shifted his attention to law and policy, beginning with advocacy of copyright and the value of creative professionals to America’s economy, core principles, and culture.
