SHIELD Act Passes in the Senate

It’s been nearly ten years since I first heard the term “revenge porn” and wrote a speculative post inspired by then-Rep. Jackie Speier’s bill to make the act a federal crime. Much has transpired since then, including the obsolescence of the term “revenge porn” and the progress of generative artificial intelligence (GAI), which has already changed the nature of nonconsensual pornography. Legislation is in the works to address GAI used for this purpose, but in the meantime, the Senate on Wednesday finally passed the Stopping Harmful Image Exploitation and Limiting Distribution, or SHIELD, Act.

If SHIELD becomes law, distributing intimate images without permission will be a federal crime with penalties that include fines and prison sentences. This is a game-changer, both pragmatically and culturally—fostering equitable remedies for victims and reasonable deterrents to at least some who might engage in the conduct. Further, it signals a more mature relationship to digital life, leaving behind the rhetoric and handwringing that new liabilities for new harms conducted through online platforms will lead to rampant censorship of protected speech.

A decade ago, the phenomenon called “revenge porn” was still relatively new, and there was little general understanding of its potential for causing harm—or why the term itself was a misnomer. Initially, the “revenge” part referred to mostly men lashing out at ex-girlfriends or ex-wives by disclosing intimate images that had originally been shared in private. Distribution included web platforms that solicit and display “revenge porn,” where the perpetrator could find a virtual fraternity of anger bros adding degrading, threatening, and rape-themed comments to the unlawfully displayed images. But the term was problematic from a legal standpoint.

Thanks substantially to the work of Dr. Mary Anne Franks and Danielle Keats Citron, in their capacities as legal scholars and leaders of the Cyber Civil Rights Initiative, legislation at the state and federal level is focused on the act of nonconsensual disclosure, and not the motive per se. Because the motives for disclosing intimate images vary from immature “kicks” to sextortion, it was essential that the cause of action not be limited solely to an intent to cause harm.

SHIELD criminalizes nonconsensual disclosure, whether made with an intent to cause harm or causing harm unintentionally. This includes “…psychological, financial, or reputational harm, to the individual depicted.” As I say, a lot has changed over the last decade, and sadly, there is now a preponderance of evidence that nonconsensual distribution of intimate imagery (NDII) causes a spectrum of harmful results, including loss of professional opportunities and relationships, psychological trauma, harassment, threats, physical violence, and suicide. In fact, the Cyber Civil Rights Initiative has recently adopted the term Image-Based Sexual Abuse (IBSA) to properly frame the nature of so-called “revenge porn.”

A decade ago, legislation like Rep. Speier’s was met with the predictable criticism that it would sweep too broadly, cause undue censorship online, and chill the speech right. In fact, anti-IBSA legislation has survived First Amendment challenges in five of the now 49 states that have such laws. In 2022, when the Indiana Supreme Court upheld that state’s law, Dr. Franks stated, “Indiana is the fifth state supreme court to uphold the constitutionality of criminal prohibitions of image-based sexual abuse. It should now be completely clear that there is no First Amendment right to disclose private, sexually explicit images of another person without consent.”

Since 2015, not only has the theory that these laws are unconstitutional violations of the speech right been tested at the state level, but the fervent belief that everything online is protected speech has also waned considerably. Mitigating harm online, especially anything involving sexual abuse and minors, is one of the few subjects of bipartisan agreement these days. The fact that SHIELD passed the Senate this month suggests to me that it will become law by the end of the year. It will be an essential step in protecting the mostly women and girls who are targeted for IBSA.


In hearing with Big Tech, senators make headlines, but can they make headway?

On Wednesday, January 31, the Senate Judiciary Committee presided over a dramatic hearing titled Big Tech and the Online Child Sexual Exploitation Crisis. The gallery was filled with family members representing young victims of sexual exploitation, drug-related deaths, and adverse mental health effects of social media that can lead to chronic illness and suicide. The witnesses who provided testimony and faced often tense grilling by senators included Mark Zuckerberg, CEO of Meta; Linda Yaccarino, CEO of X Corp; Shou Chew, CEO of TikTok; Evan Spiegel, CEO of Snap Inc.; and Jason Citron, CEO of Discord Inc.

By now, many highlights have been published in the press and on social media, including Senator Graham’s opening salvo, telling the witnesses they “have blood on their hands.” There was also Sen. Hawley’s rhetorical grilling of Zuckerberg, asking whether he had personally created a fund out of his billions to compensate any families. And then, there was Sen. Whitehouse, who stated quite simply, “We’re here because your platforms really suck at policing themselves,” thereby summarizing a bipartisan sentiment that has produced five bills passed by this committee alone.

Dramatic moments aside, though, what, if anything, will get done this year? As committee members themselves noted throughout the hearing, this is a road much traveled, and little has been accomplished, either through legislation or as voluntary measures by the platforms, to address the kinds of harms at issue. Big Tech’s “tobacco moment” was supposed to be in 2021, when key witnesses and whistleblowers testified that, yes, social media platforms can cause harm to users, are designed to be addictive, and that industry executives put revenue ahead of safety.

Notwithstanding Senator Cruz and other Republicans blasting Mr. Chew over the valid but separate matter of TikTok’s alleged obligations to censor and/or provide information to the Chinese Communist Party, nearly every senator reiterated a theme of rare unanimity on the central issues before the committee. There is, of course, no political downside for either party when the issues involve children, sexual exploitation, suicide, and fentanyl, and the target is Big Tech. There should be no doubt that the intent to legislate is real, but several senators alluded to the platforms’ lack of cooperation and their lobbying power to avoid federal intervention.

For instance, among the bills cited and not wholly supported by online platforms, the SHIELD Act would criminalize the nonconsensual distribution of intimate visual depictions of persons—a subject that has been on the Hill since Rep. Speier first introduced a bill in 2015. Now, with advancements in AI tools that can be used to generate synthetic sexual material using the likeness of a real person (e.g., what happened to Taylor Swift), the issue is more complicated. And by my count, there are at least two House bills responding to AI as a method to achieve potentially more harmful results than the distribution of existing recorded material.[1]

Presumably, Congress will need to harmonize legislative efforts where there appears to be some redundancy in the intent to mitigate harm based on the nature of certain material and/or the means of production and distribution of that material. Moreover, the various issues raised in the hearing imply distinct forms of accountability (e.g., the design of a platform potentially harming mental health; the handling of material uploaded by users; or platforms being more transparent about negative effects).

In a future post, I will try to summarize all the proposed legislation designed to address specific harms caused or exacerbated by social media platforms. But one subject raised on Wednesday, and which must come first, is revision of Section 230 of the Communications Decency Act. As discussed here many times, Section 230 has been improperly read by the courts as a blanket immunity from civil litigation for online service providers, regardless of how irresponsibly the operators may address harmful material uploaded by a user of the platform.

Section 230 Front and Center

Sen. Graham declared that it’s time to repeal Section 230, while other senators were more moderate, alluding to revision of the law. Regardless, there should be little doubt that Congress supports the premise that online platforms must be subject to litigation to incentivize more effective cooperation in addressing various harms. Most immediately, revision of 230 must make clear that platforms are not exempt from court orders to remove material that is harmful to the aggrieved party.

One of the most infuriating aspects of misapplication of 230 to date is not simply that the platform is never liable for the harm (because it may not be), but that a platform can avoid complying with injunctive relief—often little more than having the basic decency to remove material that is shown to be harmful. As Sen. Whitehouse made clear, the court is the venue for determining liability and remedies, and several of his colleagues noted that it is simply absurd that one multi-billion-dollar industry is automatically excused from those procedures.

Thus, as a foundational matter, it seems essential that Section 230 is substantially revised to ensure that people, like the families represented at the hearing, can pursue legal action without having the court automatically dismiss the claim. Of course, sound reform of 230 must reject the rhetoric of some lawmakers, including Sen. Cruz, who have muddied the waters with unfounded and unhelpful allegations of platform political bias. If nothing else, alleged viewpoint bias is not a subject of Section 230, and if lawmakers really want to help the kids, they must remain focused on ensuring that a family can have its day in court.

So, as stated, we’ve been here before. Wednesday’s hearing provided a pretty good highlights reel, but let’s see if this year, it can finally lead to any tangible solutions.


[1] Preventing Deepfakes of Intimate Images Act, and the No AI FRAUD Act.