Professor Citron Proposes Civil Remedies for Violations of Intimate Privacy

At a panel hosted by The Reykjavik Dialogue,[1] during a discussion about law enforcement, justice, and sex discrimination, Mary Anne Franks, co-founder of the Cyber Civil Rights Initiative, noted that when her organization asked perpetrators of revenge porn what would have stopped them from doing it, the answer was almost universally, “If I thought I could go to jail for it.”

The act of distributing intimate, private images via the internet without permission of the persons depicted is a crime—one that causes ongoing harm to victims, including harassment and violence, destruction of interpersonal relationships, loss of employment opportunities, psychological disorders, and suicide. And thanks substantially to the efforts of Franks and her colleague at CCRI, Danielle Keats Citron, nearly every state has criminalized the nonconsensual distribution of intimate images; and a federal bill to do likewise, the SHIELD Act, passed the House in March as part of H.R. 1620.

But while these laws pave the way for prosecution of the individuals who engage in this conduct, they do nothing about removing these violations of intimate privacy from the websites hosting the material. And to make matters more complicated, “deepfake” technologies make it relatively easy to depict just about anyone in intimate or sexually explicit material for which they were never actually photographed.

Citron Proposes Privacy Injunctions

In a new paper that addresses the nonconsensual distribution of both real and manufactured images, Citron proposes two interdependent legal mechanisms to overcome the hurdles to removing this kind of content from the internet, and she also discusses the First Amendment questions raised as both a constitutional and a cultural matter.

First, Citron argues that courts must be empowered with “clear legislative permission” to provide plaintiffs with injunctive relief by ordering sites “to remove, delete, or otherwise make unavailable intimate images, real or fake, hosted without written permission.” One might think this is common sense, or simply a matter of basic decency, but court orders to remove material of any kind have been assiduously opposed by internet platforms large and small, with considerable legal and PR support from “digital rights” activists like the Electronic Frontier Foundation. (See posts here and here about Google v. Equustek & Hassell v. Bird.)

The rationale usually argued in the blogosphere and the courts for refusal to remove any content is the First Amendment—a fallacy that now roils the public debate—but the legal foundation that has given the platforms the swagger to distort the speech and press rights has been the courts’ over-broad interpretation of Section 230 of the Communications Decency Act as a blanket immunity. Not only have platforms been shielded against being named parties to civil litigation, but 230 has been invoked as the reason to shield them even from injunctions that do nothing more than order the removal of harmful material. Naturally, when a web company cannot be held liable for anything, it’s very easy for its operators to call all content “speech” and tell the public that all platforms are inherently engines of free expression.

Thus, in order for the above-mentioned legislative permission to be effective, Citron argues, as she and Franks have in earlier papers, that, “Congress should amend Section 230 to make clear that platforms and search engines can be sued in cases seeking injunctive relief and attorney’s fees related to the removal of intimate images hosted without written consent.”

Citron acknowledges that the solution is not perfect, particularly because litigation directed at one incident on one platform does not address the likelihood that intimate images will be distributed across multiple sites; but she writes, “Victims need to know that society recognizes the damage to the dignity and intimate privacy of victims, that law can help mitigate the damage, that sites are not law-free zones, and that lawyers will represent them.”

If that sounds like Citron’s proposed remedies are more symbolic than remedial, I will echo her comparison to civil rights legislation and argue that we should not underestimate the power of even symbolic law to effect widespread remedies by fostering cultural and behavioral change. Presumably, most people do believe the act of distributing intimate images without permission is wrong, whether for revenge or any other motive. So, it helps when the law says it’s wrong, too. But at the same time, Citron addresses a broader cultural phenomenon in which Americans in particular struggle with our brand of the speech right and the distinction between access to information and prurient curiosity.

As a constitutional question, when a law intersects rights like those enumerated in the First Amendment, it must be held to the standard known as strict scrutiny. This means that a statute must serve a compelling public interest and must achieve a narrow purpose that cannot be achieved through less restrictive means. Here, Citron notes that the state laws criminalizing the nonconsensual distribution of intimate images have already held up to constitutional challenges in Vermont, Illinois, and Minnesota, but she also discusses that gray area where the public’s right to know is often too easily conflated with general interest. “By my lights, there can be a vast difference between learning about a public official’s intimate information and seeing photographs or videos documenting it. That distinction is worth careful consideration,” Citron writes.

Agreed. Specifically, did the American public have a right to know that Rep. Katie Hill was intimately involved with a member of her staff and, allegedly, using marijuana? Yes. Even though I personally do not care much what an elected official does in her private life unless it directly intersects with the official role, those allegations are certainly news that voters have a right to know. But I agree with Citron that there is a moral line—I would say a chasm—between a news report about Hill’s conduct and the publication of her intimate images (albeit semi-redacted) on the site RedState.

Hill sued RedState owner Salem Media,[2] and the publisher was granted a motion to dismiss the complaint under California’s anti-SLAPP law,[3] with the court finding, in Citron’s words, that “the photos shed light on Hill’s fitness for office.” The hell they did. How the information about Hill’s conduct sheds light on her fitness for office is up to the voters, but the leaked photos were nothing more than RedState’s opportunity to earn revenue by pandering to the worst impulses of the electorate, which increasingly cannot distinguish between political discourse and tribal brutality. RedState’s publication of the photos is barely distinguishable from revenge porn disguised as political reportage.[4] And to add insult to injury, Hill had to pay $200,000 for Salem’s legal fees.

As Citron notes, “Most cases involving the nonconsensual disclosure of intimate images will not present close calls about the boundaries of the public’s legitimate interest.” And, of course, this is correct. Most individuals who engage in this kind of behavior are not even propaganda mongers, let alone journalists. But I do suspect that in the techbro culture of the internet, the blurry lines we see in RedState’s treatment of Hill, or Gawker’s of Hulk Hogan, imply to those other bros who violate intimate privacy that what they are doing is not criminal. It is. And it is time for the laws to catch up to that reality.


[1] “Renewing Activism to End Violence Against Women,” www.rekjavikdialogue.is

[2] Hill’s counsel is Carrie Goldberg, leading specialist in this area.

[3] Strategic Lawsuit Against Public Participation.

[4] To be clear, I would say the same thing about the publication of similar photos of Reps. Boebert or Greene for whom I have nothing but contempt.

Can We Hope to Sensibly Reform Section 230?

In a paper published in 2020,[1] scholars Danielle Keats Citron and Mary Anne Franks advocate a relatively modest and elegant approach to amending Section 230 of the Communications Decency Act of 1996—changes that would directly help the statute’s unintended victims—but it is difficult to imagine how any nuanced consideration of the 230 issue will make headway in the current political climate.

At one extreme, the Former Republican Party (FRP) has amped up “Repeal 230” into a buzzy talking point with no practical or legal merit whatsoever; while shouting from the other side of the vortex is the internet industry and its network of supposedly progressive groups, who insist that the status quo of 230 is the keystone in the entire internet ecosystem. One behavior these seemingly opposite forces have in common is that both have exploited the misconception that Section 230 has something to do with viewpoint neutrality. It does not. Neither by the letter nor the spirit of the law.

To recap, there are two main parts to Section 230 under the “Good Samaritan” clause. The first states that online service providers will not be considered “publishers” of material provided by other parties. So, whether you or I or the NYT posts something on Facebook that is potentially harmful, and also unprotected speech (e.g. defamation), Facebook is shielded from potential liability resulting from that material. The second part states that when a platform engages in moderation and removes “objectionable material,” this does not render the platform a potentially liable “publisher” either. And it does not matter whether “objectionable material” comprises illegal content (e.g. child porn) or simply material the platform proscribes according to its own terms of service.

Nothing in the 230 statute states, or even implies, that service providers are limited by the speech right—indeed, as private entities, it is their First Amendment right to moderate as they wish—or that they are obligated to maintain viewpoint neutrality as a condition of the liability shield. That said, it was the platform operators themselves who promoted the false narrative that social media sites are the shiny new “engines of speech” right up until 2016, when “objectionable material” (mostly in the form of dangerous misinformation) steadily became the largest plank in the platform of what used to be the Republican party. Meanwhile, the real victims of Section 230’s unintended consequences may continue to be ignored amid the storm of insanity encircling this one fragment of cyber law.

Simply put, Section 230 is the reason why online platforms may not be held liable when their operators host, or even encourage and monetize, any of the following: nonconsensual pornography, child sexual abuse material (CSAM), libel and defamation, hazardous misinformation, organized hate groups, harassment, or incitements of violence. And while vested interests play rhetorical games with the allegedly blurry lines between speech and any of that material, Citron and Franks first advocate clarifying that ambiguity by striking the word “information” from part one of the statute and replacing it with the word “speech.” “The revision would put all parties in a Section 230 case on notice that the classification of content as speech is not a given, but a fact to be demonstrated,” states their paper.

Unlike “information,” protected “speech” has a legal definition rooted in case law, and at least some of the aforementioned categories of material would never qualify as speech under legal scrutiny, while others (e.g. hate speech) would be subject to review on a case-by-case basis. Perhaps most importantly, what this single word change likely accomplishes for, say, victims of harassment, is that it would more frequently induce a platform to remove harmful material, either voluntarily or by court order, rather than choose to litigate to try to prove that the harmful content is protected speech. As things stand, almost everything online is presumed to be speech. So, if a party uses any intermediary, from Twitter to a dating app, to cause even severe harm to another party, the intermediary is under no obligation to provide relief by removing the content. And most courts have held that 230 supports this position.

Under this one-word revision, if a platform knowingly continues to host allegedly actionable material, the platform voids its presumption of immunity, which does not mean it is necessarily liable for any harm. A complainant still bears the burden to prove the merits of a complaint just like any other case, but the platform would not automatically be immunized at the summary judgment phase. Meanwhile, the only form of relief many complainants ever want is removal of the harmful content, not necessarily a damage award from a platform that otherwise does the right thing.

In that regard, if a platform unknowingly hosts potentially actionable content, as almost any platform inevitably does, Citron and Franks advocate another modification to 230, requiring that a platform demonstrate that it maintains a “reasonable,” ongoing practice of removing objectionable material upon notice or independent discovery of the problematic content.[2] “If adopted,” their paper states, “the question before the courts in a motion to dismiss on Section 230 grounds would be whether a defendant employed reasonable content moderation practices in the face of unlawful activity that manifestly causes harm to individuals.”

This reasonableness standard would presumably accomplish two things: first, it would provide the many platforms operating in good faith with the kind of liability protection intended by Section 230; and second, it would immediately void the liability shield for those platforms that intentionally operate as Bad Samaritans. Sites that purposely trade in libel and defamation, nonconsensual pornography, harassment (and quite possibly hate speech and incitements to violence) would no longer be able to duck behind the Vibranium shield they have been wielding to avoid being named parties in litigation. In many cases, this requirement to demonstrate a “reasonable” moderation policy would probably obliterate the business models for sites that intentionally profit from the misery of others, and I fail to see a downside in that outcome.

Of course, amending 230 requires an act of Congress, and there’s the rub. Not only will Silicon Valley throw its considerable resources at campaigns to leave the statute untouched until doomsday, but step one proposed by Citron and Franks—replacing information with speech—runs head-first into the existential crisis we currently face as a nation. Political speech is paradigmatically protected speech, arguably the most sacred of all forms of protected speech. But at present, one party has decided that its political speech shall embrace an insurrection of lies, outlandish conspiracy theory, and even violence against the very foundation on which the speech right itself is written. Whether we survive that paradox is a much bigger question than internet governance, but for the everyday victims of Section 230, it would be grand if we could address what is legitimately wrong with this law.

[1] “The Internet as Speech Machine and Other Myths Confounding Section 230 Reform,” University of Chicago Legal Forum (12/01/2021). https://legal-forum.uchicago.edu/publication/internet-speech-machine-and-other-myths-confounding-section-230-reform

[2] As the paper states, this proposal originates with Citron and colleague Benjamin Wittes.


Section 230 and Trump’s Legislative Circus

Recently, the law called Section 230 of the Communications Decency Act (1996) has featured in a political cacophony that has grown only more ridiculous since the day Twitter first presumed to label Trump’s disinformation for what it was. Now, the noise continues to exacerbate legislative dysfunction down to the final hours of this toxic year.

After vetoing the 2020 National Defense Authorization Act (NDAA) because the must-pass legislation did not contain a rider to repeal Section 230, Trump then pivoted to making the same proposal (along with another unfounded investigation into election fraud) a condition of passing a broader COVID relief package favored by Democrats, but less so by most Republicans. As long as those riders are part of the spending increase bill in the Senate, Democrats cannot vote for it, which will presumably suit Leader McConnell and several other Republican senators just fine.

What any of this means with regard to Americans getting the financial assistance they need, or to Trump’s continued influence over the Republican party, remains to be seen. But for sure, the president’s very fragile ego has elevated an arcane cyber law to prominence by grossly distorting its intent and meaning, and by injecting divisive partisanship into a policy matter where lawmakers might otherwise reach consensus.

Real Section 230 Problems

The ill effects of Section 230 have nothing to do with political speech bias and everything to do with harmful conduct like harassment, libel, and sexual extortion that has too often been shielded by the statute. These unintended consequences, akin to the DMCA Section 512 problem, are largely the result of the courts’ over-broad interpretations of Section 230, resulting in dismissals of various civil claims that are incompatible with justice.

Nowhere in American life are parties that contribute to, or profit from, harmful conduct automatically immunized against civil liability, except for internet platforms. And automatic, wholesale immunity was never the intent of Section 230. As described in this post, 230 was written to encourage platform moderation, but over the years, the conditional immunity it was meant to provide was steadily asserted by platform owners as grounds to reject nearly all moderation altogether—even the basic courtesy of removing material that is known to be harmful.

So, whether a site intentionally or unintentionally hosts material that is harassing, libelous, nonconsensual pornography, or content that may be otherwise actionable in the real world, platforms have almost never been forced by court order to be so much as helpful to victims of these crimes. As attorney Carrie Goldberg can describe in detail, her client Matthew Herrick was unable to enjoin the dating site Grindr simply to demand that it remove posts made by another user with the explicit intent to cause Herrick to be physically harassed and quite possibly raped. All Grindr had to do was remove the posts, but it refused to do so on claims of protecting speech—a constitutional fallacy that is only possible because the courts have held 230 to be too broadly immunizing.

Consequently, Trump’s rhetoric on Section 230—lashing out at platforms like Twitter for presuming to label disinformation for what it is—has muddied the waters on a legal framework that otherwise requires sensible and humane review. Although Trump likely could not explain 230 to save his life, his gibbering amplified one of the most popular misconceptions about it: that “viewpoint neutrality” is either the aim of the statute or a condition for maintaining a platform’s liability shield.

Neither of those premises is true, but it is worth remembering that it was the platforms themselves who promoted this false neutrality narrative long before the Trump administration put them in a moral bind of their own making. Every Big Tech PR message for the last 20 years has been one in which it is presumed to be axiomatic that internet platforms are enhancements to and defenders of the speech right. Neutrality and speech were the public rationales for laissez-faire moderation policies that just so happened to enable the big platforms to monetize all activity. Only when disinformation became the official word of a sitting president, and hate speech spilled over more prominently into hate crimes, did any of Silicon Valley’s leaders begin to wonder if they had made egregious errors in their systems or management practices.

Meanwhile, the outgoing president’s vindictive assault on Section 230 has largely been a PR gift to the companies he would like to hobble and to those ardent believers in the failed maxim that “more speech is the antidote to bad speech.” The Electronic Frontier Foundation published a post on December 9 entitled “It’s Not Section 230 President Trump Hates, It’s the First Amendment.” Naturally, it seized upon the Trump tantrum as an opportunity to incorrectly reiterate that 1) maintaining the status quo of 230 is synonymous with protecting speech online; and 2) all critics of 230 are hellbent on repeal as an assault on the First Amendment, just like Trump.

Real Section 230 Reform

On the contrary, while some reformers have advocated apolitical reasons for a repeal of Section 230, others recommend restoring the original intent through legal reform—a reform that begins by recognizing that the bad conduct shielded by 230 means that speech is not exactly protected as a universal right on the internet in the first place. As scholars Mary Anne Franks and Danielle Citron, two of the most important thought leaders working on the 230 issue, describe in a paper published in February with the Boston University School of Law [1]:

Marginalized groups in particular, including women and racial minorities, have long battled with private censorial forces as well as governmental ones. But the unregulated internet—or rather, the selectively regulated internet—is exacerbating, not ameliorating, this problem. The current state of Section 230 may ensure free speech for the privileged few; protecting free speech for all requires reform.

Franks and Citron have made major contributions to legislative reform, addressing harms like nonconsensual pornography, and to our understanding of how Section 230, combined with “speech fundamentalism,” results in conduct like online harassment without consequence for the perpetrators or the facilitators. So, the implication that the president, or any elected official, is having his speech chilled by means of fact-checking is blatant, privileged hypocrisy in contrast to what really happens to people who do not sit in seats of power …

Failing to address online abuse does not just inflict economic, physical, and psychological harms on victims—it also jeopardizes their right to free speech. Online abuse silences victims. Targeted individuals often shut down social media profiles and e-mail accounts and withdraw from public discourse. Those with political ambitions are deterred from running for office. Journalists refrain from reporting on controversial topics. Sextortion victims are coerced into silence with threats of violence, insulating perpetrators from accountability.

Rather than a piecemeal approach to reforming Section 230, Franks and Citron propose two broad remedies—one statutory, the other judicial—to ameliorate the inadvertent shield the law presently provides to bad actors. The statutory remedy is to clarify that 230 only applies to protected speech and not to the broader term “information,” which is the word that currently animates the immunity enjoyed by platforms.

In theory, this focus on protected speech might rein in Big Tech’s rhetorical agenda to define everything posted online as “speech.” As Franks and Citron recommend, if the statute is more clearly defined, the courts can distinguish protected speech from tortious conduct posing as speech. In fact, most of us can make this commonsense distinction without law degrees; but having said that, even with the 230 statute amended accordingly, the speech bar is not an easy one to overcome. For better or worse, protected speech can encompass some very bad conduct, and the legal remedies tend to require narrowly tailored statutes, outside the scope of 230, to prohibit the conduct itself.

For instance, as I was writing this post, Dr. Franks happened to tweet the news that the Minnesota Supreme Court upheld that state’s nonconsensual pornography law as constitutional, but it is worth noting that the court rejected the state’s assertion that the conduct was a new form of unprotected speech. Instead, it held that the law served a compelling interest and was narrowly tailored to serve that purpose (i.e. strict scrutiny). It is also worth mentioning that defenders of Section 230’s status quo have generally opposed statutes prohibiting nonconsensual pornography.

In addition to possible statutory amendment to Section 230, Franks and Citron’s paper describes a judicial approach that would apply established precedent on “reasonableness” on a case-by-case basis to examine whether a platform has taken “reasonable” steps to remove or mitigate unprotected, harmful content from its servers. In practical terms, then, Matthew Herrick’s conflict with Grindr would not arise because 1) the posts at issue were not protected speech;[2] and 2) Grindr’s refusal to remove the posts would likely not meet a “reasonableness” standard familiar to any court in comparable areas of law.

On that second point, Franks and Citron cite judicial principles sounding in, for instance, copyright law, which raises the question whether “reasonableness” could be more effectively applied under Section 230 than “knowledge” of infringement has been under Section 512. But I shall leave that question open for consideration in a future post.

In general, I would argue that a very compelling reason to close the Section 230 loopholes that allow site operators to shirk responsibility is the premise that opportunity becomes motive. If we ask, for instance, why there has been an increase in nonconsensual pornography, often perpetrated by some idiot ex-boyfriend with a gripe, we can blame the weak morals of the individual, misogyny in general, or a bottle of tequila and a bad day; but a key factor that cannot be ignored is that it is just too damned easy. The opportunity to cause someone harm—potentially much greater harm than might be contemplated or intended—with the tap of a few buttons only exists because certain platforms trade in misery while others simply practice depraved indifference to it. And that is the psychosis which needs to be addressed by legitimate Section 230 reform.


[1] “The Internet as a Speech Machine and Other Myths Confounding Section 230 Reform,” Boston University School of Law (February 2020).

[2] Even worse, because the posts “spoofed” (i.e. pretended to be) Herrick, they were a form of coerced speech in addition to attempts to cause him physical harm.