How the Supreme Court Made Life Harder for Victims of Cyberstalking

It was such a busy summer that I never got a chance to write about the Supreme Court’s June decision in the cyberstalking case Counterman v. Colorado. The story caught my attention when legal scholar and president of the Cyber Civil Rights Initiative Mary Anne Franks tweeted, “the Supreme Court has just decreed that stalking is free speech protected by the First Amendment if the stalker genuinely believes his actions are non-threatening. That is, the more deluded the stalker the more protected the stalking.” [1] The key facts as summarized in the opinion are as follows:

Billy Counterman sent hundreds of Facebook messages to C. W., a local singer and musician. The two had never met, and C. W. did not respond. In fact, she tried repeatedly to block him, but each time, Counterman created a new Facebook account and resumed contacting C. W. Several of his messages envisaged violent harm befalling her. Counterman’s messages put C. W. in fear and upended her daily existence: C. W. stopped walking alone, declined social engagements, and canceled some of her performances. C. W. eventually contacted the authorities. The State charged Counterman under a Colorado statute making it unlawful to “[r]epeatedly . . . make[] any form of communication with another person” in “a manner that would cause a reasonable person to suffer serious emotional distress and does cause that person . . . to suffer serious emotional distress.”

Read accounts of people who have been cyberstalked, and the stories are often harrowing. The content of the stalker’s communication doesn’t even have to be threatening, though it usually gets there. Just knowing that somebody (usually a man) has selected you (usually a woman) as a target for unwanted attention can be unnerving to the point that it has life-altering consequences, including general anxiety, fear of movement, fear of speech, job and opportunity loss, and even suicide. That can be true even if the stalker doesn’t take or induce any action outside cyberspace, though many incidents that begin online eventually escalate to physical and violent contact.

To be clear, no member of the Court defended (nor do I believe would defend) Counterman’s conduct. The question addressed was what legal standard should have been applied in the enforcement of the Colorado law to determine the threshold where the defendant’s speech is no longer protected by the First Amendment. At trial, the state court applied an “objective” standard to determine whether the content of the speech at issue would be perceived by a reasonable observer as a “true threat,” a term of art that encompasses one category of unprotected speech—threats of violence.

The Supreme Court majority held that Colorado erred by not applying a “subjective” standard, under which the State must show that the defendant had some subjective understanding of the threatening nature of his statements; and the Court further held that recklessness suffices, i.e., a showing that the defendant consciously disregarded a substantial risk that his communications would be viewed as threatening violence. Hence, Dr. Franks’s observation that the more unreasonable the cyberstalker, the more likely his online harassment will be protected speech. And how many cyberstalkers are reasonable?

Sound Dissent by Justice Barrett

While I do not have a deep knowledge of the relevant case law, Justice Barrett’s dissent (joined by Justice Thomas) reads as the better argument—both as law and common sense. The dissent argues that the majority singled out “true threats” in this case for preferential treatment to fashion a “Goldilocks decision” (i.e., inventing a middle ground that is neither necessary nor consistent with precedent). “True threats do not enjoy First Amendment protection, and nearly every other category of unprotected speech may be restricted using an objective standard,” the dissent states.

The extent to which the Court departs from precedent is difficult to comment upon without studying all the underlying First Amendment case law, but Justice Barrett’s focus on “context” makes a sound case that an objective standard can maintain the balance between protected and unprotected speech. “…the statement must be deemed threatening by a reasonable listener who is familiar with the ‘entire factual context’ in which the statement occurs [citation omitted]. This inquiry captures (among other things) the speaker’s tone, the audience, the medium for the communication…” Barrett writes.

Indeed, any target of online stalking knows instinctively that words as seemingly unthreatening as You look lovely today may indeed be threatening if, for example, the statement comes from a stranger or an angry, obsessive ex-husband or boyfriend. Weighing the legality of speech without context—not just online, but anywhere—is a half-baked analysis. For instance, “Vote for me or you won’t have a country anymore” delivered on the stump is protected hyperbole, while “Fight like hell, or you won’t have a country anymore” delivered to an angry mob ready to march to the Capitol is considered by many reasonable observers to be incitement.

Justice Barrett highlights the Colorado cyberstalking statute (and notes that other states have similar laws) as an example of a contextual, objective analysis in which juries are instructed to weigh the defendant’s communications in a five-factor test to thoroughly understand the nature of the speech.[2] “Each consideration helps weed out protected speech from true threats,” she writes, and again, this strikes me as the more rational approach to address the alleged crime at issue.

Further, the dissent argues that the majority leans heavily and improperly on the 1964 case New York Times v. Sullivan. There, the Court held that a public figure seeking damages for libel or defamation must prove that the defendant made the statement knowing it was false or with reckless disregard of whether it was true or false. But citing subsequent case law from 1974 and 1985, Justice Barrett argues that Sullivan applies to public parties while, “A private person need only satisfy an objective standard to recover actual damages for defamation. And if the defamatory speech does not involve a matter of public concern, she may recover punitive damages with the same showing.” [Citations omitted]

Assuming the dissent is correct about the majority’s inapt reliance on Sullivan in this case, the public/private distinction is significant because a typical cyberstalking incident involves ordinary citizens rather than public figures—let alone “matters of public concern.” If someone tweets at Sen. Tuberville and calls him a sniveling, treasonous, ignorant weasel who should have been aborted, that is paradigmatically protected speech. Elected officials volunteer for public scorn as a bedrock principle of the First Amendment,[3] and it would be an offense to two of the amendment’s freedoms if it were sufficient to find some cohort willing to call that tweet a “true threat.” Thus, evidence of the speaker’s intent and ability to cause violence must be present before his speech may be considered unprotected.

By contrast, the cyberstalker who tells his target that he wishes she were dead or writes that her death is imminent or that he hopes she gets raped, etc. may not express a “true threat” by words alone, but in context, the messages can have the same effect as a “true threat.” Even facially innocuous communication can be used to make a private individual feel threatened, especially when she has no idea who she’s dealing with, or what his intent might be.

By the time the target of a cyberstalker turns to law enforcement for relief, she has usually suffered substantial harassment, fear for her safety, and some form of irreparable damage to her liberty and/or financial interests. In Counterman, the Court compounds these injuries by elevating the standard for punishing an alleged cyberstalker to one in which a jury must read the mind of the defendant to find that he both understood and recklessly disregarded the threatening nature of his communication. This sets the bar higher than necessary in cases where the speech at issue is of no public interest other than, in most cases, making it stop.

The Tech-Utopian Concept of the Speech Right Lives in this Case

Unsurprisingly, the Electronic Frontier Foundation filed an amicus brief for the petitioner in Counterman stating, “This Court should make clear that the definition of a true threat necessarily includes a subjective speaker’s intent to threaten.” True to form, the EFF inflated its brief with praise for the scope, scale, and cultural significance of social media and cited examples of violent terms or rhetoric that may be interpreted as threatening but may still be protected. Notably, no variant of the word stalking appears in the EFF’s brief.

All that general discussion about the value of social media as an alleged free speech machine may be true in certain contexts, but it should be seen as irrelevant with regard to cyberstalking. Because here’s where the “digital rights” organizations err, and where the Court has now made matters worse: cyberstalking is action more than it is speech. It may take the form of words and/or images, but the ongoing contact itself is intended to cause suffering, and very often, it succeeds in doing just that. As Dr. Franks put it, quoted in Reuters shortly after the decision:

It is deeply disappointing that the Supreme Court has chosen not only to allow stalkers to act with impunity, but to do so on the basis that stalking is free speech protected by the First Amendment. In doing so, they have sentenced victims of stalking to potentially lifelong sentences of terror, as well as increasing their risk of being killed by their stalkers.


If you or anyone you know is a target of cyberstalking, the two best resources I know are the Cyber Civil Rights Initiative and the Carrie Goldberg Victims’ Rights Law Firm.

[1] Dr. Franks also offered some sharp comments about the joking at oral argument, which reflected insensitivity to the dangers and traumas experienced by targets of cyberstalking. https://twitter.com/ma_franks/status/1648724142198226946

[2] (1) the statement’s role in a broader exchange, if any, including surrounding events; (2) the medium or platform through which the statement was communicated, including any distinctive conventions or architectural features; (3) the manner in which the statement was conveyed (e.g., anonymously or not, privately or publicly); (4) the relationship between the speaker and recipient(s); and (5) the subjective reaction of the statement’s intended or foreseeable recipient(s).

[3] This is a reference to Sen. Tuberville’s holding up military promotions to protest the DOD’s healthcare policy vis-à-vis abortion.

Photo source: SBArtsMedia

Professor Citron Proposes Civil Remedies for Violations of Intimate Privacy

At a panel hosted by The Reykjavik Dialogue,[1] during a discussion about law enforcement, justice, and sex discrimination, Mary Anne Franks, co-founder of the Cyber Civil Rights Initiative, noted that when her organization asked perpetrators who had engaged in revenge porn what would have stopped them from doing it, the answer was almost universally, “If I thought I could go to jail for it.”

The act of distributing intimate, private images via the internet without permission of the persons depicted is a crime—one that causes ongoing harm to victims, including harassment and violence, destruction of interpersonal relationships, loss of employment opportunities, psychological disorder, and suicide. And thanks substantially to the efforts of Franks and her colleague at CCRI, Danielle Keats Citron, nearly every state has criminalized the act of nonconsensual distribution of intimate images; and a federal bill to do likewise, the SHIELD Act, passed the House in March as part of H.R. 1620.

But while these laws pave the way for prosecution of the individuals who engage in this conduct, they do nothing about removing these violations of intimate privacy from the websites hosting the material. And to make matters more complicated, “deepfake” technologies make it relatively easy to depict just about anyone in intimate or sexually explicit material for which they were never actually photographed.

Citron Proposes Privacy Injunctions

In a new paper that addresses the nonconsensual distribution of both real and manufactured images, Citron proposes two interdependent legal mechanisms to overcome the hurdles to removing this kind of content from the internet, and she also discusses the First Amendment questions raised as both a constitutional and a cultural matter.

First, Citron argues that courts must be empowered with “clear legislative permission” to provide plaintiffs with injunctive relief by ordering sites “to remove, delete, or otherwise make unavailable intimate images, real or fake, hosted without written permission.” One might think this is common sense, or simply a matter of basic decency, but court orders to remove material of any kind have been assiduously opposed by internet platforms large and small, and with considerable legal and PR support from “digital rights” activists like the Electronic Frontier Foundation. (See posts here and here about Google v. Equustek and Hassell v. Bird.)

The rationale usually argued in the blogosphere and the courts for refusal to remove any content is the First Amendment—a fallacy that now roils the public debate—but the legal foundation that has given the platforms the swagger to distort the speech and press rights has been the courts’ over-broad interpretation of Section 230 of the Communications Decency Act as a blanket immunity. Not only have platforms been shielded against being named parties to civil litigation, but 230 has been invoked as the reason to shield them even from injunctions that do nothing more than order the removal of harmful material. Naturally, when a web company cannot be held liable for anything, it’s very easy for its operators to call all content “speech” and tell the public that all platforms are inherently engines of free expression.

Thus, in order for the above-mentioned legislative permission to be effective, Citron argues, as she and Franks have in earlier papers, that, “Congress should amend Section 230 to make clear that platforms and search engines can be sued in cases seeking injunctive relief and attorney’s fees related to the removal of intimate images hosted without written consent.”

Citron acknowledges that the solution is not perfect, particularly because litigation directed at one incident on one platform does not address the likelihood that intimate images will be distributed across multiple sites; but she writes, “Victims need to know that society recognizes the damage to the dignity and intimate privacy of victims, that law can help mitigate the damage, that sites are not law-free zones, and that lawyers will represent them.”

If that sounds like Citron’s proposed remedies are more symbolic than remedial, I will echo her comparison to civil rights legislation and argue that we should not underestimate even the symbolism of law to effect widespread remedies by fostering cultural and behavioral change. Presumably, most people do believe the act of distributing intimate images without permission is wrong, whether for revenge or any other motive. So, it helps when the law says it’s wrong, too. But at the same time, Citron addresses a broader cultural phenomenon in which Americans in particular struggle with our brand of the speech right and the distinction between access to information and prurient curiosity.

As a constitutional question, when a law restricts rights like those enumerated in the First Amendment, it must be held to the standard known as strict scrutiny. This means that a statute must be narrowly tailored to serve a compelling public interest and must use the least restrictive means available to achieve its purpose. Here, Citron notes that the state laws criminalizing the nonconsensual distribution of intimate images have already held up to constitutional challenges in Vermont, Illinois, and Minnesota, but she also discusses that gray area where the public’s right to know is often too easily conflated with general interest. “By my lights, there can be a vast difference between learning about a public official’s intimate information and seeing photographs or videos documenting it. That distinction is worth careful consideration,” Citron writes.

Agreed. Specifically, did the American public have a right to know that Rep. Katie Hill was intimately involved with a member of her staff and, allegedly, using marijuana? Yes. Even though I personally do not care much what an elected official does in her private life unless it directly intersects with the official role, those allegations are certainly news that voters have a right to know. But I agree with Citron that there is a moral line—I would say a chasm—between a news report about Hill’s conduct and the publication of her intimate images (albeit semi-redacted) on the site RedState.

Hill sued RedState owner Salem Media,[2] and the publisher prevailed on a motion to strike the complaint under California’s anti-SLAPP law,[3] with the court finding, in Citron’s words, that “the photos shed light on Hill’s fitness for office.” The hell they did. How the information about Hill’s conduct sheds light on her fitness for office is up to the voters, but the leaked photos were nothing more than RedState’s opportunity to earn revenue by pandering to the worst impulses of the electorate, which increasingly cannot distinguish between political discourse and tribal brutality. RedState’s publication of the photos is barely distinguishable from revenge porn disguised as political reportage.[4] And to add insult to injury, Hill had to pay $200,000 for Salem’s legal fees.

As Citron notes, “Most cases involving the nonconsensual disclosure of intimate images will not present close calls about the boundaries of the public’s legitimate interest.” And, of course, this is correct. Most individuals who engage in this kind of behavior are not even propaganda mongers, let alone journalists. But I do suspect that the blurry lines we see in cases like RedState re. Hill or Gawker re. Hulk Hogan imply, to the techbros who violate intimate privacy, that what they are doing is not criminal. It is. And it is time for the laws to catch up to that reality.


[1] Renewing Activism to End Violence Against Women www.rekjavikdialogue.is

[2] Hill’s counsel is Carrie Goldberg, leading specialist in this area.

[3] Strategic Lawsuit Against Public Participation.

[4] To be clear, I would say the same thing about the publication of similar photos of Reps. Boebert or Greene for whom I have nothing but contempt.

Can We Hope to Sensibly Reform Section 230?

In a paper published in 2020,[1] scholars Danielle Keats Citron and Mary Anne Franks advocate a relatively modest and elegant approach to amending Section 230 of the Communications Decency Act of 1996—changes that would directly help the statute’s unintended victims—but it is difficult to imagine how any nuanced consideration of the 230 issue will make headway in the current political climate.

At one extreme, the Former Republican Party (FRP) has amped up “Repeal 230” into a buzzy talking point with no practical or legal merit whatsoever; while shouting from the other side of the vortex is the internet industry and its network of supposedly progressive groups, who insist that the status quo of 230 is the keystone of the entire internet ecosystem. What these seemingly opposite forces have in common is that both have exploited the misconception that Section 230 has something to do with viewpoint neutrality. It does not. Neither by the letter nor the spirit of the law.

To recap, there are two main parts to Section 230 under the “Good Samaritan” clause. The first states that online service providers will not be considered “publishers” of material provided by other parties. So, whether you or I or the NYT posts something on Facebook that is potentially harmful, and also unprotected speech (e.g. defamation), Facebook is shielded from potential liability resulting from that material. The second part states that when a platform engages in moderation and removes “objectionable material,” this does not render the platform a potentially liable “publisher” either. And it does not matter whether “objectionable material” comprises illegal content (e.g. child porn) or simply material the platform proscribes according to its own terms of service.

Nothing in the 230 statute states, or even implies, that service providers are limited by the speech right—indeed, as private entities, it is their First Amendment right to moderate as they wish—or that they are obligated to maintain viewpoint neutrality as a condition of the liability shield. That said, it was the platform operators themselves who promoted the false narrative that social media sites are the shiny new “engines of speech” right up until 2016, when “objectionable material” (mostly in the form of dangerous misinformation) steadily became the largest plank in the platform of what used to be the Republican party. Meanwhile, the real victims of Section 230’s unintended consequences may continue to be ignored amid the storm of insanity encircling this one fragment of cyber law.

Simply put, Section 230 is the reason why online platforms may not be held liable when their operators host, or even encourage and monetize, any of the following: nonconsensual pornography, child sexual abuse material (CSAM), libel and defamation, hazardous misinformation, organized hate groups, harassment, or incitements of violence. And while vested interests play rhetorical games with the allegedly blurry lines between speech and any of that material, Citron and Franks first advocate clarifying that ambiguity by striking the word information from part one of the statute and replacing it with the word speech. “The revision would put all parties in a Section 230 case on notice that the classification of content as speech is not a given, but a fact to be demonstrated,” states their paper.

Unlike “information,” protected “speech” has a legal definition rooted in case law, and at least some of the aforementioned categories of material would never qualify as speech under legal scrutiny, while others (e.g. hate speech) would be subject to review on a case-by-case basis. Perhaps most importantly, what this single word change likely accomplishes for, say, victims of harassment, is that it would more frequently induce a platform to remove harmful material, either voluntarily or by court order, rather than choose to litigate to try to prove that the harmful content is protected speech. As things stand, almost everything online is presumed to be speech. So, if a party uses any intermediary, from Twitter to a dating app, to cause even severe harm to another party, the intermediary is under no obligation to provide relief by removing the content. And most courts have held that 230 supports this position.

Under this one-word revision, if a platform knowingly continues to host allegedly actionable material, the platform voids its presumption of immunity, which does not mean it is necessarily liable for any harm. A complainant still bears the burden to prove the merits of a complaint just like any other case, but the platform would not automatically be indemnified at the summary judgment phase of a case. Meanwhile, the only form of relief many complainants ever want is removal of the harmful content, and not necessarily a damage award from a platform that otherwise does the right thing.

In that regard, if a platform unknowingly hosts potentially actionable content, as almost any platform inevitably does, Citron and Franks advocate another modification to 230, requiring that a platform demonstrate that it maintains a “reasonable,” ongoing practice of removing objectionable material upon notice or independent discovery of the problematic content. [2] “If adopted,” their paper states, “the question before the courts in a motion to dismiss on Section 230 grounds would be whether a defendant employed reasonable content moderation practices in the face of unlawful activity that manifestly causes harm to individuals.”

This reasonableness standard would presumably accomplish two things: first, it would provide the many platforms operating in good faith with the kind of liability protection intended by Section 230; and second, it would immediately void the liability shield for those platforms that intentionally operate as Bad Samaritans. Sites that purposely trade in libel and defamation, nonconsensual pornography, and harassment (and quite possibly hate speech and incitements to violence) would no longer be able to duck behind the Vibranium shield they have been wielding to avoid being named parties in litigation. In many cases, this requirement to demonstrate a “reasonable” moderation policy would probably obliterate the business models of sites that intentionally profit from the misery of others, and I fail to see a downside in that outcome.

Of course, amending 230 requires an act of Congress, and there’s the rub. Not only will Silicon Valley throw its considerable resources at campaigns to leave the statute untouched until doomsday, but step one proposed by Citron and Franks—replacing information with speech—runs head-first into the existential crisis we currently face as a nation. Political speech is paradigmatically protected speech, arguably the most sacred of all forms of protected speech. But at present, one party has decided that its political speech shall embrace an insurrection of lies, outlandish conspiracy theory, and even violence against the very foundation on which the speech right itself is written. Whether we survive that paradox is a much bigger question than internet governance, but for the everyday victims of Section 230, it would be grand if we could address what is legitimately wrong with this law.

[1] “The Internet as Speech Machine and Other Myths Confounding Section 230 Reform,” University of Chicago Legal Forum (12/01/2021). https://legal-forum.uchicago.edu/publication/internet-speech-machine-and-other-myths-confounding-section-230-reform

[2] As the paper states, this proposal originates with Citron and colleague Benjamin Wittes.


Vortex image by sondem