Child Safety May Lead the Charge on Platform Accountability


In my last post responding to the Chamber of Progress campaign for broad liability protections for generative AI developers, I noted that lawmakers are tired of blanket immunity for Big Tech. If the current legislative landscape is any indication, we may finally be at the leading edge of genuine accountability for the myriad harms caused by social platforms operating under the protection of 90s-era immunity regimes.

Yesterday, New York Governor Kathy Hochul signed into law the SAFE for Kids Act, designed to prohibit “addictive” social media algorithms from targeting minors. The legislation treats social media as a consumer product with defective qualities that cause poor physical and mental health outcomes for young people, and which are designed to be addictive. The law defines an “addictive feed” as follows:

“Addictive feed” shall mean a website, online service, online application, or mobile application, or a portion thereof, in which multiple pieces of media generated or shared by users of a website, online service, online application, or mobile application, either concurrently or sequentially, are recommended, selected, or prioritized for display to a user based, in whole or in part, on information associated with the user or the user’s device… [emphasis added]

The ironically named NetChoice came out swinging on X, calling the New York law an unconstitutional violation of the speech right—and of course they did. But even if Big Tech mounts that legal challenge, I wouldn’t bet on it succeeding. If the argument is that the user of a platform has a First Amendment right to access material which may otherwise be restricted by this new law, that claim should be mooted by the platform’s act of “recommending” or “prioritizing” material in the first place.

As users, we see what the algorithms determine we should see based on data that can be learned about us, and this limitation on user choice mocks the assertion that social platforms are “open” forums for “speech.” For this and other reasons, the state’s narrowly tailored law with the purpose of protecting minors from the harms caused by the addictive (i.e., defective) qualities of a social media product should not be found offensive to the First Amendment.

New York attorney Carrie Goldberg represents a wide range of clients who have been harmed through online platforms—from sexual harassment and assault to kids obtaining sodium nitrite on Amazon for the purpose of committing suicide. Referring to herself as a proud co-founder (“Mama”) of the New York SAFE for Kids Act, Goldberg has long argued that online platforms may be held accountable through product liability regimes. In a recent tweet, she notes that it was her failed lawsuit against Grindr on behalf of Matthew Herrick that paved the way for this new legislation.

Meanwhile on Capitol Hill, legislation with a similar focus may be ready to pass. The Kids Online Safety Act (KOSA) also proposes to alleviate platform addiction for minors and mandates changes in product design to mitigate a range of well-documented harms—from bullying and harassment to unwanted contact by adults seeking to exploit or abuse minors. Sponsored by Senator Blumenthal, KOSA has strong bipartisan and public support. Further, consistent with Goldberg’s “defective product” argument, U.S. Surgeon General Dr. Vivek Murthy proposes a warning label approach to social media, stating, “The mental health crisis among young people is an emergency…”

Frankly, I am not so sanguine on the premise that adults fare much better when it comes to social media use and self-mitigating the hazards of the “feed,” but passing new laws to address harms to children is a good place to start. Assuming KOSA does pass—and there are many other bills in motion—it may be time to declare that Big Tech’s free ride is finally over. Nobody is buying the “progress” and “free speech” rhetoric anymore, which is good because it was never true.



Influencer Responsibility Raises Many Questions With Few Answers

Attorney Carrie Goldberg posted an interesting thread on Twitter last week in response to a September 4 New York Times article written by Taylor Lorenz. Before I go on, if you are unfamiliar with Goldberg’s work, her Brooklyn law firm represents victims of online harassment, and she has deservedly become one of the nation’s thought leaders on the subject of internet platform responsibility and liability. As I said in my praise for her book Nobody’s Victim, if anyone is going to help sensibly reform the over-broad application of Section 230 to the benefit of its unintended victims, it will be women like Carrie. But for the purposes of this post, I will stay away from legal liability and stick to questions of social conduct.

The headline of Lorenz’s article poses what Goldberg called the “million-dollar question”: Are Influencers Responsible for the Behavior of Their Followers? The article focuses on a 17-year-old TikToker named Chris (@Donelij) who had been making what users of that platform call “reaction” videos. These are created with the app’s “Duet” function, which enables a creator to capture himself in a split-screen alongside a video posted by any other TikToker. In Chris’s case, he “duets” with videos that reveal some shocking or surprising twist so that his viewers get to watch his reaction when he shifts from approving smile to bemused deadpan. This recurring schtick earned @Donelij over two million followers.

But because Chris performed his deadpan bit in response to a number of transgender TikTokers, whose videos reveal some variation on a surprising gender switch, Chris’s videos apparently inspired some of his viewers to antagonize or threaten the LGBTQ TikTokers featured in the Duets. “Even though he never says a word,” writes Lorenz, “thousands of people have called [his videos] out for being homophobic. Young gay and trans TikTokers who have been featured in Chris’s reactions report they have suffered vicious harassment from commenters and in messages.”

Lorenz reports that, last week, both of Chris’s accounts were banned by TikTok due to “multiple community guideline violations.” Assuming the reaction videos and their associated fallout with the LGBTQ TikTokers were the reason for these terminations, Lorenz poses her thesis question about responsibility. To this, Goldberg offered a tweet thread describing her views regarding both the moral and legal responsibilities of creators, followers, and platforms, placing most of the burden, when harm is done, on platforms.

In general, I agree with Carrie. Many of us would like to reverse, or at least reassess, the longstanding assumption that online platforms are strictly neutral conduits and that users alone are responsible for anything that happens on them. At the same time, however, this particular example strikes me as difficult to reconcile with regard to questions of responsibility, though it does point to the fact that our frame of reference (i.e. old media) falls short when grappling with the true nature of social platforms. Let’s step back for a moment.

All speakers bear some measure of responsibility for the consequences of their speech. While I do not think the contours of that responsibility are easily generalized, it is morally incumbent upon any speaker to at least acknowledge that speech has the power to inspire conduct and to at least contemplate what that conduct might be. Regardless, there will always be jerks among listeners, who will infer, even from expressions of parody and satire, a coded endorsement of their intolerance or hatred for some individual or group.

Does that mean, for instance, that comedian Dave Chappelle, because he has LGBTQ material in his show, is responsible if someone interprets his jokes as permission to harass members of that community? If anyone is paying attention to Chappelle, he endorses no such thing. Quite the opposite. But because we are talking about individuals who would go out of their way to harass people in the first place, neither Chappelle, nor any other creator, can possibly account for every mental malfunction that transpires between their jibes and some listener’s behavior. (It isn’t J. D. Salinger’s fault that Catcher in the Rye was allegedly inspirational to at least two murders and one attempted murder.)

So, with regard to traditional, pre-social-media experiences (whence most of our views are derived), I suspect the reasonable answer is that the influencer is not responsible for the conduct of the follower. Returning to the example at hand, then, it seems difficult to cite Chris’s minimalist comedy—shifting from smile to deadpan in response to various surprise videos—as an inducement to harassing the gay and trans people featured in his Duets.

Given the way TikTok works (akin to Instagram in which “following” simply means the follower will see a creator’s videos from time to time while scrolling), it seems more likely that Chris’s followers unavoidably include at least some individuals who already spend their time hating on the LGBTQ community, and that his reaction pieces simply provided the names of some new targets. Any asshole who would go out of his way to mock, shame, or threaten someone in this way is not suddenly moved to engage in that behavior by one guy making a relatively anodyne joke.

At the same time, the extent to which Chris believes his schtick conveys a negative view of gay and trans people is a moral calculus he ought to consider. My GenZ, TikTok-using daughter showed me how a “duetter” can choose to hide the screen name of the person in the video to which they are responding. So, it is fair to ask why Chris did not employ this option (if he did not), which does not make the “duetted” party impossible to identify but does make them harder to locate without additional effort.

Social platforms will naturally generate Venn diagrams that are complex variations on the unremarkable fact that certain relationships between speakers and followers are often coincidental rather than causal. A video that made the rounds several weeks ago depicted a bevy of bouncing Boomers, festooned in their Trumpanalia and partying on a city street to the song “Y-M-C-A.” And because it would be absurd to interpret this dyspeptic dance as a celebration of gay pride, it is reasonable to assume that the relationship between the Village People and these village idiots was merely coincidental.

But what is unique about social platforms, of course, is not that they offer a forum for response, but that they insist upon a response. That blank space awaiting our input practically taps its foot in anticipation of some offering to the altar of “engagement.” And for many who create and/or comment, both praise and scorn, depending on who is doing which, can serve as encouragement, even of speech that is very ugly. When some clown writes “Die faggot” at the gay TikToker, and others pile on in kind, these users are all receiving instantaneous and 24/7 positive reinforcement for their terrible conduct. This is a psychological stimulus unique to social media. It cannot be compared to the relationship between influencers and followers in other contexts. And, yes, to Goldberg’s point, the platforms bear the ultimate responsibility for designing and monetizing what is essentially a digital narcotic.

Almost every conversation about platform responsibility—many of which focus on the role of Facebook in contemporary politics—is a variation on the theme of policy measures. What can and should the platform do to moderate the content on its site? What can and should Congress do to oversee the influence of these platforms? The paradox inherent to those intertwined questions is self-evident. It seems hopeless to expect that the divisions sown by Facebook will be adequately addressed by a house divided.

But very few of our conversations about responsibility begin with the premise that social media, from a psychological perspective, is basically crack in the schoolyard. More specifically, because TikTok is predominantly a GenZer’s platform, many of the harassers referred to in Lorenz’s article are very likely minors. That does not wholly relieve them of responsibility for their conduct, but it colors the concept of “community standards.” Give a platform like that to millions of adolescents and tell them to play nice? What do we think is going to happen?

I do not know that the example of Chris’s reaction videos is especially helpful in this conversation. His content seems a bit too removed from the corresponding harm that was done, and it seems tame relative to the degree of ugliness that often flows freely on the internet. Further, it would be difficult to proscribe his videos according to some rule that would not apply to a great deal of parody and satire (though there may be more than is reported in Lorenz’s article).

As the title of this post admits, I do not think there are easy answers to these questions. It does seem that the rules on social media are acutely distinct from our pre-internet experiences, but defining those rules is another matter. Social media is a narcotic, and we all bear a measure of responsibility for its use. Meanwhile, it doesn’t seem quite right that, of all the parties involved, the digital drug-makers are left to profit from the best and worst consequences without any obligations.  

Carrie Goldberg’s “Nobody’s Victim”: Cyber-Policy is Not an Abstraction

During an exchange on this blog in 2014 with an individual named Anonymous—it must have been a very popular baby name at some point—I was told, “Yes, yes, David, show us on the doll where the Internet touched you, because we all know that all evil comes from there.”  That discussion was in the context of the internet industry’s anti-copyright agenda, but the smugness of the response, lurking behind a concealed identity while making an eye-rolling allusion to sexual assault, is characteristic of the tech-bro culture that dismisses any conversation about the darker aspects of digital life.  In fact, I am fairly sure it was the same Anonymous who decided that I had “failed the free speech test” because I wrote encouragingly about the prospect of making the conduct generally referred to as “revenge porn” a federal crime.

Those old exchanges, conducted in the safety of the abstract, came rushing into the foreground while I read attorney Carrie Goldberg’s book Nobody’s Victim:  Fighting Psychos, Stalkers, Pervs, and Trolls (Plume 2019).  Because Goldberg and her colleagues do not address conduct like “revenge porn” in the abstract, they deal with it as a tangible and terrifying reality.  It is at her Brooklyn law firm where the victims of that crime (and other forms of harassment and abuse) arrive shattered, frightened and suicidally desperate to escape the hell their lives have become—often with the push of a button.  These are people who can show us exactly how and where the “internet touched” them, and Goldberg’s book is a harrowing tutorial in the various ways online platforms provide opportunity, motive, sanctuary, and even profit for individuals who purposely choose to destroy other human beings.  

Nobody’s Victim reads like an anthology of short thriller/horror stories but for the fact that each of the terrorized protagonists is a real person, and far too many of them are children.  These infuriating anecdotes are interwoven with the story of Goldberg’s own transformation from a young woman nearly destroyed by predatory men to become, as she puts it, the attorney she needed when she was in trouble.  The result is both an inspiring narrative of personal triumph over adversity and a rigorous critique of our inadequate legal framework, which needlessly exacerbates the suffering of people targeted by life-threatening attacks—attacks that were simply not possible before the internet as we know it.

Covering a lot of ground—from stalking to sextortion—Goldberg tells the stories of her archetypal clients, along with her own jaw-dropping experiences, in a voice that pairs the discipline of a lawyer with the passion of a crusader. “We can be the army to take these motherfuckers down,” her introduction concludes, and “What happened to you matters,” is the mantra of her epilogue.  It is clear that the central message she wants to convey is one of empowerment for the constituency she represents, but the details are chilling to say the least.

Anyone anywhere can have his or her life torn apart by remote control—i.e. via the web.  All the malefactor really needs is basic computer skills, a little too much time on his hands, and a profoundly broken moral compass.  Psychos, stalkers, pervs, trolls, and assholes are all specific types of criminals in the “Carrie Goldberg Taxonomy of Offenders.”  For instance, the ex-boyfriend who uploads non-consensual intimate images to a revenge-porn site is a psycho, while the site operator, profiting off the misery of others, is an asshole.

As Goldberg notes in Chapter 6, by the year 2014, there were about 3,000 websites dedicated to hosting revenge porn.  That is a hell of a lot of guys willing to expose their ex-girlfriends to a range of potential trauma—including public humiliation, job loss, relationship damage, sexual assault, PTSD, and suicide—simply because the girl/woman broke off the relationship.  This volume of men engaging in revenge porn does seem to imply that the existence of the technology itself becomes a motive or rationale for the conduct, but that is perhaps a subject to explore in a future post.

One theme that comes through loud and clear for me in Nobody’s Victim—particularly in the context of the editorial scope of this blog—is that the individual conduct of the psychos et al. is only slightly less maddening than our systemic failure to protect the victims.  As a cyber-policy matter, that means the chronic misinterpretation of Section 230 of the Communications Decency Act as a speech-right protection and a blanket liability shield for online service providers.

Taking on Section 230

Goldberg’s most high-profile client Matthew Herrick was the target of a disgruntled ex-boyfriend named Juan Carlos Gutierrez, who tried, via the gay dating app Grindr, to get Herrick at least raped, if not murdered.  By creating several Grindr accounts designed to impersonate Herrick, Gutierrez posted invitations to seek him out for rough, “rape-fantasy” sex, including messages that any protests to stop should be taken as “part of the game.”  Hundreds of men swarmed into Herrick’s life for more than a year—appearing at his home and work, often becoming verbally or physically aggressive upon discovering that he was not offering what they were looking for.

With Goldberg’s help, Herrick succeeded in getting Gutierrez convicted on felony charges, but what they could never obtain was even the most basic form of assistance from Grindr.  You might think it would be at least common courtesy for an internet business to remove accounts that falsely claim to be you—particularly when those accounts are being used to facilitate criminal threats to your safety and livelihood.  In fact, Scruff, the smaller dating app Gutierrez had also been using, eagerly and sympathetically complied with Herrick’s plea for help.  But Grindr told him to fuck off by saying, “There’s nothing we can do.”

Herrick, through Goldberg, sued Grindr for “negligence, deceptive business practices and false advertising, intentional and negligent infliction of emotional distress, failure to warn, and negligent misrepresentation.”  They lost in both the District Court and in the Second Circuit Court of Appeals, principally because most courts continue to read Section 230 of the CDA as absolute immunity for online service providers.  This cognitive dissonance, which chooses to ignore the fact that a matter like Herrick’s plight is wholly unrelated to free speech, is emphasized in the Electronic Frontier Foundation’s amicus brief filed in the appeal on behalf of Grindr… 

Intermediaries allow Internet users to connect easily with family and friends, follow the news, share opinions and personal experiences, create and share art, and debate politics. Appellant’s efforts to circumvent Section 230’s protections undermine Congress’s goal of encouraging open platforms and robust online speech.

Isn’t that pretty?  But what the fuck has any of it got to do with using internet technologies to impersonate someone; to commit libel, slander, or defamation in his/her name; to deploy violent people (or in some cases SWAT teams) against a private individual; or to get someone fired or arrested—and all for the perpetrator’s amusement, vengeance, or profit?  None of that conduct is remotely protected by the speech right, and all of it—all of it—infringes the speech rights and other civil liberties of the victims.  Perhaps most absurdly, organizations like EFF choose to overlook the fact that the first right being denied to someone in Herrick’s predicament is the right to safely access all those invaluable activities enabled by online “intermediaries.”    

No, Grindr did not commit those crimes, but let’s be real.  What was Herrick asking Grindr to do?  Remove the conduits through which crimes were being committed against him—online accounts pretending to be him.  Scruff complied, and I didn’t feel a tremor in the free speech right, did you?   If we truly cannot make a legal distinction between Herrick’s circumstances and all that frilly bullshit the EFF likes to repeat ad nauseum, then, we are clearly too stupid to reap the benefits of the internet while mitigating its harms.  

Suffice to say, a fight over Section 230 is indeed brewing.  As it heats up, Silicon Valley will marshal its seemingly endless resources to defend the status quo, and they will carpet bomb the public with messages that any change to this law will be an existential threat to the internet as we know it.  There is some truth to that, of course, but the internet as we know it needs a lot of work.  Meanwhile, if anyone is going to win against Big Tech’s juggernaut on this issue, it will be thanks to the leadership of (mostly women) like Carrie Goldberg, her colleagues, and her clients.  

It is an unfortunate axiom that policy rarely changes without some constituency suffering harm for a period of time; and those are exactly the people whose stories Goldberg is in a position to tell—in court, in Congress, and to the public.  If you read Nobody’s Victim and still insist, like my friend Anonymous, that this is all a theoretical debate about anomalous cases, largely mooted by the speech right, there’s a pretty good chance you’re an asshole—if not a psycho, stalker, perv, or troll.  And that clock you hear ticking is actually the sound of Carrie Goldberg’s signature high heels heading your way.