In hearing with Big Tech, senators make headlines, but can they make headway?

On Wednesday, January 31, the Senate Judiciary Committee held a dramatic hearing titled "Big Tech and the Online Child Sexual Exploitation Crisis." The gallery was filled with family members representing young victims of sexual exploitation, drug-related deaths, and the adverse mental health effects of social media that can lead to chronic illness and suicide. The witnesses who provided testimony and faced often tense grilling by senators included Mark Zuckerberg, CEO of Meta; Linda Yaccarino, CEO of X Corp; Shou Chew, CEO of TikTok; Evan Spiegel, CEO of Snap Inc.; and Jason Citron, CEO of Discord Inc.

By now, many highlights have been published in the press and on social media, including Sen. Graham's opening salvo telling the witnesses they "have blood on their hands." There was also Sen. Hawley's rhetorical grilling of Zuckerberg, asking whether the Meta CEO had personally created a fund out of his billions to compensate any of the families. And then, there was Sen. Whitehouse, who stated quite simply, "We're here because your platforms really suck at policing themselves," thereby summarizing a bipartisan sentiment that has produced five bills passed by this committee alone.

Dramatic moments aside, though, what, if anything, will get done this year? As committee members themselves noted throughout the hearing, this is a road much traveled, and little has been accomplished, either through legislation or through voluntary measures by the platforms, to address the kinds of harms at issue. Big Tech's "tobacco moment" was supposed to have come in 2021, when key witnesses and whistleblowers testified that, yes, social media platforms can cause harm to users, that they are designed to be addictive, and that industry executives put revenue ahead of safety.

Notwithstanding Sen. Cruz and other Republicans blasting Mr. Chew over the valid but separate matter of TikTok's alleged obligations to censor content and/or provide information to the Chinese Communist Party, nearly every senator reiterated a theme of rare unanimity on the central issues before the committee. There is, of course, no political downside for either party when the issues involve children, sexual exploitation, suicide, and fentanyl, and the target is Big Tech. There should be no doubt that the intent to legislate is real, but several senators alluded to the platforms' lack of cooperation and their lobbying power to avoid federal intervention.

For instance, among the bills cited (and not wholly supported by the online platforms), the SHIELD Act would criminalize the nonconsensual distribution of intimate visual depictions of persons, a subject that has been on the Hill since Rep. Speier first introduced a bill in 2015. Now, with advancements in AI tools that can generate synthetic sexual material using the likeness of a real person (e.g., what happened to Taylor Swift), the issue is more complicated. And by my count, there are at least two House bills responding to AI as a method of achieving potentially more harmful results than the distribution of existing recorded material.[1]

Presumably, Congress will need to harmonize these legislative efforts, which overlap where they seek to mitigate harm based on the nature of certain material and/or the means by which that material is produced and distributed. Moreover, the various issues raised in the hearing imply distinct forms of accountability (e.g., platform designs that may harm mental health; the handling of material uploaded by users; or transparency about negative effects).

In a future post, I will try to summarize all the proposed legislation designed to address specific harms caused or exacerbated by social media platforms. But one subject raised on Wednesday, and one that must come first, is revision of Section 230 of the Communications Decency Act. As discussed here many times, Section 230 has been improperly read by the courts as blanket immunity from civil litigation for online service providers, regardless of how irresponsibly the operators may address harmful material uploaded by a user of the platform.

Section 230 Front and Center

Sen. Graham declared that it is time to repeal Section 230, while other senators were more measured, alluding to revision of the law. Regardless, there should be little doubt that Congress supports the premise that online platforms must be subject to litigation in order to incentivize more effective cooperation in addressing various harms. Most immediately, any revision of 230 must make clear that platforms are not exempt from court orders to remove material that is harmful to the aggrieved party.

One of the most infuriating aspects of the misapplication of 230 to date is not simply that the platform is never held liable for the harm (in many cases it may not be), but that a platform can avoid complying with injunctive relief—often little more than having the basic decency to remove material that is shown to be harmful. As Sen. Whitehouse made clear, the court is the venue for determining liability and remedies, and several of his colleagues noted that it is simply absurd that one multi-billion-dollar industry is automatically excused from those proceedings.

Thus, as a foundational matter, it seems essential that Section 230 be substantially revised to ensure that people, like the families represented at the hearing, can pursue legal action without having the court automatically dismiss the claim. Of course, sound reform of 230 must reject the rhetoric of some lawmakers, including Sen. Cruz, who have muddied the waters with unfounded and unhelpful allegations of platform political bias. If nothing else, alleged viewpoint bias is not a subject of Section 230, and if lawmakers really want to help the kids, they must remain focused on ensuring that a family can have its day in court.

So, as stated, we've been here before. Wednesday's hearing provided a pretty good highlights reel, but let's see if, this year, it can finally lead to tangible solutions.


[1] The Preventing Deepfakes of Intimate Images Act and the No AI FRAUD Act.

On Fixing Social Media: Why Fear Unintended Consequences?

In an excellent post on the blog Librarian Shipwreck, the author reminds us to take a more expansive view of the so-called Facebook problem. The article hits most of the big nails squarely on the head (for instance, that we cannot trust Facebook to fix Facebook), but perhaps its most critical observation is about a difficult conversation we are not having at all.

As mentioned in my recent post, it is hard to imagine that Congress will not soon adopt legislation prohibiting social platform practices that are believed to directly aggravate health hazards among teens and tweens. That's where the "Big Tobacco" analogy holds up, but also (I suspect) where it ends. Mitigating specific dangers, like curbing the algorithms that foster platform addiction or removing disinformation and conspiracy peddlers, is all necessary, but it is also low-hanging fruit on the edges of a dense, untamed grove into which few of us wish to venture. As Librarian puts it:

Too often it seems that we are singling out companies like Facebook for invective so that we don’t actually have to talk about our society’s reliance on computers and the Internet. Thus, Facebook gets held up as the scoundrel that is responsible for quashing the utopian potential of computers and the Internet—a potential that will be surely redeemed by the arrival of Web3. Yet the fantasies about Web3 sound very similar to the fantasies that originally surrounded Web 2.0 which in turn sounded a heck of a lot like the fantasies that had surrounded the original Web which in turn sounded a heck of a lot like the fantasies that were first spun out about personal computers which in turn sounded a heck of a lot like the fantasies that were first spun out about computers. The danger here is that we are vilifying Facebook (villain though it surely is), to save us from having to think more deeply about computers and the Internet.

If I may be so rude as to compress that:  Librarian makes the unimpeachable argument that Bullshit 3.0 is just a faster version of Bullshit 2.0. The bullshit in this case is the belief that the internet is, or ever was, something transcendent. Because at the same time that Barlow was scribbling his hubristic "Declaration of the Independence of Cyberspace," money—a lot of money—was changing hands on the promise that somehow, someday, networked computers would be a more efficient way to sell soap. Nineties-era conversations about targeted advertising asked whether consumers would tolerate the privacy invasions necessary to achieve those aims, and eventually, Google and Facebook proved that our transition into that brave new world could be almost frictionless.

The dream of an internet that operated ethically, yet beyond the laws of "weary nations"—a dream the utopians lament as having died sometime in the last several years—was never alive in the first place. That supposed Goldilocks period, often referred to as the Wild West, was not a brief glimpse of the web as it was meant to be, but an interlude of disarray and experimentation on the backend, while a whole generation played the role of lab mice on the frontend. And, sure, it seemed idyllic; the digital natives were all children.

It turned out that we were not very resistant to the internet crawling into our private lives while teaching the machines to “know us better than we know ourselves,” as former Google chairman Eric Schmidt liked to say. And arguably, we crossed that threshold so easily for two main reasons:  1) because the features and conveniences these companies provided were initially cool and then indispensable; and 2) because we did not believe, or even imagine, how hazardous the bargain would be.

It is an understatement to say that we are currently brimming with proposals to “fix” social media—especially Facebook—and that overstuffed suggestion box naturally provokes the industry lobbyists and “digital rights” groups to rally in defense of the status quo and to warn against “unintended consequences” that could result from one mandate or another. But this fearful narrative is predicated on the assumption that the status quo is acceptable, if not very good. On the contrary, social media’s CV comprises a dark litany of unintended consequences with virtually no oversight of the people running the experiment. And the items in bold on that list are nothing short of disastrous.

Who really anticipated, when we started connecting with old friends and sharing snapshots, that we were feeding data into a machine that could, and would, be used to foment a genocide in Asia or animate enough conspiracy theories to rattle the foundations of liberal democracy worldwide? Every problem caused by social media is an unintended consequence. At least, it had better be. As whistleblower Frances Haugen opined in her testimony on Capitol Hill, "I don't think at any point Facebook set out to make a destructive platform."

That’s probably true. So, if the toxic results of social media are unintended, let’s not be too timid about whatever new unintended consequences may result from efforts to address those problems. To Librarian’s point, we should instead step back, rewrite the premise, and have that “deeper conversation about computers and the internet” by rejecting the belabored lexicon of superlatives used to describe cyber life as something approaching the spiritual. It isn’t. It never was. And as a putative catalyst to “make democracy work better,” it’s a total bust. But to be fair, it is a pretty sophisticated way to sell soap.



Facebook and Big Tech’s “Big Tobacco” Moment

In response to the breaking news on Sunday that Facebook’s latest, and perhaps most consequential, leaker identified herself as former employee Frances Haugen, the questions are being asked once again:  How much do we blame Facebook, and for what shall it be blamed? For instance, in response to the allegation that the social platform played a role in the insurrection of January 6—both as an amplifier of disinformation and as a communications hub for some of the premeditated actions of that day—spokesperson Nick Clegg responded that it is “ludicrous” to blame Facebook. “The responsibility for the violence of Jan. 6 lies squarely with the people who inflicted the violence and those who encouraged them, including President Trump,” Clegg told CNN.

Clegg is dutifully responding to a straw man by reframing the accusation, as if Facebook were being accused of direct responsibility for the assault on the Capitol. In reality, of course, the company is accused, most recently by Haugen, of either ignoring or obfuscating evidence that its operational decisions are conducive to terrible outcomes for both individuals and whole societies. The company has allegedly engaged in willful blindness with respect to its role in aggravating different forms of suicidal tendencies—among teenagers being negatively affected by Instagram, and among adults negatively influenced by disinformation to the point of assaulting the constitutional order of the United States.

Haugen, who testified with tremendous poise on Tuesday before the Senate Commerce Committee, is a data scientist initially hired by Facebook as a member of its "civic integrity" team. She leaked tens of thousands of documents and stepped into the light, at considerable personal risk, with the intent to prove to legislators, federal agencies, and the public that when Facebook leadership is presented with evidence that its operational decisions cause harm, it will consistently choose profit over the mitigation of that harm. "Haugen has also detailed how she says Facebook quickly disbanded its civic integrity team—responsible for protecting the democratic process and tackling misinformation—after the 2020 U.S. election. Shortly afterward came the Jan. 6 insurrection at the U.S. Capitol, in which organizers used Facebook to help plan," writes Jaclyn Diaz for NPR.

That Facebook will behave like many other corporations (i.e., protect its bottom line) is not a revelation. At least, it shouldn't be. Neither should there be any doubt that we are still wandering in uncharted territory when a private company needs a division to be "responsible for protecting the democratic process and tackling misinformation." Haugen's testimony that Facebook maintained such a unit for the shortest time possible is damning, but the fact that we have collectively and voluntarily ceded so much power to a social media company is the bigger problem. And many of the consequences of that transformation cannot wholly be fixed by "fixing" Facebook.

The committee members of both parties who questioned Haugen sounded unanimous in their intent to take legislative action soon, especially in response to evidence that Facebook is aggravating health risks to teens and tweens. Senators Blumenthal and Markey have already introduced the KIDS Act, which would proscribe various "interface elements" that manipulate a minor's experience on a given platform. In that sense of "fixing," the Big Tobacco metaphor applies because we can associate Facebook's lack of transparency with identifiable health risks like eating disorders and depression. Meanwhile, in terms of our collective mental health as a society, I am not sure why the same prohibitions should not exist for adult users, who also fail to recognize that social media is a narcotic—one that can produce good feelings even from very bad conduct.

Just yesterday, I saw that a woman whose work I admire on constitutional issues was harassed on Facebook by a stranger who did not engage her to debate the Second Amendment but merely to unpack his favorite sexist pejorative and tell her to kill herself. If the incident were reported, Facebook would be unlikely to cancel the guy's account, especially when there are tens of millions of customers just like him. So, not only has the great "information revolution" failed to produce a more nuanced—let alone historically informed—discussion about 2A et al., but Facebook exacerbates the worst behaviors by providing users with the little dopamine hit that comes from self-righteous, remote-control harassment.

It was not very long ago that examples like this would elicit a big eyeroll from the bro-culture of what we used to call netizens—not only because the conduct was presumed to be anomalous, but because cyberspace was presumed to be innocuous. Just words rather than sticks and stones. That was false. It was clear to many observers that the increase in anti-social and indecent conduct online was spilling over into the so-called real world. The boundary between clicks and sticks was steadily eroding and, as became clear on January 6, that boundary no longer exists at all for many of us.

Every time Zuckerberg or someone representing Google or Twitter or the EFF et al. has asserted free speech as the rationale for an unregulated, barely moderated internet, they have been making the argument, however unwittingly, that anarchy works. Let everything flow, and people will make rational choices, and the good will outweigh the bad. That was the prevailing argument before 2016 and the so-called techlash, and it is an argument that is still being revived despite all evidence that, as a social experiment, it has been a disaster.

Ms. Haugen's testimony is compelling and will likely be catalytic to long-overdue change at Facebook and elsewhere in the industry. The most significant discussion to emerge this week may be the proposals, including one from former FCC Chairman Tom Wheeler, to create a new federal agency charged with oversight of major internet platforms. Whatever comes next, the era of laissez-faire appears to be over for Big Tech, and that is at least a step in the right direction.