Podcast: David Golumbia Talking Facebook & Fascism

In this episode, I speak with David Golumbia, author and associate professor of digital studies, American literature, literary theory, philosophy, and linguistics at Virginia Commonwealth University. I asked Golumbia to join me after reading his blog post, published on October 20th, in which he asserts that Facebook is not just dropping the ball when it comes to curbing hate on its platform but that, in his words, "Facebook Loves Fascism."

Facebook’s “Screw it Let’s Talk Astrology” ad, part of its Groups campaign.

Episode Contents

  • 00:00:55 – David Golumbia background.
  • 00:03:24 – Facebook loves fascism.
  • 00:08:24 – Defining “right” vs. proto-fascism.
  • 00:11:36 – Paths to authoritarianism.
  • 00:13:50 – Mysticism and fascism.
  • 00:18:56 – Facebook’s astrology TV spot.
  • 00:23:48 – More subtle forces driving division.
  • 00:32:02 – Facebook is too good for democracy.
  • 00:36:32 – Better/more information is not a solution.
  • 00:45:11 – “Educate yourself.”
  • 00:48:50 – Considering outcomes.
  • 00:54:05 – Rapidly changing narratives.
  • 00:56:25 – Latent extremism let out of the box.
  • 01:00:35 – What do Facebook et al really want?
  • 01:07:06 – The Big Tobacco analogy.

On Fixing Social Media: Why Fear Unintended Consequences?

In an excellent post on the blog Librarian Shipwreck, the author reminds us to take a more expansive view of the so-called Facebook problem. The article hits most of the big nails on the head (for instance, that we cannot trust Facebook to fix Facebook), but perhaps its most critical observation concerns a difficult conversation we are not having at all.

As mentioned in my recent post, it is hard to imagine that Congress will not soon adopt legislation prohibiting social platform practices which are believed to directly aggravate health hazards among teens and tweens. That’s where the “Big Tobacco” analogy holds up, but also (I suspect) where it ends. Mitigating specific dangers, like algorithms that foster platform addiction or removing disinformation and conspiracy peddlers, is all necessary, but also low-hanging fruit on the edges of a dense, untamed grove into which few of us wish to venture. As Librarian puts it:

Too often it seems that we are singling out companies like Facebook for invective so that we don’t actually have to talk about our society’s reliance on computers and the Internet. Thus, Facebook gets held up as the scoundrel that is responsible for quashing the utopian potential of computers and the Internet—a potential that will be surely redeemed by the arrival of Web3. Yet the fantasies about Web3 sound very similar to the fantasies that originally surrounded Web 2.0 which in turn sounded a heck of a lot like the fantasies that had surrounded the original Web which in turn sounded a heck of a lot like the fantasies that were first spun out about personal computers which in turn sounded a heck of a lot like the fantasies that were first spun out about computers. The danger here is that we are vilifying Facebook (villain though it surely is), to save us from having to think more deeply about computers and the Internet.

If I may be so rude as to compress that: Librarian makes the unimpeachable argument that Bullshit 3.0 is just a faster version of Bullshit 2.0. The bullshit in this case is the belief that the internet is, or ever was, something transcendent. Because at the same time that Barlow was scribbling his hubristic “A Declaration of the Independence of Cyberspace,” money—a lot of money—was changing hands on the promise that somehow, someday, networked computers would be a more efficient way to sell soap. In the ’90s, conversations about targeted advertising asked whether consumers would tolerate the privacy invasions necessary to achieve those aims, and eventually, Google and Facebook proved that our transition into that brave new world could be almost frictionless.

The dream of an internet that operated ethically, yet beyond the laws of “weary nations”—a dream the utopians lament as having died sometime in the last several years—was never alive in the first place. That supposed Goldilocks period, often referred to as the Wild West, was not a brief glimpse of the web as it was meant to be, but an interlude of disarray and experimentation on the backend, while a whole generation played the role of lab mice on the frontend. And, sure, it seemed idyllic; the digital natives were all children.

It turned out that we were not very resistant to the internet crawling into our private lives while teaching the machines to “know us better than we know ourselves,” as former Google chairman Eric Schmidt liked to say. And arguably, we crossed that threshold so easily for two main reasons: 1) because the features and conveniences these companies provided were initially cool and then indispensable; and 2) because we did not believe, or even imagine, how hazardous the bargain would be.

It is an understatement to say that we are currently brimming with proposals to “fix” social media—especially Facebook—and that overstuffed suggestion box naturally provokes the industry lobbyists and “digital rights” groups to rally in defense of the status quo and to warn against “unintended consequences” that could result from one mandate or another. But this fearful narrative is predicated on the assumption that the status quo is acceptable, if not very good. On the contrary, social media’s CV comprises a dark litany of unintended consequences with virtually no oversight of the people running the experiment. And the items in bold on that list are nothing short of disastrous.

Who really anticipated that when we started connecting with old friends and sharing snapshots, we were feeding data into a machine that could, and would, be used to foment a genocide in Asia or animate enough conspiracy theory to rattle the foundations of liberal democracy worldwide? Every problem caused by social media is an unintended consequence. At least it better be. As whistleblower Frances Haugen opined in her testimony on Capitol Hill, “I don’t think at any point Facebook set out to make a destructive platform.”

That’s probably true. So, if the toxic results of social media are unintended, let’s not be too timid about whatever new unintended consequences may result from efforts to address those problems. To Librarian’s point, we should instead step back, rewrite the premise, and have that “deeper conversation about computers and the internet” by rejecting the belabored lexicon of superlatives used to describe cyber life as something approaching the spiritual. It isn’t. It never was. And as a putative catalyst to “make democracy work better,” it’s a total bust. But to be fair, it is a pretty sophisticated way to sell soap.


Photo by: evgenyyjamart

Facebook and Big Tech’s “Big Tobacco” Moment

In response to the breaking news on Sunday that Facebook’s latest, and perhaps most consequential, leaker identified herself as former employee Frances Haugen, the questions are being asked once again: How much do we blame Facebook, and for what shall it be blamed? For instance, in response to the allegation that the social platform played a role in the insurrection of January 6—both as an amplifier of disinformation and as a communications hub for some of the premeditated actions of that day—spokesperson Nick Clegg responded that it is “ludicrous” to blame Facebook. “The responsibility for the violence of Jan. 6 lies squarely with the people who inflicted the violence and those who encouraged them, including President Trump,” Clegg told CNN.

Clegg is dutifully responding to a straw man by reframing the accusation, as if Facebook were being accused of direct responsibility for the assault on the Capitol. In reality, of course, the company is accused, most recently by Haugen, of either ignoring or obfuscating evidence that its operational decisions are conducive to terrible outcomes for both individuals and whole societies. The company has allegedly engaged in willful blindness with respect to its role in aggravating different forms of suicidal tendencies—among teenagers being negatively affected by Instagram, and among adults negatively influenced by disinformation to the point of assaulting the constitutional order of the United States.

Haugen, who testified with tremendous poise on Tuesday before the Senate Commerce Committee, is a data scientist initially hired by Facebook as a member of its civic integrity team. She leaked tens of thousands of documents and stepped into the light, at considerable personal risk, with the intent to prove to legislators, federal agencies, and the public that when Facebook leadership is presented with evidence that its operational decisions cause harm, it will consistently choose profit over the mitigation of that harm. “Haugen has also detailed how she says Facebook quickly disbanded its civics integrity team—responsible for protecting the democratic process and tackling misinformation—after the 2020 U.S. election. Shortly afterward, came the Jan. 6 insurrection at the U.S. Capitol, in which organizers used Facebook to help plan,” writes Jaclyn Diaz for NPR.

That Facebook will behave like many other corporations (i.e. protect its bottom line) is not a revelation. At least, it shouldn’t be. Neither should there be any doubt that we are still wandering uncharted territory when a private company needs a division to be “responsible for protecting the democratic process and tackling misinformation.” Haugen’s testimony that Facebook maintained such a unit for the shortest time possible is damning, but the fact that we have collectively and voluntarily ceded so much power to a social media company is the bigger problem. And many of the consequences of that transformation cannot wholly be fixed by “fixing” Facebook.

The bipartisan committee members who questioned Haugen sounded unanimous in their intent to take legislative action soon, especially in response to evidence that Facebook is aggravating health risks to teens and tweens. Senators Blumenthal and Markey have already introduced the KIDS Act, which would proscribe the use of various “interface elements” that manipulate a minor’s experience on a given platform. In that sense of “fixing,” the Big Tobacco metaphor applies because we can associate Facebook’s lack of transparency with identifiable health risks like eating disorders and depression. Meanwhile, in terms of our collective mental health as a society, I am not sure why the same prohibitions should not exist for adult users, who also do not recognize that social media is a narcotic—one that can produce good feelings even from very bad conduct.

Just yesterday, I saw that a woman, whose work I admire on constitutional issues, was harassed on Facebook by a stranger who did not engage her to debate the Second Amendment but merely to unpack his favorite sexist pejorative and tell her to kill herself. If the incident were reported, Facebook is unlikely to cancel the guy’s account, especially when there are tens of millions of customers just like him. So, not only has the great “information revolution” failed to produce a more nuanced—let alone historically informed—discussion about 2A et al, but Facebook exacerbates the worst behaviors by providing users with the little dopamine hit that comes from self-righteous, remote-control harassment.

It was not very long ago that examples like this would elicit a big eyeroll from the bro-culture of what we used to call netizens—not only because the conduct was presumed to be anomalous, but because cyberspace was presumed to be innocuous. Just words rather than sticks and stones. That was false. It was clear to many observers that the increase in anti-social and indecent conduct online was spilling over into the so-called real world. The boundary between clicks and sticks was steadily being eroded and, as it became clear on January 6, that boundary no longer exists at all for many of us.

Every time Zuckerberg or someone representing Google or Twitter or the EFF et al has asserted free speech as the rationale for an unregulated, barely moderated internet, they have been making the argument, however unwittingly, that anarchy works. Let everything flow, and people will make rational choices, and the good will outweigh the bad. That was the prevailing argument before 2016 and the so-called techlash, and it is an argument which is still being revived despite all evidence that, as a social experiment, it has been a disaster.

Ms. Haugen’s testimony is compelling and will likely catalyze long-overdue change at Facebook and elsewhere in the industry. The most significant discussion to emerge this week may be the proposals, including one by former FCC chairman Tom Wheeler, to create a new federal agency charged with oversight of major internet platforms. Whatever comes next, the era of laissez-faire appears to be over for Big Tech, and that is at least a step in the right direction.