Podcast: David Golumbia Talking Facebook & Fascism

In this episode, I speak with David Golumbia, author and associate professor of digital studies, American literature, literary theory, philosophy, and linguistics at Virginia Commonwealth University. I asked Golumbia to join me after reading his blog post published on October 20th in which he asserts that Facebook is not just dropping the ball when it comes to curbing hate on its platform but that, in his words, Facebook Loves Fascism.

Facebook’s “Screw it Let’s Talk Astrology” ad, part of its Groups campaign.

Episode Contents

  • 00:00:55 – David Golumbia background.
  • 00:03:24 – Facebook loves fascism.
  • 00:08:24 – Defining “right” vs. proto-fascism.
  • 00:11:36 – Paths to authoritarianism.
  • 00:13:50 – Mysticism and fascism.
  • 00:18:56 – Facebook’s astrology TV spot.
  • 00:23:48 – More subtle forces driving division.
  • 00:32:02 – Facebook is too good for democracy.
  • 00:36:32 – Better/more information is not a solution.
  • 00:45:11 – “Educate yourself.”
  • 00:48:50 – Considering outcomes.
  • 00:54:05 – Rapidly changing narratives.
  • 00:56:25 – Latent extremism let out of the box.
  • 01:00:35 – What do Facebook et al really want?
  • 01:07:06 – The Big Tobacco analogy.

Did Social Platforms Really Find a Moral Compass?

In 2012, I wrote a post called In Defense of (a little) Elitism, which was naturally criticized by some in the tech-utopian world for being, y’know, elitist.

The apparent good in this digital-age model — that it is populist — is also its own weakness when we look at results in various media.  Most obviously, it doesn’t take more than a glance at the effects of extreme populism on journalism to realize that we now have news tailored to every taste — conservative, liberal, alternative, user-generated, subversive, and just plain wacko. No one can argue that the consumer isn’t “getting what he wants, and for free,” but the democratization of journalism has broadened the concept to include literally anyone with a computer. 

At that time, the likes of Alex Jones, Richard Spencer, terrorist groups, channers, The Daily Stormer, et al were well into metastasizing narratives of hatred and conspiracy, but few in the mainstream were talking about that incipient disaster, failing to truly grasp how digital platforms were extending, rather than shrinking, the influence of these toxic forces. If anyone questioned the reasonableness of giving those voices free platforms, Big Tech and its network of well-funded cheerleaders insisted that banning, or even muffling, these incubators of hate would do more harm to “free speech” than whatever harm was being done by leaving them alone.  

That was before 2016, of course, when the tin-foil hats, racists, and misogynists were not merely invited into the mainstream by the Party of Trump but put front and center. Now, Silicon Valley had a problem. Battle lines were being drawn for the existential survival of the Republic (without which, by the way, there is no speech right). The longstanding official policy of “platform neutrality” would soon prove untenable. Nevertheless, until very recently, if a platform was criticized for hosting toxic content, the boilerplate answer was usually something like the following:

While we do not condone [vile content], we are reluctant to play the role of censors or arbiters of truth… [filler bullshit]… protecting free speech…[more filler bullshit]…and we believe democracy thrives from a robust exchange of ideas…[concluding bullshit]. (See Mark Zuckerberg speech at Georgetown University, October 17, 2019.)

Last week, that tone shifted, not altruistically mind you, but because the standard rhetoric was becoming a financial liability. ADWEEK announced that Reddit would be purging several hate-speech-laden subreddits, including fan pages named for Donald Trump. While this is welcome news to many, I would remind readers that when Steve Huffman, a co-founder of the platform, assumed the role of CEO in 2015, he announced plans then to clean up Reddit’s act. So, I assume it was in response to the apparent sluggishness of said cleanup that he stated, “I have to admit I have struggled with balancing my values as an American and around free speech and free expression with my values and the company’s values around common human decency.”

Call me a cynic, but Huffman’s equivocation can only be read one way: toxic content is, at last, bad for business. It was pressure from some of the largest advertisers, either threatening to cancel or actually cancelling ad buys, that suddenly made it much more difficult for the big platforms to sweep all the Nazis and other assorted haters under the rug they liked to call the “exchange of ideas.” Not that that claim was ever anything but gibberish. If hate speech and incitements to violence are “ideas,” they were vetted long before we had the internet, and there is no principle whereby a social platform owes the KKK fresh digital soil in which to grow new roots.

Concurrent with Reddit dropping 2,000 hate-mongering subreddits, CNN also reported that YouTube finally jettisoned the channels of white supremacists Richard Spencer and David Duke, one year after promising to do so. The news channel states …

“Last year, CNN Business found that one Nazi channel YouTube had deleted before was back up and making no attempt to hide itself or its connection to its previously banned accounts. The channel was first taken down in April 2018 in wake of a CNN investigation that found ads from over 300 companies and organizations running on YouTube channels promoting white nationalists, Nazis, pedophilia, conspiracy theories and North Korean propaganda.”

And finally, even the beleaguered Zuckerberg, whose relationship status with Donald Trump has been stuck on “It’s complicated,” caved (at least somewhat) to pressure from both major advertisers and his own employees. The Washington Post reported …

On Friday, Zuckerberg told employees in a live-streamed town hall that he was changing the company’s policy to label problematic newsworthy content that violated the company’s policies as Twitter does, a major concession amid the rising tide of criticism. He also said in the most explicit language ever that the company would remove posts by politicians that incite violence and suppress voting. Still, civil rights leaders said his assertions didn’t go far enough.

Facebook, Reddit, YouTube, and other platforms should have stopped providing aid and comfort to hate-mongers a long time ago, just because it was the right thing to do. But in the absence of actual principles, market pressure will suffice. In a broad sense, it is a hopeful sign that major corporations, despite some stumbling press releases, have recognized that there is no financial future when their brands are associated with the lingo of hatred and division. Especially because there is no sustainable nation in that agenda either.

This does not mean, of course, that the major internet platform managers have learned much of anything about the free speech folly they have perpetuated for the last two decades. Silicon Valley may appear to have located its moral compass last week (because it happened to be sitting on top of its wallet), but the rhetoric they maintain suggests that they still do not understand how their platforms have profoundly blurred the lines between speech and conduct. Technology reporter Julia Carrie Wong, in an article for The Guardian published July 2, writes this about Facebook and Charlottesville:

“[Heather] Heyer’s killer has been convicted and sent to prison, but how does Facebook evaluate its role in the event? Does the calculation change at all when you consider just a few weeks before Charlottesville, I sent Facebook a spreadsheet with links to 175 neo-Nazi, white nationalist and neo-Confederate hate groups that were using its platform to recruit and organize? And that Facebook had declined to take any action against the vast majority of them until after Heyer’s murder, when it belatedly cleaned house?”

For her efforts as a journalist (remember journalists?), Wong was of course targeted on the same social platforms, weaponized by the same people she had exposed to Facebook. As she very courageously describes …

“The neo-Nazis and white nationalists I had written about published articles with my photograph that described me as a ‘racial molotov cocktail’ with ‘the cunning of the Jew and the meticulous mathematical mind of a Chink’. They encouraged their followers to go after me too, and I received a steady stream of racist vitriol on Twitter, on Facebook and by email. I tried to ignore it as much as I could. I tried not to ruin Thanksgiving. The worst were the messages that referenced my family, or imagined my rape.”

For as long as I have been writing about these issues (since 2011), descriptions of harassment like Wong’s have either elicited an eyerolling mansplanation as to why we should not take these things so seriously, or an insincere empathy that boils down to “That’s the price we pay for free speech.” Bullshit.

As long as the major platforms are being financially pressured to shed toxic material from their sites, they should take the opportunity to drop all the “conflicting values” rhetoric while they’re at it. Nobody asked these constitutional dilettantes to be stewards of the speech right. It was arrogant of them to presume to play the role of public guardians of civil liberties, especially while providing resources to opponents of those same liberties. They run advertising platforms. And they have no reason to equivocate about, or apologize for, taking out the garbage.

Turns Out Money Talks in Silicon Valley

For years, producers of creative content—from individual artists to mass-media corporations—have tried to engage with internet companies (mainly Google) in an effort to stop the facilitation of rampant, unlicensed access to their material. Whether the complaint is millions of unlicensed works on YouTube, or search results leading users to pirate sites, copyright owners are all too familiar with the dual response We can’t and We shouldn’t. This is shorthand for the internet industry’s standard claim that they can’t effectively police their platforms; and even if they could, they shouldn’t because freedom.

But as reported in January 2017, advertising giant Procter & Gamble issued a warning on behalf of global advertisers who spend a combined $70+ billion on digital, announcing that they were no longer willing to accept can’t and shouldn’t as answers to their key complaints. These were a lack of transparency (i.e., independent audit) in measuring the quality and effectiveness of digital advertising, and an inability to prevent brands from supporting intolerable content. So, terrorist recruiting videos on YouTube brought to you by Colgate just aren’t working for the brand managers anymore. Yet, strangely, the internet companies and their bevy of think-tankers have not told these advertisers to stop hating the future and change their business models. (Though I’d like to watch if they did.)

Fast-forward a year and the Wall Street Journal this week reports that Unilever is threatening to substantially reduce its ad buy on Facebook and YouTube if the companies do not more effectively weed out fake news and other divisive content like racism, sexism, and violence. What’s striking about this article is its concluding follow-up report that P&G’s brand officer Mark Pritchard (it was he who, in 2017, charged the internet platforms to clean up their act) notes that “progress has been impressive” and that ninety percent of his demands have been met.

It will come as no surprise to the creative community that, when revenue is at stake, the major internet companies suddenly discover that it is both technically possible and ideologically conceivable to police their platforms a bit more aggressively than they have to date. Artists and creators should follow these developments because the political, social, and financial pressure being exerted on the platform providers can make the companies more vulnerable to potential liability for infringing creative works, and this might make them a bit more cooperative about solving the “unsolvable” issue of mass infringement. By demonstrating a capacity for control (because now they have to), the platforms underscore what should be obvious to most people: that the tradition of shrugging off the interests of rights holders has been a business decision. Period.

No doubt, many “digital rights” activists will prophesy the end of days for democracy in response to this trend toward platform responsibility; but they can take heart knowing that democracy hasn’t exactly thrived under the principles applied thus far. The assumption that all online interactions are protected speech, and that more speech is the only antidote to harmful speech, is still proving to be a destructive fallacy every second of every day. And it turns out the advertisers, whose money pays for these platforms of democracy, don’t accept that the answer to hate-speech and fake news is to just let it ride until our better angels eventually prevail. It turns out this is both bad for society and bad for business. It turns out money talks in Silicon Valley. And if that’s the only way to get internet companies to behave like citizens instead of bullies, then whatever works.