On Fixing Social Media: Why Fear Unintended Consequences?

In an excellent post on the blog Librarian Shipwreck, the author reminds us to take a more expansive view of the so-called Facebook problem. The article lands direct hits on most of the big nails (for instance, that we cannot trust Facebook to fix Facebook), but perhaps its most critical observation is the one about a difficult conversation we are not having at all.

As mentioned in my recent post, it is hard to imagine that Congress will not soon adopt legislation prohibiting social platform practices which are believed to directly aggravate health hazards among teens and tweens. That’s where the “Big Tobacco” analogy holds up, but also (I suspect) where it ends. Mitigating specific dangers, like curbing the algorithms that foster platform addiction or removing disinformation and conspiracy peddlers, is necessary, but it is also low-hanging fruit on the edges of a dense, untamed grove into which few of us wish to venture. As Librarian puts it:

Too often it seems that we are singling out companies like Facebook for invective so that we don’t actually have to talk about our society’s reliance on computers and the Internet. Thus, Facebook gets held up as the scoundrel that is responsible for quashing the utopian potential of computers and the Internet—a potential that will be surely redeemed by the arrival of Web3. Yet the fantasies about Web3 sound very similar to the fantasies that originally surrounded Web 2.0 which in turn sounded a heck of a lot like the fantasies that had surrounded the original Web which in turn sounded a heck of a lot like the fantasies that were first spun out about personal computers which in turn sounded a heck of a lot like the fantasies that were first spun out about computers. The danger here is that we are vilifying Facebook (villain though it surely is), to save us from having to think more deeply about computers and the Internet.

If I may be so rude as to compress that: Librarian makes the unimpeachable argument that Bullshit 3.0 is just a faster version of Bullshit 2.0. The bullshit in this case is the belief that the internet is, or ever was, something transcendent. Because at the same time that Barlow was scribbling the hubristic Declaration of the Independence of Cyberspace, money—a lot of money—was changing hands on the promise that somehow, someday, networked computers would be a more efficient way to sell soap. Nineties-era conversations about targeted advertising asked whether consumers would tolerate the privacy invasions necessary to achieve those aims, and eventually, Google and Facebook proved that our transition into that brave new world could be almost frictionless.

The dream of an internet that operated ethically, yet beyond the laws of “weary nations”—a dream the utopians lament as having died sometime in the last several years—was never alive in the first place. That supposed goldilocks period, often referred to as the wild west, was not a brief glimpse of the web as it was meant to be, but an interlude of disarray and experimentation on the backend, while a whole generation played the role of lab mice on the frontend. And, sure, it seemed idyllic; the digital natives were all children.

It turned out that we were not very resistant to the internet crawling into our private lives while teaching the machines to “know us better than we know ourselves,” as former Google chairman Eric Schmidt liked to say. And arguably, we crossed that threshold so easily for two main reasons:  1) because the features and conveniences these companies provided were initially cool and then indispensable; and 2) because we did not believe, or even imagine, how hazardous the bargain would be.

It is an understatement to say that we are currently brimming with proposals to “fix” social media—especially Facebook—and that overstuffed suggestion box naturally provokes the industry lobbyists and “digital rights” groups to rally in defense of the status quo and to warn against “unintended consequences” that could result from one mandate or another. But this fearful narrative is predicated on the assumption that the status quo is acceptable, if not very good. On the contrary, social media’s CV comprises a dark litany of unintended consequences with virtually no oversight of the people running the experiment. And the items in bold on that list are nothing short of disastrous.

Who really anticipated, when we started connecting with old friends and sharing snapshots, that we were feeding data into a machine that could, and would, be used to foment a genocide in Asia or animate enough conspiracy theory to rattle the foundations of liberal democracy worldwide? Every problem caused by social media is an unintended consequence. At least it better be. As whistleblower Frances Haugen opined in her testimony on Capitol Hill, “I don’t think at any point Facebook set out to make a destructive platform.”

That’s probably true. So, if the toxic results of social media are unintended, let’s not be too timid about whatever new unintended consequences may result from efforts to address those problems. To Librarian’s point, we should instead step back, rewrite the premise, and have that “deeper conversation about computers and the internet” by rejecting the belabored lexicon of superlatives used to describe cyber life as something approaching the spiritual. It isn’t. It never was. And as a putative catalyst to “make democracy work better,” it’s a total bust. But to be fair, it is a pretty sophisticated way to sell soap.



On the Post Hoc Deplatforming of Trump

I guess this is the digital-age equivalent of defenestration:  rather than an authoritarian getting thrown out a window, he gets thrown off Twitter. And now that the major platforms have closed the proverbial barn door while the cows run amok on Pennsylvania Avenue, calling the decision to deplatform Trump too little too late is itself saying far too little, and way too late.

On December 31, 2016, I published a post asking whether Americans might begin to doubt the extravagant premise that the internet as we know it is a gift to democracy. To an extent, the answer to that question was yes. Over the past four years, we did see at least a new willingness to criticize Silicon Valley; and at the same time, the industry’s tactic of thwarting every policy initiative with the over-broad warning that “the internet would break” proved as futile as it was fallacious.

That it took a violent, seditious* assault on the Capitol to slap at least some of Trump’s enablers into reality is dismaying to say the least, and many of those enablers should not—and very possibly will not—be forgiven. But we should also not be quick to absolve the corporate enablers at Twitter, Facebook, et al, or their well-financed network of shills who so earnestly promoted the notions that all content online is tantamount to protected speech, that the free exchange of all views is inherently a net positive, and that the good will outweigh the bad as long as we remove all barriers to informative and cultural material.

Long before Trump announced his candidacy, the political landscape had been well-softened by the illusion that social platforms provide better transparency, and Trump’s incipient cult was not unique in believing that “new media” were providing access to a truth that the gatekeepers of the “old media” were hiding. At the same time, social platforms are uniquely designed to feed the egotist in us that craves the dopamine hit generally referred to as confirmation bias.

The tech-utopians truly believed (and apparently still do) that a more enlightened, more civilized world is the inexorable outcome of more access to more information. When some of us countered that internet platforms seem to be highly effective at spreading disinformation and other toxic content, we were called luddites who hate progress and technology. We were told that we wanted to stop a new enlightenment in which “the whole store of human knowledge would be at everyone’s fingertips.”

It should not have been so easy for a president, or any individual, to insinuate that the entire intel community is a corrupt “deep state” or that election officials are liars or that over 60 courts, including the Supreme Court, willfully ignored fraud in the 2020 election. Those conclusions insist that not one of the tens of thousands of oath-taking public servants implicated can be trusted over the word of one man or the conspiratorial ravings of some profiteering opportunists on the internet.

We must acknowledge that Facebook, Twitter, Google, Amazon, Reddit et al have been the category killers in the business of that profiteering opportunism. If one feels suddenly inclined to straighten out a Trump defender on the First Amendment, remember that it was these corporations, with the assistance of the EFF, Techdirt, Public Knowledge, the ACLU and others, all asserting for many years that almost everything posted online should be treated with the deference of protected speech. Whether the militance on this matter stems from ideology or simple greed, it is a premise that must be rejected as false for our own good. David Golumbia, associate professor of digital studies, wrote recently for the Boston Globe:

As a small group of scholars and activists are arguing with increasing force,…it is manifestly possible to protect free speech — and thus enhance the political and democratic values free speech is meant to promote — while suppressing, or at least not actively encouraging, the efforts of those who want to turn democracies against themselves.

And if we grasp that protections on speech really exist to enhance democratic participation, then it’s easier to see through the claims that digital products such as Bitcoin or Apple’s computer code count as speech. In other words, we’d see that a lot of cries for “freedom of speech” in the Internet era are really just demands for freedom from regulations that wouldn’t be challenged in the offline world.

So, by all means, Senators Hawley and Cruz, and any elected official who lent credence to the stolen election story, should be held accountable for feeding a fire that exploded on January 6, and is probably not done exploding. But Big Tech executives and the “digital rights” groups have much to answer for as well. To a very great extent, Donald Trump merely exploited the systemic and psychological vulnerabilities that the major platforms had been exacerbating and monetizing for years.

The leaders of the internet industry have consistently spoken to the public in the ebullient language of new horizons, where fresh ideas and opportunities converge. But that was only part of the picture. While raking in billions, these companies willfully ignored or scornfully dismissed the fact that their systems and business models made few distinctions among information, misinformation, and disinformation. Instead, they papered over those distinctions by citing the First Amendment, to which they owed no duty whatsoever. So, yes, Trump and his supporters are dead wrong to call the sudden deplatforming an infringement of the speech right, but it was the internet companies themselves who fed them that lie in the first place.


*CORRECTION: This was originally published as “treasonous,” which is the wrong word.

Schrödinger’s Reality

In January of 2019, I wrote a post asking if, thanks to the internet, we had achieved a state of maximum inescapable bullshit. But whether or not we were there almost two years ago, we are certainly there now. It took less than a decade for the internet—and, it turns out, mostly Facebook—to destroy American democracy. I know that’s fatalistic, but even if the psychotic would-be monarch lurking inside that corpus we call Trump is no longer president come January, the self-inflicted damage to democratic institutions, conducted in the “marketplace of ideas,” may be irreparable. At least for quite some time.

I would propose that maximum inescapable bullshit derives from two conditions. The first is that, even with the best intentions, we lose almost all context (i.e. accountability) for the inputs that drive much of our political discourse. And the second is that our capacity for discourse itself is overwhelmed by volunteering to be constantly outraged, without a break to process what may not be useful information.

Take the shocking story of Kyle Rittenhouse as an example. The facts, as they are known, indicate that he should be charged with murder and his mother charged as an accessory; and the extent to which any police officers condoned his presence prior to the shooting should be investigated. That ought to be enough for the moment. In a pre-Facebook world, this incident would not be a top story every day, let alone every ten seconds. But on Facebook, I am reminded several times a day by incendiary memes that a Christian group has raised about a hundred thousand dollars to support Rittenhouse. Now, pause a moment.

The point is not whether those memes refer to a true story. It is certainly a plausible story, and there are undeniably many addle-minded Americans who think of Rittenhouse, now apparently nicknamed the “Kenosha Kid,” as a hero. But also recognize that the meme itself happens to be exactly the kind of post that a professional Russian troll at the Internet Research Agency would generate in the wake of these shootings. At the same time, his buddy sitting next to him will be posting the counter-meme designed to stir up outrage on the other side, as it were. Further, the story could be true and grist for the propaganda mill at the same time.

Take the matter a step further, and try to investigate the claim made in the meme, and where do we begin? With a Google search, naturally. But alas, the first results may or may not be credible. Or perhaps the real story is not exactly what the headlines, or the meme, promised. Can we trust ourselves, or one another, to vet a story that fulfills our confirmation bias? Either way, it’s a lot of damn work when we multiply this example by dozens of stories every hour.

How many of us pause to consider the source of an image with its provocative headline? Was it made by a well-meaning citizen trying to get the word out? A professional troll in St. Petersburg? A fourteen-year-old kid who spends his time on 8Chan and gets his kicks (lulz) pranking the Boomers? Or was it made by domestic provocateurs, who want to incite violence? Answer:  all of the above.

I said that part two of attaining maximum inescapable bullshit is that we volunteer to be constantly outraged, and usually to little or no purpose. Whether based in truth or not, what is the value of chronically engaging with that meme, and a thousand others just like it, every day for weeks on end? Awareness is not increased. Knowledge is not enhanced or refined. Justice is not served any more rapidly or more properly. And for sure, underlying policy issues are not addressed.

It may feel cathartic to click the Angry button or to share the meme with likeminded friends, and some people may even believe they are helping to spread useful and important information. But this is almost never true. All that is being accomplished is self-immolation. We pour gasoline on our own smoldering rage, and the only tangible goal being achieved is that those who truly want to destroy democratic societies put another hashmark on their side of the tally board. That, and Facebook gets to monetize it all.

To reiterate, it does not really matter whether that one story about a group raising money for Rittenhouse is true. It is an example among millions of memes or video clips that have an astounding power to color our perception of events for which we are not present. And this is the same potent force that inspires people like Rittenhouse to do what they do.

Harvard researcher Joan Donovan, in a recent article for MIT Technology Review, describes the rise of “riot porn” presently dominating right-wing propaganda, amplifying the narrative, mostly through video clip editing and manipulation, that BLM protestors are a threat to white people everywhere. “With riot porn,” writes Donovan, “what moves someone from watching to showing up is the potential for participating in a violent altercation. The motivating factor is the hope to live out fantasies of taking justice into their own hands …”

We mock QAnon, which, as it turns out, really is the result of Boomers who don’t know how the internet works. But I would remind my wise and learned Xers of the political left, who believe they seek the truth, that they helped soften the ground for the now thriving conspiracy of the “deep state” with their many tweets and shares etc. during the Obama years. Amid all that misguided enthusiasm for leakers and the generalized fear of government surveillance online, few paused to contemplate a future Rittenhouse being radicalized on platforms where we would have been happy to have the FBI watching, and possibly able to intervene, before he left for Kenosha.

While propaganda of this nature is currently more prominent and effective on the far right—not least because Trump exploits the narrative—my broader point is that we are all consuming at least sampler plates of “riot porn” or “outrage porn” or however we want to describe it. Tribalism is reinforced and galvanized such that we seem headed for an inevitable clash of Hatfields and McCoys on a national scale. I hope not. But for sure, we have to come to grips with the fact that social media is not only not the solution, it is the problem.

With credit to my eldest for this observation, our reality is now Schrödinger’s Cat: everything on the internet is both true and not true at the same time. We are, of course, witnessing so much extreme conduct in contemporary society that no story is beyond plausibility. But this also means that no story is beyond deniability. The line between conspiracy theory and reality is murky to say the least, and that’s hard enough to track. But we can know for certain that none of the meme-based, click-bait impressions feeding our emotional fires has any accountability whatsoever. Yet we continue to comment and share and to teach the machines and the manipulators how to do an even better job of messing with us next month.

For years, the internet industry and its well-funded network of tech-utopians insisted that these platforms are, at worst, neutral lenses revealing society for what it is, or, at best, improving the world by giving everyone a platform for the “exchange of ideas.” Any criticism that these platforms might be used to severely damage the democratic institutions they were allegedly going to help was met with an impatient eye-roll, a *sigh* at the naïve luddites, resistant to change and innovation. But if it is not clear by now that these platforms are the primary catalysts in democracy’s decline, that alone proves we have achieved maximum inescapable bullshit.