Schrödinger’s Reality

In January of 2019, I wrote a post asking if, thanks to the internet, we had achieved a state of maximum inescapable bullshit. Whether or not we were there almost two years ago, we are certainly there now. It took less than a decade for the internet—and, it turns out, mostly Facebook—to destroy American democracy. I know that’s fatalistic, but even if the psychotic would-be monarch lurking inside that corpus we call Trump is no longer president come January, the self-inflicted damage to democratic institutions, conducted in the “marketplace of ideas,” may be irreparable. At least for quite some time.

I would propose that maximum inescapable bullshit derives from two conditions. The first is that, even with the best intentions, we lose almost all context (i.e., accountability) for the inputs that drive much of our political discourse. The second is that our capacity for discourse itself is overwhelmed by our volunteering to be constantly outraged, without a break to process information that may not even be useful.

Take the shocking story of Kyle Rittenhouse as an example. The facts, as they are known, indicate that he should be charged with murder and his mother charged as an accessory; and the extent to which any police officers condoned his presence prior to the shooting should be investigated. That ought to be enough for the moment. In a pre-Facebook world, this incident would not be a top story every day, let alone every ten seconds. But on Facebook, I am reminded several times a day by incendiary memes that a Christian group has raised about a hundred thousand dollars to support Rittenhouse. Now, pause a moment.

The point is not whether those memes refer to a true story. It is certainly a plausible story, and there are undeniably many addle-minded Americans who think of Rittenhouse, now apparently nicknamed the “Kenosha Kid,” as a hero. But also recognize that the meme itself happens to be exactly the kind of post that a professional Russian troll at the Internet Research Agency would generate in the wake of these shootings. At the same time, his buddy sitting next to him will be posting the counter-meme designed to stir up outrage on the other side, as it were. Further, the story could be true and grist for the propaganda mill at the same time.

Take the matter a step further, and try to investigate the claim made in the meme, and where do we begin? With a Google search, naturally. But alas, the first results may or may not be credible. Or perhaps the real story is not exactly what the headlines, or the meme, promised. Can we trust ourselves, or one another, to vet a story that fulfills our confirmation bias? Either way, it’s a lot of damn work when we multiply this example by dozens of stories every hour.

How many of us pause to consider the source of an image with its provocative headline? Was it made by a well-meaning citizen trying to get the word out? A professional troll in St. Petersburg? A fourteen-year-old kid who spends his time on 8Chan and gets his kicks (lulz) pranking the Boomers? Or was it made by domestic provocateurs, who want to incite violence? Answer: all of the above.

I said that part two of attaining maximum inescapable bullshit is that we volunteer to be constantly outraged, and usually to little or no purpose. Whether based in truth or not, what is the value of chronically engaging with that meme, and a thousand others just like it, every day for weeks on end? Awareness is not increased. Knowledge is not enhanced or refined. Justice is not served any more rapidly or more properly. And for sure, underlying policy issues are not addressed.

It may feel cathartic to click the Angry button or to share the meme with like-minded friends, and some people may even believe they are helping to spread useful and important information. But this is almost never true. All that is being accomplished is self-immolation. We pour gasoline on our own smoldering rage, and the only tangible goal being achieved is that those who truly want to destroy democratic societies put another hash mark on their side of the tally board. That, and Facebook gets to monetize it all.

To reiterate, it does not really matter whether that one story about a group raising money for Rittenhouse is true. It is an example among millions of memes or video clips that have an astounding power to color our perception of events for which we are not present. And this is the same potent force that inspires people like Rittenhouse to do what he did.

Harvard researcher Joan Donovan, in a recent article for MIT Technology Review, describes the rise of “riot porn” presently dominating right-wing propaganda, amplifying the narrative, mostly through video clip editing and manipulation, that BLM protestors are a threat to white people everywhere. “With riot porn,” writes Donovan, “what moves someone from watching to showing up is the potential for participating in a violent altercation. The motivating factor is the hope to live out fantasies of taking justice into their own hands …”

We mock QAnon, which, as it turns out, really is the result of Boomers who don’t know how the internet works. But I would remind my wise and learned Xers of the political left, who believe they seek the truth, that they helped soften the ground for the now-thriving “deep state” conspiracy with their many tweets and shares during the Obama years. All that misguided enthusiasm for leakers, and the generalized fear of government surveillance online, never paused to contemplate a future Rittenhouse being radicalized on platforms where we would have been happy to have the FBI watching—and possibly able to intervene before he left for Kenosha.

While propaganda of this nature is currently more prominent and effective on the far right—not least because Trump exploits the narrative—my broader point is that we are all consuming at least sampler plates of “riot porn” or “outrage porn” or however we want to describe it. Tribalism is reinforced and galvanized such that we seem headed for an inevitable clash of Hatfields and McCoys on a national scale. I hope not. But for sure, we have to come to grips with the fact that social media is not only not the solution, it is the problem.

With credit to my eldest for this observation, our reality is now Schrödinger’s Cat: everything on the internet is both true and not true at the same time. We are, of course, witnessing so much extreme conduct in contemporary society that no story is beyond plausibility. But this also means that no story is beyond deniability. The line between conspiracy theory and reality is murky to say the least, and that’s hard enough to track. But we can know for certain that none of the meme-based, click-bait impressions feeding our emotional fires has any accountability whatsoever. Yet we continue to comment and share and to teach the machines and the manipulators how to do an even better job of messing with us next month.

For years, the internet industry and its well-funded network of tech-utopians insisted that these platforms are, at worst, neutral lenses revealing society for what it is, or, at best, improving the world by giving everyone a platform for the “exchange of ideas.” Any criticism that these platforms might be used to severely damage the democratic institutions they were allegedly going to help was met with an impatient eye-roll, a *sigh* at the naïve luddites, resistant to change and innovation. But if it is not clear by now that these platforms are the primary catalysts in democracy’s decline, that alone proves we have achieved maximum inescapable bullshit. 

Social Platforms Discover Neutrality Is Not an Option

Social media platforms were practically designed to foster whataboutism. So, we should hardly be surprised that this lazy form of erroneous reasoning dominates so much of our contemporary politics. At least that was one thought that crossed my mind while reading the recent BuzzFeed article describing why so many Facebook employees are lately coming to grips with the kind of harm being done by their platform—the platform they earnestly believed was a force for good.

The headline “Hurting people at scale” comes from a comment written by software engineer Max Wang, who, upon his departure from the company after seven years, left behind a 24-minute video that reporters Ryan Mac and Craig Silverman describe as a “clear-eyed hammering of Facebook’s leadership and decision-making over the previous year.”

But one comment stuck out for me in the fairly extensive article. It cites Yaël Eisenstat, who formerly led Facebook’s election ads integrity team. She describes a meeting in which it was discussed whether to remove a “conservative” group’s ad that contained material anathema to the platform’s community standards. She is quoted in the article thus: “But then a policy person chimed in and gave the both-sides argument. They actually wrote something like, ‘There’s bad behavior on both sides.’ And I remember thinking, What does that have to do with anything?”

This all-too-common tribalist refrain alluding to the “good and bad on both sides” is a sentiment that arguably attained idiot’s nirvana the day in 2017 the current president used those words to compare white supremacists to demonstrators opposing them at Charlottesville. Because, of course, there are not two sides to every story. Until quite recently, in fact, it was not up for debate whether the guy carrying the Nazi flag is the bad guy. Everybody does not get a seat at the table.

Except, of course, thanks to social platforms, the table was extended exponentially so that everyone could have a seat. And for more than a decade, the industry promoted—and the public largely accepted—the premise that this cybernetic largesse would be a fillip to democracy worldwide. Now, as we watch the experiment fail, and platform founders like Twitter’s Jack Dorsey respond by removing even the president’s tweets (if they are considered hazardously misleading or inciting violence), it is easy to believe that this approach to moderation might be too little, too late.

Only in the last several weeks—and only in response to direct pressure from employees or major advertisers—has the leadership at Facebook taken any action to mitigate hate speech or disinformation on its platform, preferring instead to dig in its heels on the ill-conceived premise that social platforms should strive for neutrality. Never mind that neutrality is not even the default setting for Facebook, which manipulates what we see all the time; the deeper point is that neutrality can never be an option for any organization that intends to be a force for good.

“There’s a real culture within Facebook to assume good intent. To me, this was a case where you cannot assume good intent for a symbol that could be Nazi imagery.” Anonymous employee commenting to BuzzFeed regarding the company’s hesitation to remove Trump campaign ads depicting a triangle symbol once used by Nazis to identify prisoners as political enemies of the Reich.

Good is not neutral. Good is a moral or practical judgment that an individual or organization defines. And then, having defined what constitutes good, sides must be chosen. Claiming to be a force for good can never reconcile the kind of adolescent fence-straddling espoused by Mark Zuckerberg when he makes public statements that he is “personally disgusted” by [incitements of violence, hate speech, white supremacy, etc.], but does not believe his platform should be “the arbiter of truth.” That is a statement of economic interest, and nothing more.

The zeal with which internet industry leaders maintained their belief in, or paid lip-service to, operating “neutral” platforms resulted in poor stewardship of their walled gardens. They sold the public on a policy of letting the weeds do their thing on the assumption that the good plants would win out in the end. At least that’s what they said to all of us and to the people they hired. In the C-Suites, though, it is more plausible to assume that its occupants did not (and likely still do not) care one way or another. The market value of Facebook depends on scale and volume of interaction, and an anti-Semitic page can be as valuable as a page dedicated to feeding the homeless. It’s all just data.

Social platforms did not create the “both sides” fallacy, the handmaid of whataboutism. But social platforms were (and are) the petri dish where the virus exploded into a different kind of pandemic, a pandemic of ignorance, incompetence, and a contempt for reason and propriety that infests the highest offices in government. When I watched Rep. Alexandria Ocasio-Cortez’s frank and clear-eyed response to Rep. Ted Yoho’s non-apology for verbally assaulting her on the Capitol steps, it struck me that the story was about more than the chronic sexism the congresswoman addressed. It was the art of the pwn (pronounced “pone”)—gamer and internet-culture slang for “utter domination of an opponent,” often by insult alone—superseding the value of political debate.

After all, it was clear from Rep. Yoho’s floor statement that his conduct was not a lapse or an aberration. His inscrutable testimony that he could not apologize for his “passion or for loving my God, my family and my country” demonstrated that he believes he was fundamentally right when he called a colleague a “fucking bitch” on the Capitol steps in front of reporters. And he surely knows that this conduct is exactly the kind of politics an increasingly self-righteous electorate wants to see now—a politics where even Congress mirrors the worst aspects of social media, and where unconscionable behavior will be rationalized by the fallacy of whataboutism.

That incident, more than just a dramatic side show to be washed away by the news cycle, is just one example of some very real battle lines being drawn in a fight for the soul of the United States today. The lines are not fuzzy, and neutrality is not an option. If next month, a Democratic congressman accosts a Republican Member in the same manner, he will be wrong. Period. When the president’s son tweets a COVID-related video of a woman (I guess we’ll call her a witch doctor?) who is known to have described a correlation between demon rape and medical conditions, and Twitter sanctions Don Jr.’s account, that is not “silencing conservative voices.”

There are not two sides to every story. So, it is good to read that many of Facebook’s employees have finally arrived, albeit late, at this conclusion. Though I would have thought that, of all people, computer engineers would have been among the first to recognize when something is binary.


Is It Finally Time to Boycott Facebook?

It is impossible to look at the landscape of America, at this burning city on a hill, and not weep. Or scream.

Because this blog advocates the legal rights of creators (copyrights), and because those rights historically enjoy bipartisan support, I have tried to maintain a politically balanced tone when writing about most policy matters. That was a lot easier before Donald Trump became President. It is not my fault the Republican party is presently stuck with a leader about whom the kindest thing one can say is that he’s a moron. That’s a problem real conservatives and Republicans are going to have to work out for themselves. And if they don’t, these fires are not going to be extinguished for a very long time.

With regard to the broader editorial focus of this blog—the one that questions the value of the digital-age experiment and the industry behind it—it is now impossible to discuss that topic without placing Trump, and his supporters, squarely in the column of an unqualified evil—an enemy of humanity and republican democracy. Not that anyone would accuse me of being particularly kind about Trump in other posts, but today, there is a more acute question that needs to be asked:  if we want to end this dystopian circus of an administration, would it help to boycott Facebook? 

Ever since the 2016 election and revelations of data manipulation and fake news, we have been inundated by editorials opining as to what social media platforms should or should not do about various forms of toxic content on their sites. The utopian narrative that “all content is speech, and platforms owe a duty to the speech right” has been cracking under the weight of its own folly for three years, and it finally snapped last week when Twitter and Facebook took divergent paths on the matter of fact-checking the President.  

Apropos Trump’s largely theatrical spat with Twitter and the toothless Executive Order he signed on Thursday, scholar Zeynep Tufekci, writing for The Atlantic, expounds on some of the reasons why Trump really has no intention of tightening the legislative screws on Silicon Valley—even if he could. In particular, Tufekci notes the symbiosis that exists between Trump and Facebook …

“The relationship is so smooth that Trump said Zuckerberg congratulated the president for being ‘No. 1 on Facebook’ at a private dinner with him. Bloomberg has reported that Facebook’s own data-science team agreed, publishing an internal report concluding how much better Trump was in leveraging ‘Facebook’s ability to optimize for outcomes.’ This isn’t an unusual move for Facebook and its clients. Bloomberg has reported that Facebook also offered its ‘white glove’ services to the Philippine strongman Rodrigo Duterte, to help him ‘maximize the platform’s potential and use best practices.’”

When Zuckerberg appeared on Fox News and criticized Twitter for fact-checking a handful of Trump’s tweets, most of the response I saw was well-earned mockery. I shared the meme that said “Mark Zuckerberg—Dead At 36—Says Social Media Sites Should Not Fact Check Posts.” I mean, that’s pretty funny.

All sneering aside, though, Zuckerberg’s statement on Fox only repeated the same rhetoric that has been nodded at for years by internet users across the political spectrum—all buying the bullshit that these platforms make democracy work better. “I just believe strongly that Facebook shouldn’t be the arbiter of truth of everything that people say online,” Zuckerberg said. And that is not news. It’s the same gibberish that Big Tech, the EFF, the ACLU, Public Knowledge, Techdirt, and every other techno-utopian voice has been repeating for more than a decade.

It is ultimately necessary that people understand why Zuckerberg’s position is misguided outside the context of fact-checking the most dangerous president in modern history. But in the meantime, if the goal is to stymie Trump’s assault on America, then one thing we could do is to stop giving Zuckerberg so much of our time and data for free. Every post, especially every substantive post, feeds the data machine that, according to Tufekci’s statement above, team Trump happens to be so good at leveraging. And for which team Facebook is apparently congratulating them. Further, Tufekci tells us …

“In 2016, Facebook’s own internal research team found that ’64% of all extremist group joins are due to our recommendation tools’ and, if left unchecked, Facebook would feed users ‘more and more divisive content in an effort to gain user attention and increase time on the platform.’ The same research team also found that fake news, spam, clickbait, and inauthentic users inevitably included ‘a larger infrastructure of accounts and publishers on the far right than on the far left.’”

So what do we do with this information? Because the data seem to suggest that Americans who want to disarm Trump—and that happens to be most Americans—should in fact deny Facebook their voluntary input. Far more meaningful than refusing to patronize a business because one does not like the CEO’s politics, if the lion’s share of Americans simply bailed on Facebook, that would seriously mess up Zuckerberg’s game and, by extension, Trump’s game. We could just MySpace that shit. But can we?

I know. It’s like we’re all teenagers again (okay, in my generation) talking to that girlfriend or boyfriend, saying, “No, you hang up first.” It’s why Zuckerberg really doesn’t care if we call him a smug pinhead on his own platform. As long as we don’t leave, he’s laughing all the way to a very large bank. Our real friends and family are on Facebook. It’s the only way some of us keep in touch at all, even without the restrictions imposed by a pandemic. So, unless we all say “One, two, three, go,” and hang up simultaneously, it ain’t gonna happen. One friend over the weekend posted a simple statement that seems to sum up how many are lately feeling …

“It’s a tough question. My friends are all here and I use it to keep track of photos and promote [my work]. But yes evil and destroying our culture so … ???”

Evil and destroying our culture. Who would hesitate to abandon such a service? And how distinct is that sentiment from Facebook’s original tech-bro imperative motto, “Move fast and break things”? Anyone who reads this blog knows that I believe social media does more harm than good for democratic societies. In between the connections and the celebrations, it is almost impossible to avoid feeding on a steady diet of outrageous content—much of which is not only untrue but has been purposely crafted by professional trolls working to exacerbate division and hate.

Add to this mix the real racists, anti-Semites, misogynists, and accelerationists—and a president who unrepentantly throws fuel on all those fires—and we need to understand that there is no way for the rest of us to entirely avoid feeding the riot as long as we remain part of the data set. Twitter may be the medium we think of as Trump’s favorite propaganda toy, but it looks like Facebook is the most powerful weapon in his arsenal. And like it or not, we are all providing the ammo.

On the other hand, the point of a boycott (even if it were possible) is not necessarily to shut down a business, but to force it to change its practices. And that’s the larger question—not whether we need to leave Facebook per se, but to ask what kind of cultural and policy changes are necessary in order to maximize the positive effects of social platforms and minimize the harm they cause. The techno-utopian faith that the good will overwhelm the bad (i.e. the wisdom of crowds) has proven false. A minority of bad actors online, like a few bad cops or a few violent protestors, can inflict permanent damage. And the challenges presented are systemic—cultural, legal, and economic. 

The folly of Trump’s Executive Order, oddly enough, points to the first step: recognizing what the EO does not—that social platforms are not defenders of the speech right, and that the speech right itself has been grotesquely distorted thanks, in large part, to social platforms. If we can begin with the premise that not everything posted to the internet is protected speech—and that even if it is protected speech, platforms have no obligation to support it—we might be able to recognize that the plan for better social platform governance is not so novel as the industry tries to make it seem. The developers ebulliently call their spaces “communities” but have thus far rarely looked to community for guidance.

It may be arduous in practice to weed out the hate mongers and provocateurs, but it is not so complicated in principle as Silicon Valley and its PR machine have made it sound. Facebook is no more obligated to host a white supremacist page than my local cafe is to put a KKK poster in its window. Communities say No to bad actors all the time. Facebook, Twitter, Reddit, et al can do the same thing, and it is long past the moment when they should stop wringing their hands each time they finally make a moral decision. Like when Cloudflare dropped The Daily Stormer in 2017, and one of its team members wondered if that was “the day the internet dies.” Time to grow up. 

It is a tragic reality for the nation that far too much material that fits the descriptions “misleading,” “violence-inciting,” “hate-mongering,” and “harassing” has been mislabeled “conservative” because the President uses social media to amplify that kind of content. Consequently, I get why Facebook feels it has a Trump problem, but that’s tough shit for Zuckerberg. We all have a Trump problem. He is a moral hazard. A berserker in a nation trying to hold civilization together with its bare hands. And Zuckerberg’s alleged neutrality does not make him a principled actor. It makes him an arms dealer profiting from both sides of a war.