Schrödinger’s Reality

In January of 2019, I wrote a post asking if, thanks to the internet, we had achieved a state of maximum inescapable bullshit. But whether or not we were there almost two years ago, we are certainly there now. It took less than a decade for the internet—and, it turns out, mostly Facebook—to destroy American democracy. I know that’s fatalistic, but even if the psychotic would-be monarch lurking inside that corpus we call Trump is no longer president come January, the self-inflicted damage to democratic institutions, conducted in the “marketplace of ideas,” may be irreparable. At least for quite some time.

I would propose that maximum inescapable bullshit derives from two conditions. The first is that, even with the best intentions, we lose almost all context (i.e. accountability) for the inputs that drive much of our political discourse. And the second is that our capacity for discourse itself is overwhelmed by volunteering to be constantly outraged, without a break to process what may not be useful information.

Take the shocking story of Kyle Rittenhouse as an example. The facts, as they are known, indicate that he should be charged with murder and his mother charged as an accessory; and the extent to which any police officers condoned his presence prior to the shooting should be investigated. That ought to be enough for the moment. In a pre-Facebook world, this incident would not be a top story every day, let alone every ten seconds. But on Facebook, I am reminded several times a day by incendiary memes that a Christian group has raised about a hundred thousand dollars to support Rittenhouse. Now, pause a moment.

The point is not whether those memes refer to a true story. It is certainly a plausible story, and there are undeniably many addle-minded Americans who think of Rittenhouse, now apparently nicknamed the “Kenosha Kid,” as a hero. But also recognize that the meme itself happens to be exactly the kind of post that a professional Russian troll at the Internet Research Agency would generate in the wake of these shootings. At the same time, his buddy sitting next to him will be posting the counter-meme designed to stir up outrage on the other side, as it were. Further, the story could be true and grist for the propaganda mill at the same time.

Take the matter a step further, and try to investigate the claim made in the meme, and where do we begin? With a Google search, naturally. But alas, the first results may or may not be credible. Or perhaps the real story is not exactly what the headlines, or the meme, promised. Can we trust ourselves, or one another, to vet a story that fulfills our confirmation bias? Either way, it’s a lot of damn work when we multiply this example by dozens of stories every hour.

How many of us pause to consider the source of an image with its provocative headline? Was it made by a well-meaning citizen trying to get the word out? A professional troll in St. Petersburg? A fourteen-year-old kid who spends his time on 8chan and gets his kicks (lulz) pranking the Boomers? Or was it made by domestic provocateurs who want to incite violence? Answer: all of the above.

I said that part two of attaining maximum inescapable bullshit is that we volunteer to be constantly outraged, and usually to little or no purpose. Whether based in truth or not, what is the value of chronically engaging with that meme, and a thousand others just like it, every day for weeks on end? Awareness is not increased. Knowledge is not enhanced or refined. Justice is not served any more rapidly or more properly. And for sure, underlying policy issues are not addressed.

It may feel cathartic to click the Angry button or to share the meme with likeminded friends, and some people may even believe they are helping to spread useful and important information. But this is almost never true. All that is being accomplished is self-immolation. We pour gasoline on our own smoldering rage, and the only tangible goal being achieved is that those who truly want to destroy democratic societies put another hashmark on their side of the tally board. That, and Facebook gets to monetize it all.

To reiterate, it does not really matter whether that one story about a group raising money for Rittenhouse is true. It is an example among millions of memes or video clips that have an astounding power to color our perception of events for which we are not present. And this is the same potent force that inspires people like Rittenhouse to do what he did.

Harvard researcher Joan Donovan, in a recent article for MIT Technology Review, describes the rise of “riot porn” presently dominating right-wing propaganda, amplifying the narrative, mostly through video clip editing and manipulation, that BLM protestors are a threat to white people everywhere. “With riot porn,” writes Donovan, “what moves someone from watching to showing up is the potential for participating in a violent altercation. The motivating factor is the hope to live out fantasies of taking justice into their own hands …”

We mock QAnon, which, as it turns out, really is the result of Boomers who don’t know how the internet works. But I would remind my wise and learned Xers of the political left, who believe they seek the truth, that they helped soften the ground for the now thriving conspiracy of the “deep state” with their many tweets, shares, etc. during the Obama years. All that misguided enthusiasm for leakers and the generalized fear of government surveillance online did not seem to pause to contemplate the future Rittenhouse being radicalized on platforms where we would have been happy to have the FBI watching and possibly able to intervene before he left for Kenosha.

While propaganda of this nature is currently more prominent and effective on the far right—not least because Trump exploits the narrative—my broader point is that we are all consuming at least sampler plates of “riot porn” or “outrage porn” or however we want to describe it. Tribalism is reinforced and galvanized such that we seem headed for an inevitable clash of Hatfields and McCoys on a national scale. I hope not. But for sure, we have to come to grips with the fact that social media is not only not the solution, it is the problem.

With credit to my eldest for this observation, our reality is now Schrödinger’s Cat: everything on the internet is both true and not true at the same time. We are, of course, witnessing so much extreme conduct in contemporary society that no story is beyond plausibility. But this also means that no story is beyond deniability. The line between conspiracy theory and reality is murky to say the least, and that’s hard enough to track. But we can know for certain that none of the meme-based, click-bait impressions feeding our emotional fires has any accountability whatsoever. Yet we continue to comment and share and to teach the machines and the manipulators how to do an even better job of messing with us next month.

For years, the internet industry and its well-funded network of tech-utopians insisted that these platforms are, at worst, neutral lenses revealing society for what it is, or, at best, improving the world by giving everyone a platform for the “exchange of ideas.” Any criticism that these platforms might be used to severely damage the democratic institutions they were allegedly going to help was met with an impatient eye-roll, a *sigh* at the naïve luddites, resistant to change and innovation. But if it is not clear by now that these platforms are the primary catalysts in democracy’s decline, that alone proves we have achieved maximum inescapable bullshit. 

What Netflix’s ‘The Great Hack’ Gets Right

I’ll tell the story again.  This blog began the day a friend of mine—a very smart one—shared an article on Facebook that was patently untrue.  When I confronted him about this, he responded that he cared more about the “issue” than the veracity of the article.  The double-take triggered by his cognitive dissonance led me to poke around and discover that the false article he had shared was hosted on multiple websites, including The Huffington Post.  This sparked the hypothesis that the unprecedented volume of repetition (a.k.a. virality) made possible by the internet breeds dangerous levels of consensus around false narratives.  Hence the name The Illusion of More.

That was eight years ago and small potatoes.  Last week, I watched the new documentary The Great Hack, made for Netflix and directed by Karim Amer and Jehane Noujaim.  The film’s focus is Cambridge Analytica and the (now-dissolved) company’s use of social media data to manipulate major political outcomes around the world—including the UK’s Leave.EU campaign and the American presidential election of 2016.  For anyone who somehow missed this general story, the film provides a solid overview of events along with details you might have missed and engaging profiles of the key whistleblowers and investigators who shed light on Cambridge Analytica’s activities.

In an article for The Nation, Micah L. Sifry describes what the film “gets wrong,” namely its strong implication that Cambridge Analytica literally won the election for Donald Trump.  On this one binary question, we could certainly run around the barn ad infinitum.  Those who do not like Trump will be more eager to accept this conclusion, while those who support him will remain understandably resistant to any allegation that his presidency is the result of tech-enabled chicanery.  Sifry writes …

“The inference, never blatantly stated but simply conveyed by all the tricks of modern documentary-making—striking digital graphics meant to illustrate how our data leaks into the hands of others, ominous music, and alluring close-ups of [whistleblower Brittany] Kaiser as she watches the scandal unfold on television—is that Trump won because Cambridge Analytica gave him a secret edge.”

While not a completely unfair criticism of the film, Sifry is guilty of constructing at least a diminutive straw man when he focuses on the legitimacy of Trump’s election rather than the film’s broader and more urgent message—that the democratic process is unequivocally being hacked.  This is what the film gets right, and the point is emphasized by one of its main subjects, Carole Cadwalladr, The Guardian journalist most responsible for investigating the Cambridge Analytica story.  

Cadwalladr has made clear in her articles, talks, and in this film that billionaire ideologues, using “weapons grade” information technology and massive amounts of Facebook user data, sought to fracture the democratic process through calculated disinformation campaigns and, as she states, “It’s not about left or right, Leave or Remain, Trump or not Trump.  It’s about whether it is possible to have a free and fair election ever again.” 

“The Great Hack wants to make its viewers care about data rights and the dangers of modern misinformation campaigns, but unfortunately is itself a slick piece of misinformation that plays artfully on the prejudices and misunderstandings rife in its targeted audience,” Sifry states.

Again, this may be a fair criticism of the film itself, but one which Sifry uses to draw an unfair conclusion about its relevance. I personally agree that The Great Hack is often too slick for its purposes.  While it may be a market reality that documentarians often need to employ glossy, theatrical production values (e.g. lively compositing effects) in their films in order to compete for audience attention, Sifry is justified in asserting that the creative choices made by the producers do imbue the film with the tone of propaganda that can dilute the seriousness of its reportage. This is especially unfortunate when the film’s subject matter is propaganda and manipulation itself.  

Consequently, the fair critique that the film is, at times, heavy-handed provides Sifry et al. the opportunity to dismiss its main narrative, which is to describe how Facebook, a platform marketed as a means to “connect people,” has been weaponized to drive people apart.  This phenomenon is irrefutable and should not be brushed aside just because the producers got a bit lost in style over substance.  The substance is still there.  The story itself is arguably the greatest conspiracy in the history of modern republics. And it is still happening.

Notably, in order to bolster his criticism of the documentary, Sifry cites evidence that manipulative advertising is only so effective, stating, “When it comes to voters’ decisions about their choice of candidate, most forms of paid political persuasion, including TV ads, online ads, mailers, phone calls, and door-knocking, have no discernible effect in terms of changing people’s minds.”  

That may be true if we are talking about traditional political advertising, especially in a pre-Facebook world; but we are far from that particular Kansas, Toto.  Sifry falls into the same trap many people do by mis-measuring this period using pre-digital-age metrics.  The psychological effects of online “engagement” are nothing like the psychological effects of traditional advertising; and this is true even without an intermediary using your personal data to target your personal hot buttons.

If we go back to the example of my friend sharing a false news article, he did not perceive that material as an advertisement.  He perceived it as information, which just happened to support a rationale for a conclusion (a.k.a. deep story) he had come to believe about the National Defense Authorization Act (NDAA) of 2012.  He was wrong about his underlying complaint, but not alone in his wrongness. Dozens of my friends were sharing the same misinformation about the bill, which was being sloppily reported all over the place; and the mere fact that this apparent consensus kept appearing on everyone’s newsfeed fostered a self-fulfilling prophecy.  But here’s the important common thread, in my view, linking that moment to the present …

The phenomenon that was metastasizing then, and which has come around to bite us now, was the steady erosion of trust in the pillars of democratic society.  If one of the most dangerous aspects of Donald Trump is that he consistently undermines or contradicts the intelligence community, it is noteworthy that many of my Democratic and left-leaning friends were doing exactly the same thing just a few years ago—usually because of some careless bit of fluff they found on the internet, and often because “digital rights” groups like the Electronic Frontier Foundation were sowing just as much distrust in those organizations as the current president does today.  This is not an indictment of the principle of oversight, only an observation that living in a paradigm of universal distrust is a vicious cycle from which there is no escape.

Consider the moment we’re in this month.  The FBI says white-supremacist ideology—which just happens to have drawn strength from the techno-libertarian approach to cyber policy—poses a significant and growing threat to domestic security.  So, if one finds it appalling that Tucker Carlson can call this evidence a hoax in the same breath in which he calls election interference a hoax, it is worth noting that the ground for his brand of bullshit was softened through social media by every user across the political spectrum finding different rationales to dismantle trust in agencies like the FBI.

After all, it was not very long ago that most of my left-leaning friends were endorsing guys like Julian Assange and Ed Snowden as essential antidotes to the American intelligence apparatus.  This sensibility was also fueled by the steady drumbeat of tech-utopians, who continue to promote the illusion that “the internet” somehow provides us with transparency as an alternative to trusting any experts who might actually know what they’re doing—a folly that is admittedly complicated now that we have an Executive who is eager to undermine expertise in every department.  Consequently, it has been interesting to see that many of the same people who thought WikiLeaks was the panacea for conspiracy are now hoping against hope that the men and women in the intelligence community are doing their jobs despite assaults on their integrity coming from their own leadership.

In this context of not knowing whom to trust, Sifry is not entirely unfair to criticize the filmmakers’ apparent infatuation with Brittany Kaiser, the former Obama intern who became a major Cambridge Analytica executive and then turned whistleblower against her colleagues.  And he is almost certainly justified in saying that, “[Kaiser] is not the first person to pump a small role in [Obama’s] campaign into a career-making calling card; Cambridge Analytica is not the first political technology vendor to made [sic] big, unproven claims about its abilities. But we live in the age of silicon snake oil.”

We do live in an age of silicon snake oil, but that fact alone is one reason Sifry misses the point of The Great Hack as both information and metaphor.  Even if Cambridge Analytica achieved a fraction of what is presented in the documentary, it would still be a major scandal and, perhaps most importantly, would demonstrate why the sales pitch that social media would promote better forms of democracy was the apotheosis of modern snake oil swindles—so beautifully wrapped in its shiny hubris that the hucksters believed it themselves.  And many still do.

Metaphorically, Sifry is naïve to recommend dismissing Kaiser for her character flaws rather than identifying with her because of them.  After all, to the extent that her personal narrative is accurately portrayed in the documentary, it seems to me that her arc from progressive-minded idealist, to overpaid hack for a technology company doing very bad things, to sobered individual trying to remedy some of what she did wrong, does mirror the broader narrative we have been watching unfold with regard to Silicon Valley over the last two years.  And that is fundamentally what The Great Hack gets right.

Flipping the Narrative on the Effects of Social Media

Many years ago, while still in college, I was on the train to New York City—a beautiful ride along the eastern banks of the Hudson River.  Several rows from me sat a family of American tourists who caught my attention when I heard the dad say, “Look kids, there’s Alcatraz.”

Reasonably confident that Alcatraz sits on an island in San Francisco Bay, I glanced over to see the man pointing across the river and his two children gazing at the fortress of the Military Academy at West Point.  The layers of incorrectness in this guy’s armchair tour-guiding are more or less the kind of “information age” social media has amplified at an unprecedented scale.  And I remain unconvinced that there is a policy, either public or private, that can do much about it.

In her June 13 article for Fast Company, former Google VP of Communications Jessica Powell recommends a behavioral and cultural shift whereby those who work for Silicon Valley join, rather than scorn, the conversation about regulatory and practice changes in her industry …

… we tend to close ranks when our industry is criticized. We view an attack on Facebook’s handling of content moderation, for example, as something that might threaten all the legal protections given to platforms—and if we’re Twitter, YouTube, Reddit, or any other social platform, we tend to go silent. Rather than providing a more nuanced critique about what Facebook may be doing wrong (or right), we attack the outsiders as Luddites who want social media apps to remove all speech we don’t like or demand that platforms like Facebook should hire a million humans to moderate controversial content. 

Kudos to Powell for identifying the habits of her colleagues and for proposing the very reasonable notion that “Some of the best ideas about how to sensibly regulate tech can probably be found in the Valley….”  Fair enough.  And by all means, cooperation would be a refreshing change coming from that industry. But what if it is necessary to flip the narrative on the nature of what we’re really talking about?  

Powell does not acknowledge in her article the Olympian hubris with which Silicon Valley has proclaimed its innovations to be so universally beneficial for society that the critics should shut up and thank them for their largesse. Remember that all proposals, even those seeking to mitigate new forms of tech-enabled crime, have generally been rebuffed with some variation on the caveat that we must avoid stifling the greatest tool for democracy ever created.  

That premise continues to distort the nature of the conversation, and it is probably false.  Rather than assuming a platform like Facebook is a positive social force with a few negative effects that need mitigating, it may actually be the case that it is a negative social force with a few nice qualities.  We can wish one another Happy Birthday, keep up with our friends, and even have some very substantive discussions; but what if social media as an information source is fundamentally toxic to democratic institutions and we have to address it in those terms?   That would be a very different conversation from the one being had right now, and I cannot imagine “the Valley” Powell describes would be eager to table the premise that much of what they do is, on the whole, destructive.  

In a blog post for Luminate, an organization that funds and supports efforts to improve democratic institutions around the world, David Madden writes about tackling Digital Threats to Democracy …

“Over the last nine months, three of the world’s biggest countries have held elections: Brazil, Nigeria, and Indonesia…. Social media posed a clear threat to the elections of all three countries.

“…a video on Facebook three days before the polls suggested that the [Brazilian] Workers’ Party candidate Fernando Haddad was planning to distribute ‘gay kits’ to child care centers.

“…a rumor that [Nigerian] President Buhari was dead and that a body double was serving in his place. This rumor was so widely shared that President Buhari had to publicly deny that he had been ‘cloned.’

“Online ‘hoaxes’ increased dramatically during the [Indonesian] campaign, and the deadly violence that followed the announcement of the election result was the natural conclusion of the incendiary battle fought on social media and WhatsApp.”

These, and many stories like them, are the reason the conversation is finally being had about platform moderation and/or regulation.  Congress just last week held hearings on the subject of “deepfakes” because it is clearly the next technological innovation about to be weaponized and aimed at democratic institutions.  But this kind of purposeful disinformation, as devastating as it has proven to be, may be more easily mitigated than the ordinary, subtle effect the new “commons” has in steadily eroding the delicate fabric that holds liberal democracies together.  

For instance, because it’s in my wheelhouse, I’ll note a recent blog post published by my friends at CreativeFuture on the topic that Google has funded academics who just happen to espouse anti-copyright views.  When I scrolled by their post on Facebook yesterday morning, there were 260 comments, so I took a peek.  I know. Never read the comments.  But the problem with that rule of thumb is that the comments are us.  Bots and trolls notwithstanding, they are an anthology of what we think and why we think it, except that we are perhaps just egomaniacal enough to believe the peanut gallery is everybody else.

Just in response to this one blog post, commenters unpacked their views on liberals, conservatives, capitalism, socialism, climate change, and academia overall, plus at least one reference to Nazis and, of course, one guy reciting Scripture.  It’s like a Richard Scarry book illustrating Crazytown, where the village hosts a public forum on one topic, and a literal food fight would be a step forward in thoughtful discourse.  Whether in agreement or not with a given post—even just straight reportage—the subject is too often subsumed by other matters about which the commenters seem equally uninformed. Look kids, there’s Alcatraz!

We are all ignorant about a great many things.  Even the most gifted astrophysicist who knows way more than you and I about the cosmos is still searching for what she does not know.  But with regard to the kind of informed public that is understood to be essential for the survival of a democratic society, the capacity of social media to amplify misinformation is not only unprecedented, but it is not limited to the most obvious forms of chicanery.  The effects are subtle and mundane.  The simple act of typing and publishing a misinformed comment more deeply etches a false narrative into one’s world view.  Multiply this phenomenon across every story on every topic, and it is little surprise that democratic institutions are in dire straits. 

As others have noted, one of the greatest hazards posed by “deepfakes” technology is the prospect of universal plausible deniability—the opportunity for anyone to claim that video evidence of them saying or doing something is fake when it is not.  Anticipating that environment feels as though we are standing on the edge of an event horizon different from the technological singularity predicted to occur when the machines become self-aware.  In this scenario, the singularity is caused by the paradox of infinite doubt—a gravitational force from which reliable information cannot escape because there is no longer sufficient consensus as to what a reliable source looks like.

That may be needlessly pessimistic, but to the extent that we already see evidence of this phenomenon having tangible and devastating effects, social media must be recognized as an underlying cause of the problem, which means that it is unlikely to be its own antidote. Certainly not without a very different conversation that begins with Jessica Powell’s friends and colleagues dropping their “making the world better” rhetoric.  Because it seems abundantly clear that they are doing no such thing.