Many years ago while still in college, I was on the train to New York City—a beautiful ride along the eastern banks of the Hudson River. Several rows from me sat a family of American tourists who caught my attention when I heard the dad say, “Look kids, there’s Alcatraz.”
Reasonably confident that Alcatraz sits on an island in San Francisco Bay, I glanced over to see the man pointing across the river and his two children gazing at the fortress of the Military Academy at West Point. The layers of incorrectness in this guy’s armchair tour-guiding are more or less the kind of “information age” social media has amplified at an unprecedented scale. And I remain unconvinced that there is a policy, either public or private, that can do much about it.
In her June 13 article on FastCompany, former Google VP of Communications Jessica Powell recommends a behavioral and cultural shift whereby those who work for Silicon Valley join, rather than scorn, the conversation about regulatory and practice changes in her industry …
… we tend to close ranks when our industry is criticized. We view an attack on Facebook’s handling of content moderation, for example, as something that might threaten all the legal protections given to platforms—and if we’re Twitter, YouTube, Reddit, or any other social platform, we tend to go silent. Rather than providing a more nuanced critique about what Facebook may be doing wrong (or right), we attack the outsiders as Luddites who want social media apps to remove all speech we don’t like or demand that platforms like Facebook should hire a million humans to moderate controversial content.
Kudos to Powell for identifying the habits of her colleagues and for proposing the very reasonable notion that “Some of the best ideas about how to sensibly regulate tech can probably be found in the Valley….” Fair enough. And by all means, cooperation would be a refreshing change coming from that industry. But what if it is necessary to flip the narrative on the nature of what we’re really talking about?
Powell does not acknowledge in her article the Olympian hubris with which Silicon Valley has proclaimed its innovations to be so universally beneficial for society that the critics should shut up and thank them for their largesse. Remember that all proposals, even those seeking to mitigate new forms of tech-enabled crime, have generally been rebuffed with some variation on the caveat that we must avoid stifling the greatest tool for democracy ever created.
That premise continues to distort the nature of the conversation, and it is probably false. Rather than assuming a platform like Facebook is a positive social force with a few negative effects that need mitigating, it may actually be the case that it is a negative social force with a few nice qualities. We can wish one another Happy Birthday, keep up with our friends, and even have some very substantive discussions; but what if social media as an information source is fundamentally toxic to democratic institutions and we have to address it in those terms? That would be a very different conversation from the one being had right now, and I cannot imagine “the Valley” Powell describes would be eager to table the premise that much of what they do is, on the whole, destructive.
In a blog post for Luminate, an organization that funds and supports efforts to improve democratic institutions around the world, David Madden writes about tackling Digital Threats to Democracy …
“Over the last nine months, three of the world’s biggest countries have held elections: Brazil, Nigeria, and Indonesia…. Social media posed a clear threat to the elections of all three countries.
“…a video on Facebook three days before the polls suggested that the [Brazilian] Workers’ Party candidate Fernando Haddad was planning to distribute ‘gay kits’ to child care centers.
“…a rumor that [Nigerian] President Buhari was dead and that a body double was serving in his place. This rumor was so widely shared that President Buhari had to publicly deny that he had been ‘cloned’.
“Online ‘hoaxes’ increased dramatically during the [Indonesian] campaign and the deadly violence that followed the announcement of the election result was the natural conclusion of the incendiary battle fought on social media and WhatsApp.”
These, and many stories like them, are the reason we are finally having a conversation about platform moderation and regulation. Congress just last week held hearings on the subject of “deepfakes” because it is clearly the next technological innovation about to be weaponized and aimed at democratic institutions. But this kind of purposeful disinformation, as devastating as it has proven to be, may be more easily mitigated than the ordinary, subtle effect the new “commons” has in steadily eroding the delicate fabric that holds liberal democracies together.
For instance, because it’s in my wheelhouse, I’ll note a recent blog post published by my friends at Creative Future about Google funding academics who just happen to espouse anti-copyright views. When I scrolled by their post on Facebook yesterday morning, there were 260 comments, so I took a peek. I know. Never read the comments. But the problem with that rule of thumb is that the comments are us. Bots and trolls notwithstanding, they are an anthology of what we think and why we think it, except that we are perhaps just egomaniacal enough that we like to believe the peanut gallery is everybody else.
Just in response to this one blog post, commenters unpacked their views on liberals, conservatives, capitalism, socialism, climate change, and academia overall, plus at least one reference to Nazis and, of course, one guy reciting Scripture. It’s like a Richard Scarry book illustrating Crazytown, where the village hosts a public forum on one topic, and a literal food fight would be a step forward in thoughtful discourse. Whether in agreement or not with a given post—even just straight reportage—the subject is too often subsumed by other matters about which the commenters seem equally uninformed. Look kids, there’s Alcatraz!
We are all ignorant about a great many things. Even the most gifted astrophysicist who knows way more than you and I about the cosmos is still searching for what she does not know. But with regard to the kind of informed public that is understood to be essential for the survival of a democratic society, the capacity of social media to amplify misinformation is not only unprecedented but also not limited to the most obvious forms of chicanery. The effects are subtle and mundane. The simple act of typing and publishing a misinformed comment more deeply etches a false narrative into one’s worldview. Multiply this phenomenon across every story on every topic, and it is little surprise that democratic institutions are in dire straits.
As others have noted, one of the greatest hazards posed by “deepfakes” technology is the prospect of universal plausible deniability—the opportunity for anyone to claim that video evidence of them saying or doing something is fake when it is not. Anticipating that environment feels as though we are standing on the edge of an event horizon different from the technological singularity predicted to occur when the machines become self-aware. In this scenario, the singularity is caused by the paradox of infinite doubt—a gravitational force from which reliable information cannot escape because there is no longer sufficient consensus as to what a reliable source looks like.
That may be needlessly pessimistic, but to the extent that we already see evidence of this phenomenon having tangible and devastating effects, social media must be recognized as an underlying cause of the problem, which means that it is unlikely to be its own antidote. Certainly not without a very different conversation that begins with Jessica Powell’s friends and colleagues dropping their “making the world better” rhetoric. Because it seems abundantly clear that they are doing no such thing.