Social Platforms Discover Neutrality is Not an Option

Social media platforms were practically designed to foster whataboutism. So, we should hardly be surprised that this lazy form of erroneous reasoning dominates so much of our contemporary politics. At least that was one thought that crossed my mind while reading the recent BuzzFeed article describing why so many Facebook employees are lately coming to grips with the kind of harm being done by their platform—the platform they earnestly believed was a force for good.

The headline “Hurting people at scale” comes from a comment written by software engineer Max Wang, who, upon his departure from the company after seven years, left behind a 24-minute video that reporters Ryan Mac and Craig Silverman describe as a “clear-eyed hammering of Facebook’s leadership and decision-making over the previous year.”

But one comment stuck out for me in the fairly extensive article. It cites Yaël Eisenstat, who formerly led Facebook’s election ads integrity team. She describes a meeting in which the team discussed whether to remove a “conservative” group’s ad containing material anathema to the platform’s community standards. She is quoted in the article thus: “But then a policy person chimed in and gave the both-sides argument. They actually wrote something like, ‘There’s bad behavior on both sides.’ And I remember thinking, What does that have to do with anything?”

This all-too-common, tribalist refrain alluding to the “good and bad on both sides” arguably attained idiot’s nirvana the day in 2017 when the current president used those words to compare white supremacists to the demonstrators opposing them at Charlottesville. Because, of course, there are not two sides to every story. Until quite recently, in fact, it was not up for debate whether the guy carrying the Nazi flag is the bad guy. Not everybody gets a seat at the table.

Except, of course, thanks to social platforms, the table was extended exponentially so that everyone could have a seat. And for more than a decade, the industry promoted—and the public largely accepted—the premise that this cybernetic largesse would be a fillip to democracy worldwide. Now, as we watch the experiment fail, and platform founders like Twitter’s Jack Dorsey respond by removing even the president’s tweets (when they are considered hazardously misleading or incitements to violence), it is easy to believe that this approach to moderation might be too little, too late.

Only in the last several weeks—and only in response to direct pressure from employees or major advertisers—has the leadership at Facebook taken any action to mitigate hate speech or disinformation on its platform, preferring instead to dig in its heels on the ill-conceived premise that social platforms should strive for neutrality. Never mind that neutrality is not even the default setting for Facebook, which manipulates what we see all the time; neutrality can never be an option for any organization that intends to be a force for good.

“There’s a real culture within Facebook to assume good intent. To me, this was a case where you cannot assume good intent for a symbol that could be Nazi imagery.”

(Anonymous employee commenting to BuzzFeed regarding the company’s hesitation to remove Trump campaign ads depicting a triangle symbol once used by the Nazis to identify prisoners as political enemies of the Reich.)

Good is not neutral. Good is a moral or practical judgment that an individual or organization defines. And then, having defined what constitutes good, sides must be chosen. Claiming to be a force for good can never be reconciled with the kind of adolescent fence-straddling espoused by Mark Zuckerberg when he makes public statements that he is “personally disgusted” by [incitements of violence, hate speech, white supremacy, etc.] but does not believe his platform should be “the arbiter of truth.” That is a statement of economic interest, and nothing more.

The zeal with which internet industry leaders maintained their belief in, or paid lip service to, operating “neutral” platforms resulted in poor stewardship of their walled gardens. They sold the public on a policy of letting the weeds do their thing on the assumption that the good plants would win out in the end. At least that’s what they said to all of us and to the people they hired. In the C-suites, though, it is more plausible to assume that their occupants did not (and likely still do not) care one way or another. The market value of Facebook depends on scale and volume of interaction, and an anti-Semitic page can be as valuable as a page dedicated to feeding the homeless. It’s all just data.

Social platforms did not create the “both sides” fallacy, the handmaid of whataboutism. But social platforms were (and are) the petri dish where the virus exploded into a different kind of pandemic, a pandemic of ignorance, incompetence, and a contempt for reason and propriety that infests the highest offices in government. When I watched Rep. Alexandria Ocasio-Cortez’s frank and clear-eyed response to Rep. Ted Yoho’s non-apology for verbally assaulting her on the Capitol steps, it struck me that the story was about more than the chronic sexism the congresswoman addressed. It was the art of the pwn (pronounced “pone”), the gamer/internet-culture word for “utter domination of an opponent,” often by insult alone, superseding the value of political debate.

After all, it was clear from Rep. Yoho’s floor statement that his conduct was not a lapse or an aberration. His inscrutable testimony that he could not apologize for his “passion or for loving my God, my family and my country” demonstrated that he believes he was fundamentally right when he called a colleague a “fucking bitch” on the Capitol steps in front of reporters. And he surely knows that this conduct is exactly the kind of politics an increasingly self-righteous electorate wants to see now—a politics where even Congress mirrors the worst aspects of social media, and where unconscionable behavior will be rationalized by the fallacy of whataboutism.

That incident, more than a dramatic sideshow to be washed away by the news cycle, is one example of some very real battle lines being drawn in a fight for the soul of the United States today. The lines are not fuzzy, and neutrality is not an option. If, next month, a Democratic congressman accosts a Republican Member in the same manner, he will be wrong. Period. And when the president’s son tweets a COVID-related video featuring a doctor (I guess we’ll call her a witch doctor?) who is known to have described a correlation between demon rape and medical conditions, and Twitter sanctions Don Jr.’s account, that is not “silencing conservative voices.”

There are not two sides to every story. So, it is good to read that many of Facebook’s employees have finally arrived, albeit late, at this conclusion. Though I would have thought that, of all people, computer engineers would have been among the first to recognize when something is binary.


Photo source: njnightsky

Social (Media) Distancing

Between the headline and the Share button.

Access to credible, useful information could not be more essential than it is in the present moment. But as we are all presumably more attentive than ever to our social media feeds, we are correspondingly bombarded with more garbage content. This crisis is a perfect opportunity for trolls to ply their trade. Whether it’s idiots having a laugh, professional mischief-makers working for foreign agencies, or any number of vested interests, there is no shortage of intentionally misleading material online. But that may not be the greatest concern.

Unfortunately, the expansion of the news market—from the earliest days of cable TV to the breadth of Facebook’s role as a virtual newsstand—has forced even venerable sources to take a more slapdash approach to their reporting. In order to remain relevant (i.e. extant), organizations with distinguished pedigrees are chronically guilty of publishing stories designed to grab, terrify, and outrage more than to inform or promote thoughtful dialogue. Almost worse, even when the reportage is soundly crafted, the headlines too often scream at us because they are designed to provoke (mostly negative) social media interaction. And far too many of us are guilty of reacting to and/or sharing only the headlines, where the distinctions between accurate and inflammatory can be rather subtle.

For instance, while acknowledging that we are justified in distrusting Attorney General William Barr on the grounds that he shows little respect for constitutional principles, let’s compare two headlines in which Rolling Stone follows up on a story first reported by Politico:

Politico: DOJ seeks new emergency powers amid coronavirus pandemic. 

Rolling Stone: DOJ Wants to Suspend Constitutional Rights During Coronavirus Emergency

To be clear, the actual story is cause for concern, or at least awareness. Assuming the central reports are accurate, the DOJ apparently wants Congress to draft new legislation that would empower courts to detain arrested individuals indefinitely while the courts are shut down or delayed during this crisis. The problem is that this infringes rights protected by the Sixth Amendment, and, as mentioned, one can be forgiven for assuming that AG Barr might not give a damn. Both Rolling Stone and Politico do acknowledge that legislation of this nature is unlikely to find any purchase in the current House of Representatives, but it is not the story itself that prompted me to write this post.

I wanted to call attention to the psychological effect of the Rolling Stone headline. With a constant awareness that we have a president who is ignorant about the Constitution and an AG who has shown contempt for the Constitution, that headline almost immediately provokes dystopian mental montages. Before one even chooses which emoji to click, one cannot help but conjure images of smashed presses and jackbooted thugs suppressing speech as Barr takes an Orwellian Sharpie to pesky items like the establishment clause. The whole proto-fascist narrative plays out in the time it takes to share the headline with a comment like, “This is what these guys have wanted all along.” But who reads the story?

Ascribing authoritarian motives to this administration is at least half true, which is one reason why sensational headlines can be so dangerous—because we need to know who is trying to cross which lines and why. We are at a very precarious moment in history—not only because we are deeply concerned for our safety, but because American institutions have been under assault since long before we collided with the vector of COVID-19—and long before Trump and his acolytes brought their own sledgehammers to the party.

As with the harm to journalism, the abandonment of institutions and the devaluation of expertise is a dire consequence of “democratizing” information through digital platforms. We exacerbate the problem by sharing fragments and impressions that feed anxieties that—perhaps because they are plausible—are the concerns most in need of informed skepticism. 

Now that most of us have sequestered ourselves in an effort to mitigate the spread of a literal virus, those of us fortunate enough to have the time and ability to keep up with the feeds might also do what we can to mitigate the spread of viral misinformation. To that end, it would probably help to put some distance (i.e. time) between encountering a headline and clicking Share. There is no urgency to respond to a story or to share it immediately. That urgency is an illusion fostered by the medium itself, and our responses to the stimuli principally serve the platform company’s interest in data-harvesting.

If Facebook users, for instance, committed to not sharing anything until they’ve read it, this might help slow the spread of misinformation. Better yet, before sharing, why not take a moment to provide friends with a summary of what the story actually says, or fails to say? Doing this would emphasize how often stories are out of sync with their headlines. In a time when we have plenty of reasons to be worried and plenty of reasons to be angry, it is especially important that we worry about, and are angry about, things that are actually true.

This seems like a very good time to step outside the whirlwind of what scholar Alice Marwick calls our deep stories and apply some critical thinking, even if this means taking a moment to look for counterfactuals in a story about some party or entity who deserves some measure (or a whole truckload) of our scorn. There has never been a time when accurate information matters more than it does right now. Social media, in many ways not always visible, is designed to frustrate that need. If we have the time, and take the time, to provide badly needed context for one another, a social platform can be a wonderful source of useful information; but absent that context, the deluge of images and headlines alone can be a steady flow of gasoline on an already smoldering fire. 

Also see:  Reducing the Spread of Misinformation Online from the Markkula Center for Applied Ethics at Santa Clara University. 


Virus art by: Kateryna_Kon

Flipping the Narrative on the Effects of Social Media

Many years ago while still in college, I was on the train to New York City—a beautiful ride along the eastern banks of the Hudson River.  Several rows from me sat a family of American tourists who caught my attention when I heard the dad say, “Look kids, there’s Alcatraz.”  

Reasonably confident that Alcatraz sits on an island in San Francisco Bay, I glanced over to see the man pointing across the river and his two children gazing at the fortress of the Military Academy at West Point. The layers of incorrectness in this guy’s armchair tour-guiding are more or less the kind of “information age” social media has amplified at an unprecedented scale. And I remain unconvinced that there is a policy, either public or private, that can do much about it.

In her June 13 article in Fast Company, former Google VP of Communications Jessica Powell recommends a behavioral and cultural shift whereby those who work in Silicon Valley join, rather than scorn, the conversation about regulatory and practice changes in her industry …

… we tend to close ranks when our industry is criticized. We view an attack on Facebook’s handling of content moderation, for example, as something that might threaten all the legal protections given to platforms—and if we’re Twitter, YouTube, Reddit, or any other social platform, we tend to go silent. Rather than providing a more nuanced critique about what Facebook may be doing wrong (or right), we attack the outsiders as Luddites who want social media apps to remove all speech we don’t like or demand that platforms like Facebook should hire a million humans to moderate controversial content. 

Kudos to Powell for identifying the habits of her colleagues and for proposing the very reasonable notion that “Some of the best ideas about how to sensibly regulate tech can probably be found in the Valley….”  Fair enough.  And by all means, cooperation would be a refreshing change coming from that industry. But what if it is necessary to flip the narrative on the nature of what we’re really talking about?  

Powell does not acknowledge in her article the Olympian hubris with which Silicon Valley has proclaimed its innovations to be so universally beneficial for society that the critics should shut up and thank them for their largesse. Remember that all proposals, even those seeking to mitigate new forms of tech-enabled crime, have generally been rebuffed with some variation on the caveat that we must avoid stifling the greatest tool for democracy ever created.  

That premise continues to distort the nature of the conversation, and it is probably false. Rather than assuming a platform like Facebook is a positive social force with a few negative effects that need mitigating, it may actually be the case that it is a negative social force with a few nice qualities. We can wish one another Happy Birthday, keep up with our friends, and even have some very substantive discussions; but what if social media as an information source is fundamentally toxic to democratic institutions and we have to address it in those terms? That would be a very different conversation from the one being had right now, and I cannot imagine “the Valley” Powell describes would be eager to entertain the premise that much of what they do is, on the whole, destructive.

In a blog post for Luminate, an organization that funds and supports efforts to improve democratic institutions around the world, David Madden writes about tackling Digital Threats to Democracy …

“Over the last nine months, three of the world’s biggest countries have held elections: Brazil, Nigeria, and Indonesia…. Social media posed a clear threat to the elections of all three countries.

“…a video on Facebook three days before the polls suggested that the [Brazilian] Workers’ Party candidate Fernando Haddad was planning to distribute ‘gay kits’ to child care centers.

“…a rumor that [Nigerian] President Buhari was dead and that a body double was serving in his place. This rumor was so widely shared that President Buhari had to publicly deny that he had been ‘cloned.’

“Online ‘hoaxes’ increased dramatically during the [Indonesian] campaign, and the deadly violence that followed the announcement of the election result was the natural conclusion of the incendiary battle fought on social media and WhatsApp.”

These, and many stories like them, are the reason the conversation about platform moderation and/or regulation is finally happening. Just last week, Congress held hearings on the subject of “deepfakes” because this is clearly the next technological innovation about to be weaponized and aimed at democratic institutions. But this kind of purposeful disinformation, as devastating as it has proven to be, may be more easily mitigated than the ordinary, subtle effect the new “commons” has in steadily eroding the delicate fabric that holds liberal democracies together.

For instance, because it’s in my wheelhouse, I’ll note a recent blog post published by my friends at Creative Future on the topic of Google funding academics who just happen to espouse anti-copyright views. When I scrolled past their post on Facebook yesterday morning, there were 260 comments, so I took a peek. I know. Never read the comments. But the problem with that rule of thumb is that the comments are us. Bots and trolls notwithstanding, they are an anthology of what we think and why we think it, except that we are perhaps just egomaniacal enough to believe the peanut gallery is everybody else.

Just in response to this one blog post, commenters unpacked their views on liberals, conservatives, capitalism, socialism, climate change, and academia overall, plus at least one reference to Nazis and, of course, one guy reciting Scripture. It’s like a Richard Scarry book illustrating Crazytown, where the village hosts a public forum on one topic, and a literal food fight would be a step forward in thoughtful discourse. Whether in agreement or not with a given post—even just straight reportage—the subject is too often subsumed by other matters about which the commenters seem equally uninformed. Look kids, there’s Alcatraz!

We are all ignorant about a great many things. Even the most gifted astrophysicist, who knows far more than you and I do about the cosmos, is still searching for what she does not know. But with regard to the kind of informed public understood to be essential for the survival of a democratic society, the capacity of social media to amplify misinformation is not only unprecedented but also not limited to the most obvious forms of chicanery. The effects are subtle and mundane. The simple act of typing and publishing a misinformed comment more deeply etches a false narrative into one’s worldview. Multiply this phenomenon across every story on every topic, and it is little surprise that democratic institutions are in dire straits.

As others have noted, one of the greatest hazards posed by “deepfakes” technology is the prospect of universal plausible deniability—the opportunity for anyone to claim that video evidence of them saying or doing something is fake when it is not. Anticipating that environment feels as though we are standing on the edge of an event horizon different from the technological singularity predicted to occur when the machines become self-aware. In this scenario, the singularity is caused by the paradox of infinite doubt—a gravitational force from which reliable information cannot escape because there is no longer sufficient consensus as to what a reliable source looks like.

That may be needlessly pessimistic, but to the extent that we already see evidence of this phenomenon having tangible and devastating effects, social media must be recognized as an underlying cause of the problem, which means that it is unlikely to be its own antidote. Certainly not without a very different conversation, one that begins with Jessica Powell’s friends and colleagues dropping their “making the world better” rhetoric. Because it seems abundantly clear that they are doing no such thing.