Social Media’s Power to Manipulate

The FCC, in a narrow vote this week, elected to adopt rules protecting the principle known as “net neutrality.” The agency will now regulate broadband as a public utility in order to ensure that ISPs cannot discriminate between one kind of customer and another; namely, they may not speed up traffic for higher-paying users or slow it down for lower-paying ones. Many view the vote for “net neutrality” as a win for universal digital rights; others see it as government overreach into the free market; and both sides claim to be on the side of free speech. I have expressed doubts before about some of the more extreme fears of a world without net neutrality, and Alex Pareene, writing for Gawker, reminds us that the “win” in this case can be credited to what he calls a “cartel” of Internet industry giants like Google, Microsoft, eBay, Facebook, and Amazon. Whether or not net neutrality is essential for maintaining a level playing field for competing interests, one rhetorical talking point overused by all parties is the idea of preserving the Internet as “the greatest tool for free expression and democracy.” It ought to be, but the more I consider this premise, the more I wonder if it may prove to be one of the worst lies of the digital age, no matter how fast it travels through the proverbial tubes.

In an article posted on Ars Technica, cryptographer and security expert Bruce Schneier explains just how easy it can be to manipulate public opinion through social media. I’ve been on this kick since starting this blog: the idea that more expression can actually make the electorate less well informed, not because people are necessarily dumb or lazy, but because the way we now take in information bombards us with aggregated impressions. Unless one really has time to research and calmly consider every story that pops up on a Facebook feed, for instance, it’s almost impossible not to be influenced by the constant flow of impressions made by images, headlines, and memes. The more these impressions jibe with our own biases, the more they solidify those prejudices, making us less receptive to ideas that might challenge our thinking. And because a walled garden like Facebook tends to expose us to items based on our group of like-minded Friends and on an algorithmic interpretation of our tastes and interests, the experience is far more circumscribed than we necessarily notice. Schneier offers a relatively simple example of possible political manipulation:

“During the 2012 election, Facebook users had the opportunity to post an ‘I Voted’ icon, much like the real stickers many of us get at polling places after voting. There is a documented bandwagon effect with respect to voting; you are more likely to vote if you believe your friends are voting, too. This manipulation had the effect of increasing voter turnout 0.4% nationwide. So far, so good. But now imagine if Facebook manipulated the visibility of the ‘I Voted’ icon based on either party affiliation or some decent proxy of it: ZIP code of residence, blogs linked to, URLs liked, and so on. It didn’t, but if it did, it would have had the effect of increasing voter turnout in one direction. It would be hard to detect, and it wouldn’t even be illegal. Facebook could easily tilt a close election by selectively manipulating what posts its users see. Google might do something similar with its search results.”
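To appreciate how little machinery Schneier’s scenario would require, here is a minimal sketch in Python. Everything in it is invented for illustration (the ZIP codes, the user records, the function names), and it stands in for a few lines a platform could bury deep inside its feed-rendering code; it is emphatically not anything Facebook actually did.

```python
# A hypothetical sketch of Schneier's selective-visibility scenario.
# All names and data below are invented for illustration; this is not
# any platform's real code or behavior.

# Crude stand-in for a party-affiliation proxy: ZIP codes the
# manipulator believes lean toward the favored side.
FAVORED_ZIP_CODES = {"94110", "98103", "02139"}

def show_i_voted_banner(user_zip: str) -> bool:
    """Display the turnout nudge only to users whose ZIP code suggests
    they support the favored side; everyone else never sees it."""
    return user_zip in FAVORED_ZIP_CODES

users = [
    {"name": "alice", "zip": "94110"},  # sees the banner, nudged to vote
    {"name": "bob", "zip": "73301"},    # never knows the banner exists
]

for user in users:
    visible = show_i_voted_banner(user["zip"])
    print(f"{user['name']}: banner {'shown' if visible else 'hidden'}")
```

The arithmetic is what makes this unsettling: if the banner nudges turnout up by roughly 0.4% among those who see it, then showing it only to one side’s likely supporters converts a neutral civic feature into a partisan margin of several tenths of a point, in elections that are sometimes decided by less.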

The implications of such a scenario are rather staggering. Forget lobbying and other forms of corporate meddling in the political process. A vested interest could sway an election at the local, state, or federal level without anyone really noticing, and, paradoxically, through the very technologies we believe give us better insight and a stronger voice in the process. The Internet can hardly be a tool for transparency if we’re each looking through our own opaque set of lenses; combine that habit of human nature with deliberate manipulation of the data, and you get the opposite of the new enlightenment that was supposed to come with the digital age. Again from Schneier:

“The first listing in a Google search result gets a third of the clicks, and if you’re not on the first page, you might as well not exist. The result is that the Internet you see is increasingly tailored to what your profile indicates your interests are. This leads to a phenomenon that political activist Eli Pariser has called the ‘filter bubble’: an Internet optimized to your preferences, where you never have to encounter an opinion you don’t agree with.”
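The mechanism behind the bubble is easy to imagine as a ranking function. The sketch below is a toy, with made-up affinity scores and stories rather than any platform’s actual algorithm, but it shows how a feed sorted by predicted agreement structurally demotes dissent: nothing is censored, it just never makes the short list of items that get seen (and, per Schneier’s point about first listings, the top slots get nearly all the attention).

```python
# Toy model of a "filter bubble": rank stories by a user's learned
# affinities and surface only the top of the list. The affinity scores,
# topics, and stories are all invented for illustration.

# Positive scores come from what the user liked; negative scores from
# what the user hid or scrolled past.
user_affinity = {
    "pro_net_neutrality": 0.9,
    "anti_net_neutrality": -0.4,
    "sports": 0.1,
}

stories = [
    {"title": "Why the FCC vote is a win for the open Internet",
     "topics": ["pro_net_neutrality"]},
    {"title": "Op-ed: net neutrality is government overreach",
     "topics": ["anti_net_neutrality"]},
    {"title": "Local team wins championship",
     "topics": ["sports"]},
]

def predicted_engagement(story: dict) -> float:
    # Sum affinities over the story's topics; disagreeable topics carry
    # negative weight, so dissenting items sink in the ranking.
    return sum(user_affinity.get(t, 0.0) for t in story["topics"])

feed = sorted(stories, key=predicted_engagement, reverse=True)

# Only the top slots are effectively visible; the op-ed is never
# removed, it simply ranks below the fold.
for story in feed[:2]:
    print(story["title"])
```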

I think Pariser’s “filter bubble” accurately describes the human component so often excluded from the discussion, but I will also be presumptuous enough to examine this notion of “an opinion you don’t agree with.” Depending on how we define that phrase, I actually find the social media experience chock-full of opinions with which I disagree, and I could spend an unreasonable amount of time sifting through them in search of competing ideas. After all, opinions and ideas are not quite the same thing. Competing ideas are about problem solving. Competing opinions are mostly theater, and media loves theater. Cable TV news produced many years’ worth of passive theater comprising competing opinions in the service of few ideas. Social media turns this into participatory theater, adding an element of narcissism that exacerbates the divisiveness in our political process. In short, I suspect the environment is ideal for vested interests to subtly steer political outcomes without our noticing. The promise that the Internet “democratizes” information certainly sounds progressive, but the ways in which we interact with these tools as they are designed don’t necessarily foster progress; and to Schneier’s point, the result doesn’t have to be the least bit democratic.

The Opaqueness of Transparency

It isn’t just perception. Partisan politics in the U.S. really is worse than ever, if we’re to take the word of those who’ve been on the inside for the last 40 or so years. I was listening to an audio version of Tom Brokaw’s book The Time of Our Lives recently, and hearing him describe today’s dysfunctional intransigence in Washington, I began to wonder why, in an age of so much transparency and mass communication, matters appear to be getting worse. More to the point, is it possible that we’ve created an illusion of transparency while ignoring the fact that the way we tend to use digital media produces the opposite of rational and cordial discourse, among both the electors and the elected?

Brokaw writes, “…modern means of communication are now so pervasive and penetrating, they might as well be part of the air we breathe and, therefore, they require tempered remarks from all sides. Otherwise that air just becomes more and more toxic until it is suffocating.” Sounds a lot like the blogosphere to me.

Those who vehemently pursue transparency through technology, everyone from hacktivists to open-government scholars, offer the premise that transparency through Web technology is not only good but a near panacea for our political ills. And while we certainly don’t want to see our elected officials get away with crimes and misdemeanors, I’m not convinced that the theater of rapid-response outrage we’ve created does much to thwart real mischief; rather, it incubates some of the more toxic viruses in day-to-day governance, namely blind partisanship and associative reasoning.

The promise of transparency is meant to be an independent voter’s ideal — that with digital access to real data, one can make unbiased decisions based on the particulars of a given situation. In theory, information trumps partisanship. Through on-demand access to raw information and fact-checks, the argument goes, we can more accurately judge our elected officials as individuals rather than broadly associating them with the views of a particular party.  So why does our national dialogue sound more and more like a cacophony of lunatics?

One problem with the case for this kind of transparency is that it assumes data are neutral, which is a very techie point of view: to a computer, of course, data are neutral and interpreted by a fairly rigid code. In human affairs, and politics in particular, data are interpreted by a code called emotion, which is both subjective and dynamic. Computers like data; humans like stories. That’s why an editorial about a proposed bill in Congress beats reading the bill itself, and a catchy, 140-character headline beats both.

While access to unbiased, raw data does exist, it seems to have very little to do with how Web 2.0 is affecting our political evolution. To the contrary, social media is highly emotional, and it is referred to as a “hive mind” for good reason. The instinct to react not only as individuals but as mobs has been given an outlet through these technologies. What we often end up with is our worst political instincts on speed, pretending to be a more enlightened process. If anything, the way we use social media and blogs seems to foster more associative reasoning, which allows (or forces) all issues to be painted with very broad brushes. This is the opposite of the habit transparency is meant to produce.

Look at the way the tech blogs lit up last week over Rep. Lamar Smith’s appointment to the chairmanship of the House Science Committee. It’s one thing if Representative Smith has a dodgy record on actual science, but TechCrunch and others ran headlines decrying the appointment because Smith was the lead author of SOPA. Even if you hated that bill for what it was, calling it anti-science or anti-technology makes as much sense as calling speed limits anti-Lamborghini. It’s a straight-up cheap shot with a clear political agenda. After all, Smith is a Texas Republican and the author of SOPA, so attacking him is good for scoring points among progressives, who will never bother to make the distinction that SOPA had nothing to do with science; neither will they bother to look up Smith’s record on science issues, even though they could with a couple of mouse clicks. In this case, the tech blogs are behaving much like FOX News, looking at all stories through a single filter.

I bring up this example because it’s recent, but also because some of those bloggers are the same folks who proclaim the unmitigated value of transparency while using the technology to promulgate more of the opaque, associative political nonsense that makes our politics so dysfunctional. As a side note, Smith’s record on science is relatively unclear at this point, aside from past remarks doubting the veracity of some climatologists; but let’s not confuse that with bills designed to stop an international criminal enterprise, shall we?

What we think of as transparency is often a lot of reactionary noise that can itself be a barrier to a better-functioning representative government. Sure, there are a lot of folks in Congress with some pretty wacky ideas, but why does it seem that even moderate representatives can’t sit down to rationally discuss issues that shouldn’t be partisan in the first place? Might the digital, global microscope itself be a cause of divisiveness? We have to imagine governing (and heaven forbid compromising!) in an environment where every syllable, every meeting, every gesture inspires instantaneous, and often erroneous, condemnation that goes viral.

Mass media, and especially the blogosphere, demands conflict because humans like stories. But representative government can only function through compromise and cooperation, which at any given moment leaves multiple constituencies unsatisfied, and now they’re all on Twitter. Hence, it seems only one of two things can result from all this so-called transparency: 1) governance stalls; or 2) functional governance happens in even greater secrecy than before the digital age. It certainly wouldn’t be the first time technology produced exactly the opposite of the conditions it promised.

It’s true that with a lot of time and effort we can use the Internet to look objectively through a clear glass at our politics; but I suspect that most of the time the glass is clouded, and we’re always seeing at least a half reflection of ourselves. If the people’s representatives are dysfunctional, then it’s possible the people are as well. The question remains as to how the design of these technologies might be playing a role in that dysfunction.