Social Media’s Power to Manipulate

The FCC, in a narrow vote this week, elected to adopt rules protecting the principle known as “net neutrality.” The agency will now regulate broadband as a public utility in order to ensure that ISPs cannot discriminate between one kind of customer and another; specifically, they may not speed up traffic for higher-paying users or slow it down for lower-paying ones. Many view the vote for “net neutrality” as a win for universal digital rights, while others see it as government overreach into the free market; both sides claim to be on the side of free speech. I have expressed doubts before about some of the more extreme fears of a world without net neutrality, and Alex Pareene, writing for Gawker, reminds us that the “win” in this case can be credited to what he calls a “cartel” of Internet industry giants like Google, Microsoft, eBay, Facebook, and Amazon. Whether or not net neutrality is essential to maintaining a level playing field for competing interests, one rhetorical talking point overused by all parties is the idea of preserving the Internet as “the greatest tool for free expression and democracy.” It ought to be, but the more I consider this premise, the more I wonder whether it may prove to be one of the worst lies of the digital age, no matter how fast it travels through the proverbial tubes.

In an article posted on Ars Technica, cryptographer and security expert Bruce Schneier explains just how easy it can be to manipulate public opinion through social media. I’ve been on this kick since starting this blog: the idea that more expression can actually make the electorate less well informed, not because people are necessarily dumb or lazy, but because we now take in information under a constant bombardment of aggregated impressions. Unless one really has time to research and calmly consider every story that pops up on a Facebook feed, for instance, it’s almost impossible not to be influenced by the steady flow of impressions made by images, headlines, and memes. The more these impressions jibe with our own biases, the more they solidify those prejudices, making us less receptive to ideas that might challenge our thinking. And because a walled garden like Facebook tends to expose us to items based on our group of like-minded Friends and on an algorithmic interpretation of our tastes and interests, the experience is far more circumscribed than we might notice. Schneier offers a relatively simple example of possible political manipulation:

“During the 2012 election, Facebook users had the opportunity to post an “I Voted” icon, much like the real stickers many of us get at polling places after voting. There is a documented bandwagon effect with respect to voting; you are more likely to vote if you believe your friends are voting, too. This manipulation had the effect of increasing voter turnout 0.4% nationwide. So far, so good. But now imagine if Facebook manipulated the visibility of the “I Voted” icon based on either party affiliation or some decent proxy of it: ZIP code of residence, blogs linked to, URLs liked, and so on. It didn’t, but if it did, it would have had the effect of increasing voter turnout in one direction. It would be hard to detect, and it wouldn’t even be illegal. Facebook could easily tilt a close election by selectively manipulating what posts its users see. Google might do something similar with its search results.”
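To make the arithmetic behind Schneier’s hypothetical concrete, here is a minimal simulation of my own sketching, not anything Facebook actually ran. It assumes two parties of equal size, a flat 50% baseline turnout, and the roughly 0.4 percentage-point bandwagon boost Schneier cites, applied to only one party’s members by selectively showing them the icon:

```python
import random

random.seed(1)

N = 1_000_000          # total voters; 500k nominally in each party (assumed)
BASELINE = 0.50        # assumed baseline probability that a voter turns out
BOOST = 0.004          # the 0.4 percentage-point bandwagon effect Schneier cites

def turnout(n, prob):
    """Count how many of n voters actually cast a ballot at probability prob."""
    return sum(random.random() < prob for _ in range(n))

# Party A's members are shown the "I Voted" icon; Party B's are not.
votes_a = turnout(N // 2, BASELINE + BOOST)
votes_b = turnout(N // 2, BASELINE)
margin = votes_a - votes_b

print(f"Party A votes: {votes_a}")
print(f"Party B votes: {votes_b}")
print(f"Margin created by selective visibility: {margin:+d}")
```

Under these assumptions the selective boost manufactures a margin on the order of a couple of thousand votes per million, which is invisible in aggregate turnout statistics yet more than enough to flip a genuinely close race.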

The implications of that are rather staggering. Forget lobbying and other forms of corporate meddling in the political process. A vested interest could sway an election at the local, state, or federal level without anyone really noticing, and paradoxically by using the same technologies we believe give us better insight and a stronger voice in the process. The Internet can hardly be a tool for transparency if we’re each looking through our own opaque set of lenses; combine that habit of human nature with manipulation of the data, and you get the opposite of the new enlightenment that was supposed to come with the digital age. Again from Schneier:

“The first listing in a Google search result gets a third of the clicks, and if you’re not on the first page, you might as well not exist. The result is that the Internet you see is increasingly tailored to what your profile indicates your interests are. This leads to a phenomenon that political activist Eli Pariser has called the “filter bubble”: an Internet optimized to your preferences, where you never have to encounter an opinion you don’t agree with.”

I think Pariser’s “filter bubble” accurately describes the human component so often excluded from the discussion, but I will also be presumptuous enough to examine this notion of “an opinion you don’t agree with.” Depending on how we define that phrase, I actually find the social media experience chock-full of opinions with which I disagree, so much so that I could spend an unreasonable amount of time sifting through them in search of competing ideas. After all, opinions and ideas are not quite the same thing. Competing ideas are about problem solving. Competing opinions are mostly theater, and media loves theater. Cable TV news has produced many years’ worth of passive theater: competing opinions in the service of few ideas. Social media turns this into participatory theater, adding an element of narcissism that exacerbates the divisiveness in our political process. In short, I suspect the environment is ideal for manipulators to subtly shape political outcomes without our noticing. The promise that the Internet “democratizes” information certainly sounds progressive, but the ways in which we interact with these tools as they are designed don’t necessarily foster progress; and to Schneier’s point, they don’t have to be the least bit democratic.

© 2015, David Newhoff. All rights reserved.
