If it feels just a little bit like the world is careening toward the edge of a cliff with a madman at the wheel, maybe it’s because that’s what’s happening. Except the madman isn’t just some garden-variety berserker. It’s not President Trump with his incoherent tweets and unabashed lies. In fact, according to this in-depth story by Carole Cadwalladr, writing for The Guardian, those antics are calculated theater being fed to the press as just one component of a much larger, more insidious process by which money and computing power are undermining democracy itself.
Unfortunately, it turns out that the madman at the wheel is us. All of us. Voluntarily feeding the database via social media, teaching the system exactly how to tell us what to think. “There are two things, potentially, going on simultaneously: the manipulation of information on a mass level, and the manipulation of information at a very individual level. Both based on the latest understandings in science about how people work, and enabled by technological platforms built to bring us together,” Cadwalladr writes.
Focusing primarily on billionaire computer scientist Robert Mercer, his relationship to Stephen Bannon, and the development of the Breitbart network, Cadwalladr details the process by which wealthy, mostly right-wing individuals are using computing power not to understand the electorate (that would be old-school) but to manipulate the electorate into reshaping the world as these individuals believe it ought to be shaped. Cadwalladr’s is one of several articles to appear in recent weeks about the role of Big Data in global politics, and just one of several to mention the company Cambridge Analytica. She writes…
“On its website, Cambridge Analytica makes the astonishing boast that it has psychological profiles based on 5,000 separate pieces of data on 220 million American voters – its USP is to use this data to understand people’s deepest emotions and then target them accordingly. The system, according to Albright, amounted to a ‘propaganda machine’.”
Cadwalladr cites several scientists who voice deep concern about the capacity of AI to learn about us through interactions as apparently innocuous as Facebook Likes. Cambridge Analytica claims that, with the input of just 300 Likes, the computer can understand us “better than we understand ourselves.” And because the computer never stops learning, never stops adapting, political operatives have an ongoing opportunity to steer the electorate toward targeted conclusions about issues or candidates.
The irony, of course, is that this is all based on the illusion that digital technology provides us with more choices to get “better” information, which is why discrediting the “mainstream media” is a key component of the strategy. But lest we believe press-bashing is exclusively a Trumpian phenomenon of the moment, we should not forget that the “mainstream” has previously been dismissed by the left and by the libertarian-leaning techno-utopians of Silicon Valley as well—all singing from the same hymnal: that the internet is the greatest tool for democracy ever invented.
Regular readers know that I am more than a little cynical about this generalization, and that this is one reason I remain critical of “digital rights” groups fearful of any form of regulation in cyberspace—particularly regulation meant to protect or restore the rights of individuals. It’s not that I find fault with the premise that the goals of openness and free speech should be protected online so much as I balk at the assumption that an absolutist approach (i.e., that law has no place on the internet) can only have salubrious results for democratic values. This sensibility is, I believe, a holdover from Silicon Valley’s more hippie-like early days, a vibe that no longer bears any relationship to the advertising and data-mining systems the major companies have since built.
But now that we’ve mostly “left the internet alone,” allowing these companies to collect and sell information about us without any kind of rules—allowing these same companies to monetize works of authorship and social interactions without restraint—all in the name of “freedom,” we are apparently teaching the machine to effectively democratize democracy out of existence. Cadwalladr quotes Jonathan Rust of the Cambridge University Psychometric Centre thus:
“The danger of not having regulation around the sort of data you can get from Facebook and elsewhere is clear. With this, a computer can actually do psychology, it can predict and potentially control human behaviour. It’s what the scientologists try to do but much more powerful. It’s how you brainwash someone. It’s incredibly dangerous.”
The manipulation skews right. For now.
Take a subject like immigration policy and the fact that many of us who live or work in diversely populated urban centers (traditionally liberal) can’t understand why Americans who live in more homogeneous, rural communities (traditionally conservative) are so concerned that Muslim refugees pose a substantial threat to security despite a lack of evidence to support this fear.
It’s not that citizens are uninformed; it’s that they are purposely misinformed by a very sophisticated network of well-crafted, smartly written articles that contain elements of truth glued together by rhetorical paranoia. This is in fact the structure of the average Breitbart article on immigration, and these articles become the foundation of a million ways to automate the spread and repetition of an anti-immigrant message until it morphs in the minds of readers from emotional xenophobia into what is perceived as rational security policy. This is why labeling support for an executive-order immigration ban as “racist” sounds absurd to many, and why their response will be, “It’s not racist. It’s just common sense. Look at the mountain of evidence! The mountain of evidence the MSM isn’t reporting!” (Never mind that the mountain is a hologram.)
This phenomenon is hardly restricted to the political right, though Cadwalladr observes that the money and institutions behind this level of big-data manipulation currently serve a largely right-leaning agenda. My own concern has always been that these manipulation tools, neatly disguised as “democratized” information, can be wielded by any entity with the resources. If the pendulum were to swing from Breitbart to Google or to some left-leaning billionaire’s project, that still wouldn’t be democracy.
The ability to create the appearance of consensus through rapid replication, a network of “alternative news” sites, and bot-swarms, all emanating from a single source, is exactly the concern that launched this blog in 2012. The illusion that hundreds of articles or millions of people all “agree” on a given topic can be conjured by a relatively small and nimble group of people with the money and computing power to do the job. I alluded to this concern in this post in 2012, suggesting that Citizens United was child’s play compared to the capacity for manipulation of the political process on a one-to-one basis via social media.
In 2011/12, I believed this sophisticated kind of disinformation was exactly what was happening with a bill like SOPA. Now, it’s the same scenario on a much larger scale, influencing the governments of the world. The implication is the destruction of democracy by means of the very tools that were supposed to improve democracy. And the irony is that it’s all voluntary. Every day, we get up and feed the beast. We teach the machine how to manipulate us. So, is the only solution to abandon Facebook et al — to stop feeding ourselves as data to the machine? Can we even afford to unplug, given that these platforms are now almost indispensable for access to information and to substantive interaction with people? A Columbia Journalism Review study on the effects of the Breitbart media ecosystem offers these words of wisdom and hope:
“Rebuilding a basis on which Americans can form a shared belief about what is going on is a precondition of democracy, and the most important task confronting the press going forward. Our data strongly suggest that most Americans, including those who access news through social networks, continue to pay attention to traditional media, following professional journalistic practices, and cross-reference what they read on partisan sites with what they read on mass media sites.”
So, maybe we don’t have to break the internet so much as break a lot of really bad habits the internet keeps trying to teach us.
Also read: The Rise of Weaponized AI here.