Immediately after the 2016 election, many Americans discovered just how much fake news they were sharing via social media. And for about ten minutes, the term fake news had a specific and literal meaning: it referred to fabricated stories made to look like news, serving either as clickbait to generate ad revenue or as mischief to fan the flames of political discord. But then the president co-opted the term as a way to dismiss any reportage that does not jibe with his myriad, fact-challenged narratives, and fake news no longer means anything at all.
Now, the unreal is about to get a lot more real—and more dangerous. The technology known as “deepfakes” enables fairly unsophisticated users to produce video evidence of events that never happened. As highlighted in this CNN report on the subject, Senator Marco Rubio (R-FL) raises the very plausible fear that, in this next election cycle, we are going to see video clips showing elected officials and candidates doing and saying things that are entirely fake but look absolutely real. “I believe this is the next wave of attacks against America and western democracies,” Rubio stated in a hearing with the Director of National Intelligence.
And that’s not necessarily the worst effect of deepfakes, at least with regard to news and politics. As Hany Farid, a digital forensics expert interviewed in that CNN report, observes, an equal—if not worse—hazard confronts us when people inevitably cry “deepfake” on visual evidence that is indeed factual. Think about how often President Trump changes his story on just about everything and is then checked against his own prior statements captured on video. All he, or his spokes-minions, have to do is recite the incantation “deepfake,” and the record is expunged in the minds of millions. The same folly will surely occur among other segments of the electorate, but Trump provides the most obvious, stark, and timely reference in this regard.
Naturally, the anticipation that deepfake technology will be used as a weapon of information warfare leads to the assumption that the remedies will also be technological. The Pentagon has already called the potential abuse of deepfakes a threat to national security, and Farid makes the logical prediction that social media platforms like Facebook and YouTube will need to deploy deepfake detection software to warn viewers. But it also stands to reason that faking software will only improve, quite possibly to the point that it cannot be caught by counter-fake technology. And even then, can any kind of technical filtering overwhelm the psychological instinct to believe what we want to believe?
The truth about our fallibility, as filmmaker Errol Morris tells us, is that believing is seeing, and not the other way around. While images can inform, they just as often lie like crazy, not only because we are hardwired to see what we want to see in recorded images but also because, as Susan Sontag writes, “…the camera’s rendering of reality must always hide more than it discloses.”
Consider the recent story that began with a viral video clip appearing to show MAGA hat-wearing teenagers openly mocking a Native American at a rally in Washington, D.C. Then a second video capturing the same events revealed a much broader context, one that at least alters the original narrative about those kids’ behavior and possibly undermines it altogether. Either way, it is hard to imagine how the addition of deepfakes to this already-volatile environment will not make matters worse. So, what is the solution to this new form of sophisticated, weaponized information?
No doubt, there is more than one answer to that question, but, as I’ve opined in the past, I think the only hope is a cultural shift in us as information consumers and not a technological fix on the part of the platform owners. This might mean, as it did for me, abandoning social platforms as a primary source for “curated” information. But no matter how we choose to filter information, we have to stop pouncing on every photograph and video clip as evidence to support our “deep stories.” At the same time, professional journalists must stop trying to keep pace with the shrieking frenzy of social media.
For instance, I initially heard about that D.C. clash on CNN, which cited the first viral video as evidence that a mob of teenagers had indeed assaulted a Native American elder. The anchor reporting the story even editorialized with a scornful word or two about the kids’ conduct. But then CNN followed up, reporting that a second video shows a “different side of the encounter,” and hosted an interview with Nathan Phillips (the Native American elder), which also skews the story considerably from the way it was originally reported. But does CNN’s follow-up do enough to build any kind of consensus around the truth?
When I first started this blog, the trending videos of the day were coming from the cellphones of Occupy Wall Street attendees, usually depicting apparent acts of police brutality against allegedly peaceful protestors. Clearly, such incidents did occur, but at the same time, the omnipresence of cameras—especially at a movement that quickly devolved into activist tourism—helped to foster an illusion that the people’s images are the “real” truth, even to the extent that citizen journalism has eroded trust in professional journalism.
This is not to say that amateur video cannot tell us anything. Surely it can. But the inexorable deployment of deepfakes, which will probably be most effective when disguised as citizen journalism, will be all the more hazardous if we cannot trust real journalists to provide context, corroboration, or correction for what we think we’re seeing. In this regard, CNN’s own deepfakes reporting might serve as a cautionary tale to its main news desk (and every other news organization): the visual “evidence” they obtain via social media and other outside sources should be treated with the same scrutiny as mere rumor. And, as consumers, we should begin to do the same.
Photo by kiosea39