Social (Media) Distancing

Between the headline and the Share button.

Access to credible, useful information could not be more essential than it is in the present moment. But as we are all presumably more attentive than ever to our social media feeds, we are correspondingly bombarded with more garbage content. This crisis is a perfect opportunity for trolls to ply their trade. Whether it’s idiots having a laugh, professional mischief-makers working for foreign agencies, or any number of vested interests, there is no shortage of intentionally misleading material online. But that may not be the greatest concern.

Unfortunately, the expansion of the news market—from the earliest days of cable TV to the breadth of Facebook’s role as a virtual newsstand—has forced even venerable sources to take a more slapdash approach to their reporting. In order to remain relevant (i.e. extant), organizations with distinguished pedigrees are chronically guilty of publishing stories designed to grab, terrify, and outrage more than to inform or promote thoughtful dialogue. Almost worse, even when the reportage is soundly crafted, the headlines are too often screaming at us because they are designed to provoke (mostly negative) social media interaction. And far too many of us are guilty of reacting to and/or sharing only the headlines, where the distinctions between accurate and inflammatory can be rather subtle.

For instance, while acknowledging that we are justified in distrusting Attorney General William Barr on the grounds that he shows little respect for constitutional principles, let’s compare two headlines in which Rolling Stone follows up on a story first reported by Politico:

Politico: DOJ seeks new emergency powers amid coronavirus pandemic. 

Rolling Stone: DOJ Wants to Suspend Constitutional Rights During Coronavirus Emergency

To be clear, the actual story is cause for concern, or at least awareness. Assuming the central reports are accurate, the DOJ apparently wants Congress to draft new legislation that would empower courts to detain arrested individuals indefinitely while the courts are shut down or delayed during this crisis. The problem is that this infringes rights protected by the Sixth Amendment, and, as mentioned, one can be forgiven for assuming that AG Barr might not give a damn. Both Rolling Stone and Politico do acknowledge that legislation of this nature is unlikely to find any purchase in the current House of Representatives, but it is not the story itself that prompted me to write this post.

I wanted to call attention to the psychological effect of the Rolling Stone headline. With a constant awareness that we have a president who is ignorant about the Constitution and an AG who has shown contempt for the Constitution, that headline almost immediately provokes dystopian mental montages. Before one even chooses which emoji to click, one cannot help but conjure images of smashed presses and jackbooted thugs suppressing speech as Barr takes an Orwellian Sharpie to pesky items like the establishment clause. The whole proto-fascist narrative plays out in the time it takes to share the headline with a comment like, “This is what these guys have wanted all along.” But who reads the story?

Ascribing authoritarian motives to this administration is at least half true, which is one reason why sensational headlines can be so dangerous—because we need to know who is trying to cross which lines and why. We are at a very precarious moment in history—not only because we are deeply concerned for our safety, but because American institutions have been under assault since long before we collided with the vector of Covid-19—and long before Trump and his acolytes brought their own sledgehammers to the party. 

As with the harm to journalism, the abandonment of institutions and the devaluation of expertise is a dire consequence of “democratizing” information through digital platforms. We exacerbate the problem by sharing fragments and impressions that feed anxieties that—perhaps because they are plausible—are the concerns most in need of informed skepticism. 

Now that most of us have sequestered ourselves in an effort to mitigate the spread of a literal virus, those of us fortunate enough to have the time and ability to keep up with the feeds might also do what we can to mitigate the spread of viral misinformation. To that end, it would probably help to put some distance (i.e. time) between encountering a headline and clicking Share. There is no urgency to respond to a story or to share it immediately. That urgency is an illusion fostered by the medium itself, and our responses to the stimuli principally serve the platform company’s interest in data-harvesting. 

If Facebook users, for instance, committed to not sharing anything until they’ve read it, this might help slow the rate of misinformation. Better yet, before sharing, why not take a moment to provide friends with a summary of what the story actually says, or fails to say? Doing this would emphasize how often stories are out of sync with their headlines. In a time when we have plenty of reasons to be worried and plenty of reasons to be angry, it is especially important that we worry about, and are angry about, things that are actually true. 

This seems like a very good time to step outside the whirlwind of what scholar Alice Marwick calls our deep stories and apply some critical thinking, even if this means taking a moment to look for counterfactuals in a story about some party or entity who deserves some measure (or a whole truckload) of our scorn. There has never been a time when accurate information matters more than it does right now. Social media, in many ways not always visible, is designed to frustrate that need. If we have the time, and take the time, to provide badly needed context for one another, a social platform can be a wonderful source of useful information; but absent that context, the deluge of images and headlines alone can be a steady flow of gasoline on an already smoldering fire. 

Also see:  Reducing the Spread of Misinformation Online from the Markkula Center for Applied Ethics at Santa Clara University. 


Virus art by: Kateryna_Kon

Deepfakes & The Choice to Be Deceived

Immediately after the 2016 election, many Americans discovered just how much fake news they were sharing via social media.  And for about ten minutes, the term fake news had a specific and literal meaning; it referred to fabricated stories made to look like news, and which serve either as clickbait to generate ad revenue or as mischief to fan the flames of political discord.  But then, the president co-opted the term as a way to dismiss any reportage that does not jibe with his myriad, fact-challenged narratives, and fake news no longer means anything at all. 

Now, the unreal is about to get a lot more real—and more dangerous.  The technology known as “deepfakes” enables fairly unsophisticated users to produce video evidence of events that never happened.  As highlighted in this CNN report on the subject, Senator Marco Rubio (R-FL) raises the very plausible fear that, in this next election cycle, we are going to see video clips showing elected officials and candidates doing and saying things that are entirely fake, but which look absolutely real. “I believe this is the next wave of attacks against America and western democracies,” Rubio stated in a hearing with the Director of National Intelligence.

And that’s not necessarily the worst effect of deepfakes, at least with regard to news and politics.  As Hany Farid, a digital forensics expert interviewed in that CNN report, observes, an equal—if not worse—hazard confronts us when people inevitably cry “deepfake” on visual evidence that is indeed factual.  Think about how often President Trump changes his story on just about everything and is then checked against his own prior statements captured on video.  All he, or his spokes-minions, have to do is recite the incantation “deepfake,” and the record is expunged in the minds of millions.  Not that this same folly will not occur among other segments of the electorate, but Trump provides the most obvious, stark, and timely reference in this regard.

Naturally, the anticipation that deepfake technology will be used as a weapon of information warfare leads to the assumption that the remedies will also be technological.  The Pentagon has already called the potential abuse of deepfakes a threat to national security, and Farid makes the logical prediction that social media platforms like Facebook and YouTube will need to deploy deepfake detection software to warn viewers.  But it also stands to reason that faking software will only improve, quite possibly to the extent that it cannot be detected by counter-fake technology.  And even then, can any kind of technical metering overwhelm the psychological instinct to believe what we want to believe?

The truth about our fallibility, as filmmaker Errol Morris tells us, is that believing is seeing, and not the other way around.  While images can inform, they just as often lie like crazy, not only because we are hardwired to see what we want to see in recorded images but also because, as Susan Sontag writes, “…the camera’s rendering of reality must always hide more than it discloses.”

Consider the recent story that began with a viral video clip that appeared to show MAGA hat-wearing teenagers openly mocking a Native American at a rally in Washington D.C. Then, a second video capturing the same events revealed a much broader context that at least alters the original narrative about those kids’ behavior, and possibly undermines it altogether.  Either way, it is impossible to imagine how the addition of deepfakes into this already-volatile environment will not make matters worse.  So, what is the solution to this new form of sophisticated, weaponized information?  

No doubt, there is more than one answer to that question, but, as I’ve opined in the past, I think the only hope is a cultural shift in us as information consumers and not a technological fix on the part of the platform owners.  This might mean, as it did for me, abandoning social platforms as a primary source for “curated” information.  But no matter how we choose to filter information, we have to stop pouncing on every photograph and video clip as evidence to support our “deep stories.”  At the same time, professional journalists must stop trying to keep pace with the shrieking frenzy of social media.

For instance, I initially heard about that D.C. clash on CNN, when they cited the first viral video as evidence that a mob of teenagers had indeed assaulted a Native American elder.  The anchor reporting the story even editorialized with a scornful word or two about the kids’ conduct.  But then, CNN followed up, reporting that a second video shows a “different side of the encounter,” and they hosted an interview with Nathan Phillips (the Native American), which also skews the story considerably from the way it was originally reported.  But does CNN’s follow-up do enough to build any kind of consensus around the truth?

When I first started this blog, the trending videos at that time were coming from the cellphones of Occupy Wall Street attendees, usually depicting apparent acts of police brutality against allegedly peaceful protestors.  Clearly, such incidents did occur, but at the same time, the omnipresence of cameras—especially at a movement that quickly devolved to activist tourism—helped to foster an illusion that the people’s images are the “real” truth, even to the extent that citizen journalism has eroded trust in professional journalism.  

This is not to say that amateur video cannot tell us anything.  Surely it can.  But the inexorable deployment of deepfakes, which will probably be most effective when disguised as citizen journalism, will be all the more hazardous if we cannot trust real journalists to provide context, corroboration, or correction for what we think we’re seeing.  In this regard, CNN’s own deepfakes reporting might serve as a cautionary tale to its main news desk (and every other news organization) that the visual “evidence” they obtain via social media and other outside sources should be treated with the same scrutiny they would apply to mere rumor.  And, as consumers, we should begin to do the same.


Photo by kiosea39

On New Models, Journalism, and Digital Advertising

It was encouraging to see our most prominent millennial Member of Congress, Rep. Ocasio-Cortez (D-NY), recognize the link between a healthy democracy and a professional class of journalists. On Friday, presumably in response to the startling number of layoffs at BuzzFeed, @AOC tweeted this:

True to form, Mike Masnick of Techdirt replied:

It is ironically quaint at this point to see anyone, even Masnick, still using the “buggy whip” metaphor.  I mean, could the phrase “beat a dead horse” be any more appropriate?  The buggy whip was always a stupid reference because horse-drawn vehicles are, in fact, obsolete, while the content that big tech companies exploit and devalue (like journalism) is clearly still very useful and in demand.  

Several years ago, the “adapt to new models” narrative was just dumb magical thinking.  But today, we have ample evidence to call this talking point a demonstrably failed proposition.  I guess it’s good that Masnick did not suggest journalists should tour, sell merch, or find new ways to connect with their fans; but still, Mike should go lie down by his dish and think about what he’s done.  

There may be new models in the sense that we enjoy new ways to access and experience content—be it news or entertainment—but there are no truly novel economic models to support the production of content in a free market.  The revenue needed to pay reporters, writers, etc. comes from consumers or it comes from advertisers.  Everything else is alchemy.  And while there are certainly many other factors external to Facebook and Google that have changed the nature of journalism and our relationship to it, the market reality for news and other content creators is that the major internet companies systematically poisoned both revenue streams.

First, the industry laid siege to the principles of copyright and promoted a faux-populist (frankly childish) message that all content must be free.  Then, they helped fulfill the promise of free by erecting giant tollbooths that siphoned off the lion’s share of the available ad revenue, which would otherwise go directly to content creators like journalists.  It’s funny that the free-content, anti-copyright crowd tend to mock as anachronistic any news organization that would presume to put up a paywall, but that’s exactly what Facebook is—a paywall.  No, we don’t pay to use it, but the content creators pay with the lost revenue they rightly earned.

It is especially funny (or sad) that Masnick would bring out a variation on the adapt message in the context of BuzzFeed, which IS a new model.  It was built as an online-only platform that would be free to consumers, and it was designed with social media in mind.  Yet, as the New York Times reports, founder Jonah Peretti believes the solution to the Facebook/Google problem may be a merger of several digital news networks into a group that can negotiate better terms for ad-revenue sharing.

But, again, notice how there’s no “new model” there.  It’s just an old model called advertising now dominated by two massive companies.  And the fact is that news media companies have adapted, although in the ever-changing landscape of platforms like Facebook, it is probably more accurate to say that they have reacted in ways that are of little value—economic or social—to the purpose of journalism.

In October of 2018, Alexis C. Madrigal and Robinson Meyer, writing for The Atlantic, reported that several news companies had laid off dozens of reporters, mostly writers, to make room for video production resources in an effort to capitalize on Facebook’s new video initiative.  Citing a lawsuit alleging that Facebook misrepresented video-impression data to advertisers, the authors write…

During the period of purported wrongdoing, from July 2015 to June 2016, journalists and newsroom leaders across the country worked to cover an unprecedented presidential campaign in an information landscape that Facebook was constantly, and erratically, transforming. Even if, as Facebook argues, it did not knowingly inflate metrics, it set up new and fast-changing incentives for video that altered the online ad market as a whole. 

So, even if adapting to video had proven remunerative for news companies, this is still not a good environment for journalists, or for the public that relies on their work.  News organizations should focus on doing the best job of reporting the news, not figuring out how to navigate the opaque and erratic landscape of Facebook.  As I say, that’s not adapting, it’s reacting; and that same Atlantic article cites one example that makes this point.

There is something seriously flawed in a narrative in which BuzzFeed potentially broke an important story this month about Michael Cohen’s testimony, then had to decimate its national news team last week, and yet, in 2016, spent resources making a viral video featuring two employees exploding a watermelon.  That is adapting to new models? Hard news supported by an old Gallagher joke?  And it didn’t even work.  “BuzzFeed never repeated its success,” write Madrigal and Meyer. “But that didn’t stop reporters from being taken off the line of duty, while a promotional video of water being poured on permeable concrete racked up 100 million views.”

Meanwhile, as intermediaries collect the ad revenue that content creators like journalists generate, the advertisers themselves may be getting a raw deal.  Facebook’s allegedly fraudulent reporting of video-view metrics is consistent with other evidence suggesting that trouble in the digital advertising market may be far from over.  As cited in a recent post, Max Read of New York Magazine tells us that a staggering amount of the internet, at any given moment, may be fake.  Read writes …

Studies generally suggest that, year after year, less than 60 percent of web traffic is human; some years, according to some researchers, a healthy majority of it is bot. For a period of time in 2013, the Times reported this year, a full half of YouTube traffic was “bots masquerading as people,” a portion so high that employees feared an inflection point after which YouTube’s systems for detecting fraudulent traffic would begin to regard bot traffic as real and human traffic as fake.

What all that means for advertisers, of course, is that they’re not getting the impressions they’re paying for, let alone the quality impressions digital ad sellers continue to promote. If this is the case, another reckoning may be at hand between the major advertisers and Facebook and Google.  Wouldn’t it be interesting if the solution for both advertisers and news organizations is for the brands to return to buying more media from the news sites themselves rather than from the intermediaries?  Yeah, I know.  It’s an old model.  But it worked pretty damn well.


Robot image by frescomovie