For years, producers of creative content, from individual artists to mass-media corporations, have tried to engage with internet companies (mainly Google) in an effort to stop the facilitation of rampant, unlicensed access to their material. Whether the complaint is millions of unlicensed works on YouTube or search results leading users to pirate sites, copyright owners are all too familiar with the dual response: "We can't" and "We shouldn't." This is shorthand for the internet industry's standard claim that it can't effectively police its platforms, and that even if it could, it shouldn't, because freedom.
But as reported in January 2017, consumer-goods giant Procter & Gamble issued a warning on behalf of global advertisers who spend a combined $70+ billion on digital, announcing that they were no longer willing to accept "can't" and "shouldn't" as answers to their two key complaints: a lack of transparency (i.e., independent auditing) in measuring the quality and effectiveness of digital advertising, and an inability to prevent brands from appearing alongside intolerable content. So, terrorist recruiting videos on YouTube brought to you by Colgate just aren't working for the brand managers anymore. Yet, strangely, the internet companies and their bevy of think-tankers have not told these advertisers to stop hating the future and change their business models. (Though I'd like to watch if they did.)
Fast-forward a year, and the Wall Street Journal this week reports that Unilever is threatening to substantially reduce its ad buys on Facebook and YouTube if the companies do not more effectively weed out fake news and other divisive content like racism, sexism, and violence. What's striking about the article is its concluding follow-up: P&G's chief brand officer Mark Pritchard (the same executive who in 2017 charged the internet platforms to clean up their act) notes that "progress has been impressive" and that ninety percent of his demands have been met.
It will come as no surprise to the creative community that, when revenue is at stake, the major internet companies suddenly discover that it is both technically possible and ideologically conceivable to police their platforms a bit more aggressively than they have to date. Artists and creators should follow these developments because the political, social, and financial pressure being exerted on the platform providers can make the companies more vulnerable to potential liability for infringing creative works, and this might make them a bit more cooperative about solving the "unsolvable" issue of mass infringement. By demonstrating a capacity for control (because now they have to), the companies underscore what should be obvious to most people: that the tradition of shrugging off the interests of rights holders has been a business decision. Period.
No doubt, many "digital rights" activists will prophesy the end of days for democracy in response to this trend toward platform responsibility; but they can take heart knowing that democracy hasn't exactly thrived under the principles applied thus far. The assumption that all online interactions are protected speech, and that more speech is the only antidote to harmful speech, is still proving to be a destructive fallacy every second of every day. And it turns out the advertisers, whose money pays for these platforms of democracy, don't accept that the answer to hate speech and fake news is to just let it ride until our better angels eventually prevail. It turns out this is both bad for society and bad for business. It turns out money talks in Silicon Valley. And if that's the only way to get internet companies to behave like citizens instead of bullies, then whatever works.