Is Google Buying Policy Through Academia?
Image by nicholashan
This week, the Wall Street Journal reports that Google has been funding academic research papers worldwide and, unsurprisingly, the conclusions in these papers tend to support Google’s policy interests. This is familiar territory of course. Most obviously, we remember that Big Tobacco funded all manner of “research” that produced alternative facts about the health hazards of smoking. This is not to say that every author or study implicated in this story represents poor scholarship, or a quid-pro-quo scenario; but the sheer volume alone likely has a considerable effect on policy.
Perhaps the most significant question to consider is what happens when the industry that bankrolls self-interested academia happens to be in the information business. Because not only does Google have the financial resources to fund millions of dollars worth of studies, but they also own the most pervasive platform we use to find information. So, how likely is it that a citizen’s Google search will yield results linking to news articles that cite Google-funded studies supporting Google’s policy views? Maybe a little too likely.
In fact, the WSJ cites a report conducted by the Campaign for Accountability, which identified over 300 papers, published between 2005 and 2017, on the issues of anti-trust, intellectual property, and general regulatory policy. “The 329 Google-funded articles that we identified were cited nearly 6,000 times in more than 4,700 unique articles,” according to the report summary. “Overall, our analysis suggests that Google is using its sponsorship of academic research, not to advance knowledge and understanding, but as an extension of its public relations and influence machine.”
To be sure, the report pulls no punches, calling Google’s capacity to influence policy through academia “pernicious.” “The number of Google-funded studies tended to spike during moments when its business model came under threat from regulators—or when the company had opportunities to push for regulations on its competitors,” write the authors. The CFA report further states that the majority of the views in these papers are consistent with Google’s policy interests and that two-thirds of the studies did not disclose that Google had been a source, or the source, of funding.
Of the total number of papers listed, roughly one third (114) address the subject of copyright; and several of the authors—Ammori, Band, Springman, Urban, Lemley, Heald—have been consistently cited by anti-copyright bloggers, organizations, and the mainstream press, in articles promoting the general message that copyright is outdated, broken, draconian, or just plain wrong for the digital age. Still, it would not be fair to many of the authors of these papers to conclude from this report that every name on it is a so-called Google shill. In fact, that’s exactly the kind of ad hominem generalization employed by the anti-copyright crowd all the time, and it’s not a reasonable response.
This point brings to mind one example of how effective this academic funding can be, and I refer back to March of 2016 and the request for public comments to the USCO regarding possible revision to Section 512 of the DMCA. In the final weeks leading up to the April 1 deadline, stories broke in both the blogosphere and the mainstream press with headlines announcing that “30% of all DMCA takedown requests are questionable.”
The source of those headlines was a study (listed among CFA’s 329) from Berkeley and Columbia, co-authored by the above-named Jennifer Urban. In fairness to Urban (who is very nice) and her colleagues, that study did not actually say what the careless reporting claimed it said. As discussed in detail in this post, the study did not support anything close to justifying the provocative 30% headlines that had gone viral. In fact, readers can see that when Urban herself wrote a few very cordial comments in response to that post, she did not really take issue with the overall thesis that the press and bloggers had misrepresented her study’s conclusions.
As a result of that study, though, reporters wrote stories based on the following logic: the big rights holders send tens of millions of automated takedown notices; this new study says 30% of notices are questionable; the major rights holders must be sending millions of questionable notices; therefore, the major rights holders must be stifling a lot of speech. Except the report itself doesn’t support that narrative at all, and one cannot accuse its authors of making such a claim—because they absolutely did not. In fact, notices sent by major rights holders were not even part of that study’s data set.
But this one example of one study did produce some very effective—and inaccurate—headlines that were probably rather helpful to Google’s interests leading up to the USCO’s hearings on Section 512. Odds are, most tech and copyright reporters didn’t read the whole report (certainly didn’t try to unpack its findings); so by the time their misleading conclusions became tweets and other blurbs, a biased narrative about the DMCA was being repeated that even the report itself, in some areas, contradicts.
I chose this example specifically to illustrate that authors of a report, even while industry-funded, may still apply reasonable academic rigor and simultaneously produce results that can be very useful to the funding industry—especially when conclusions are taken out of context. The extent to which the authors of a particular report can be blamed for finding the results their funding industry is looking for can only be considered on a case-by-case basis. Usually, the scholarship, or lack thereof, speaks for itself; but this demands that the people who do the reporting about the reports actually read them and try to understand them. And in the digital market, ain’t nobody got time for that.
So, ultimately, the smoking gun in this particular story may be one of volume. The company or industry that can afford to fund a lot of academic studies will invariably yield the most results favorable to its interests. Some will be, as the CFA says, “…little more than thinly veiled opinion articles dressed-up as academic papers, outlining the beliefs of an author on Google’s payroll with little or no supporting evidence.” But even if all 114 papers mentioned were thoughtfully critical of specific areas of copyright law, and then supported 2,000 half-baked articles that in turn generated 20 million tweets, it stands to reason that, in this grand game of Telephone we’re playing, the general public winds up getting the gist of exactly what Google wants them to believe.
© 2017, David Newhoff. All rights reserved.