Masnick Calls CASE a Big Media Bill?

From the Techdirt Sycophants Department

In his post of May 28, Mike Masnick dutifully opened his hymnal and joined the chorus in a rendition of “How to Criticize the CASE Act,” lending his bel canto to the refrain that the new law would create a “copyright trolling court.”  As explained here and here, this is an inscrutable criticism because the Copyright Claims Board will actually be a lousy venue for copyright trolls—principally because it is a voluntary resolution option.  But if you don’t believe me about that, Mike’s further implication that CASE is a Big Media proposal and the product of “soft corruption” is so transparently illogical that you may dismiss the allegation by applying a modicum of common sense.

Wanting readers to believe he speaks truth to power, Mike employs a little misdirection with the following innuendo about two of the bill’s lead sponsors:

“We should note, that the House bill is sponsored by Rep. Hakeem Jeffries, along with Jerry Nadler. You may recall that those two Congressman were recently seen hosting a giant $5k per ticket fundraiser at the Recording Industry’s biggest party of the year, the Grammys. And, right afterwards, they suddenly introduce a bill that will help enable more copyright trolling? Welcome to the world of soft corruption.”

Yes, that’s what happened.  The CASE Act was drafted on the back of a napkin at the Grammys party. (Stand by for Mike to accuse me of a straw-man argument because he did not literally say this.)

In Reality Land, I suppose we can ignore the fact that a small-claims copyright proposal has been floating around Capitol Hill longer than Rep. Jeffries has been a Member of Congress—and, for that matter, longer than the bill’s other main sponsor, Rep. Doug Collins of Georgia.  But I guess Collins wasn’t at the Grammys and so doesn’t fit Masnick’s conspiratorial narrative?  We might also ignore the fact that CASE has solid bipartisan support, even from Silicon Valley Rep. Zoe Lofgren, and that the only effective (albeit unreasonable) opposition in the last two or so years has come from the Internet Association and the Computer and Communications Industry Association.  But what readers should not ignore is their own basic ability to reason, which ought to sound something like this …

BIG MEDIA COMPANIES DON’T GIVE A DAMN ABOUT COPYRIGHT SMALL CLAIMS.

Mike’s implication that Jeffries and Nadler partied with the RIAA and “suddenly” introduced a bill is just wrong as a matter of public record, but even if nobody wants to bother looking that up, you might then ask what possible interest major record labels or movie studios or any other Big Media companies have in creating a voluntary, small-claims, alternative-dispute-resolution provision for copyright infringement.  As Mike himself is very fond of reminding people, these are powerful corporate entities with high-octane attorneys on staff.  There is nothing in the CASE Act for these companies.

I know it’s hard to fathom, but the CASE Act is a rare example of bipartisan legislation designed for regular people—middle-class creators who have almost no affordable path to remedy unlicensed uses of their works.  And thanks in no small part to tech-evangelists like Techdirt, online infringement is both rampant and misconceived as acceptable, even by commercial users who ought to know better.

Mike should go back through all the articles and public statements he’s ever made on the theme that he “supports creators” but wants “balanced copyright” and feel obliged to eat every one of those words.  CASE is about balancing copyright.  It proposes to level the playing field for little guys who are getting clobbered by the policies and practices of the tech giants, which only makes Mike’s implication that it’s a Big Media bill all the more offensive.  I know attorneys who think CASE might not work, which is at least thoughtful criticism based on its actual mechanisms, but misrepresenting the Copyright Claims Board as a processing center for invalid damage awards is just mean-spirited considering the kind of people it is designed to help.

At this point, it would be grand if Mike and the legal pundits who write the songbooks from which he so often sings would just admit they don’t like copyright and will vigorously oppose any kind of enforcement no matter what.  That would at least be honest.  Still obnoxious, but not patently absurd.

On New Models, Journalism, and Digital Advertising

It was encouraging to see our most prominent millennial Member of Congress, Rep. Ocasio-Cortez (D-NY), recognize the link between a healthy democracy and a professional class of journalists. On Friday, presumably in response to the startling number of layoffs at BuzzFeed, @AOC tweeted this:

True to form, Mike Masnick of Techdirt replied:

It is ironically quaint at this point to see anyone, even Masnick, still using the “buggy whip” metaphor.  I mean, could the phrase “beat a dead horse” be any more appropriate?  The buggy whip was always a stupid reference because horse-drawn vehicles are, in fact, obsolete, while the content that big tech companies exploit and devalue (like journalism) is clearly still very useful and in demand.

Several years ago, the “adapt to new models” narrative was just dumb magical thinking.  But today, we have ample evidence to call this talking point a demonstrably failed proposition.  I guess it’s good that Masnick did not suggest journalists should tour, sell merch, or find new ways to connect with their fans; but still, Mike should go lie down by his dish and think about what he’s done.  

There may be new models in the sense that we enjoy new ways to access and experience content—be it news or entertainment—but there are no truly novel economic models to support the production of content in a free market.  The revenue needed to pay reporters, writers, etc. comes from consumers or it comes from advertisers.  Everything else is alchemy.  And while there are certainly many other factors external to Facebook and Google that have changed the nature of journalism and our relationship to it, the market reality for news and other content creators is that the major internet companies systematically poisoned both revenue streams.

First, the industry laid siege to the principles of copyright and promoted a faux-populist (frankly childish) message that all content must be free.  Then, they helped fulfill the promise of free by erecting giant tollbooths that siphoned off the lion’s share of the available ad revenue, which would otherwise go directly to content creators like journalists.  It’s funny that the free-content, anti-copyright crowd tends to mock as anachronistic any news organization that would presume to put up a paywall, but that’s exactly what Facebook is—a paywall.  No, we don’t pay to use it, but the content creators pay with the lost revenue they rightly earned.

It is especially funny (or sad) that Masnick would bring out a variation on the adapt message in the context of BuzzFeed, which IS a new model.  It was built as an online-only platform that would be free to consumers, and it was designed with social media in mind.  Yet, as the New York Times reports, founder Jonah Peretti believes the solution to the Facebook/Google problem may be a merger of several digital news networks into a group that can negotiate better terms for ad-revenue sharing.

But, again, notice how there’s no “new model” there.  It’s just an old model called advertising now dominated by two massive companies.  And the fact is that news media companies have adapted, although in the ever-changing landscape of platforms like Facebook, it is probably more accurate to say that they have reacted in ways that are of little value—economic or social—to the purpose of journalism.

In October of 2018, Alexis C. Madrigal and Robinson Meyer, writing for The Atlantic, reported that several news companies laid off dozens of reporters, mostly writers, to make room for video production resources in an effort to capitalize on Facebook’s new video initiative.  Citing a lawsuit alleging that Facebook misrepresented video-impression data to advertisers, the authors write…

During the period of purported wrongdoing, from July 2015 to June 2016, journalists and newsroom leaders across the country worked to cover an unprecedented presidential campaign in an information landscape that Facebook was constantly, and erratically, transforming. Even if, as Facebook argues, it did not knowingly inflate metrics, it set up new and fast-changing incentives for video that altered the online ad market as a whole. 

So, even if adapting to video had proven remunerative for news companies, this is still not a good environment for journalists, or for the public that relies on their work.  News organizations should focus on doing the best job of reporting the news, not figuring out how to navigate the opaque and erratic landscape of Facebook.  As I say, that’s not adapting, it’s reacting; and that same Atlantic article cites one example that makes this point.

There is something seriously flawed in the narrative that BuzzFeed potentially broke an important story this month about Michael Cohen’s testimony and then had to decimate its national news team last week, while, in 2016, it spent resources making a viral video featuring two employees exploding a watermelon.  That is adapting to new models? Hard news supported by an old Gallagher joke?  And it didn’t even work.  “BuzzFeed never repeated its success,” write Madrigal and Meyer. “But that didn’t stop reporters from being taken off the line of duty, while a promotional video of water being poured on permeable concrete racked up 100 million views.”

Meanwhile, as intermediaries collect the ad revenue that content creators like journalists generate, the advertisers may be getting a raw deal themselves.  Facebook’s allegedly fraudulent reporting of video-view metrics is consistent with other evidence suggesting that trouble in the digital advertising market may be far from over.  As cited in a recent post, Max Read of New York Magazine tells us that a staggering amount of the internet, at any given moment, may be fake.  Read writes …

Studies generally suggest that, year after year, less than 60 percent of web traffic is human; some years, according to some researchers, a healthy majority of it is bot. For a period of time in 2013, the Times reported this year, a full half of YouTube traffic was “bots masquerading as people,” a portion so high that employees feared an inflection point after which YouTube’s systems for detecting fraudulent traffic would begin to regard bot traffic as real and human traffic as fake.

What all that means for advertisers, of course, is that they’re not getting the impressions they’re paying for, let alone the quality impressions digital ad sellers continue to promote. If this is the case, it implies that another reckoning may be at hand between the major advertisers and Facebook and Google.  Wouldn’t it be interesting if the solution for both advertisers and news organizations is that the brands return to buying more media from the news sites themselves rather than the intermediaries?  Yeah, I know.  It’s an old model.  But it worked pretty damn well.


Robot image by frescomovie

Platforms Wrestle With the Difficult After Years of Ignoring the Easy

A new, in-depth post by Mike Masnick at Techdirt correctly describes many of the challenges inherent to platform moderation of content. It was enough of a departure from his usual “anything goes” stance that he wrote a preamble acknowledging that he was likely to piss off a few readers. And it is, admittedly, a little bit fun to watch some of the web cheerleaders stumble these days as they try to walk back the utopian view that all content online is fundamentally free speech and that removal of anything is inherently censorship.

Now that the public conversation is less comfortable with “free speech” as a universal answer—beginning with Facebook taking money for political ads made by Russian agents—Masnick et al. have little choice other than to engage in a more nuanced dialogue that at least begins with the premise that some platform responsibility is worth considering. His post highlights a few possible solutions to “bad” content, including his own proposal; and while I think he correctly describes the complex nature of content moderation by administrators, I’m not sure any of the solutions cited address the real problem. His highlights include the following:

Yair Rosenberg recommends counterprogramming, which is essentially responding to misinformation with facts at the point of user interaction. Tim Lee advocates down-ranking less credible sources that appear to be news. David French proposes that the platforms only remove libel and slander because these don’t require new legal definitions. And Masnick proposes that, for instance, Facebook abdicate its centralized control over filtering or adjusting its algorithm and instead cede that power to users to set parameters for what they want to see.

“And, yes, that might mean some awful people create filter bubbles of nonsense and hatred,” Masnick writes, “but average people could avoid those cesspools while at the same time those tasked with monitoring those kinds of idiots and their behavior could still do so.” To me, this statement implies that Masnick’s “protocols” solution is largely cosmetic, that it may result in us “average people” not seeing as much garbage, but it in no way alters the underlying model of “surveillance capitalism” and merely papers over the social disease whereby garbage continues to gain undue support and have undue influence in the mainstream. (This was discussed in my last post about the paper by Alice E. Marwick on why we share fake news.)

When YouTube and Facebook shut down the accounts of conspiracy nut Alex Jones’s Infowars last week, doubtless some cheered, others cried foul, and others warned that attempting to silence even the outrageous wack-jobs can turn them into martyrs and galvanize their cult-like followers into an even larger mob. This prediction is almost certainly correct and, thus, points to the real question I have, which is not whether Facebook should keep Jones off my feed to avoid offending me, but why so much outright garbage information currently plays such an outsized role in the social and political narrative of the United States.

I can see how some of the solutions Masnick mentions, including his own, might diminish some of the low-level sharing of junk news by “average” thoughtful people, but none of these proposals tackles the big social phenomenon itself — that the internet has been the catalyst for elevating toxic misinformation to an unprecedented level of tangible influence. The crazies who used to be conveniently segregated by geography (the proverbial idiots in every village) can now coalesce in cyberspace, finding strength in numbers, reinforcing their “deep stories” (to use Alice Marwick’s term), and taking tangible action in the streets or at the polls.

So, while the tech pundits and the internet companies look for (or pay lip service to looking for) technological responses to these social ills, the underlying reasons why we are suddenly reacting to “bad” content and putting pressure on the major platforms may not actually be addressable—either by the companies simply removing content or by public policy that attempts to parse hate speech and other highly subjective concepts.

Masnick is not wrong that the task of editing speech by the platforms is extremely difficult, which is presumably the main reason he advocates putting that control in the hands of users. As I say, I’m ambivalent about this approach because I think the end result will be the same—increased credibility for outright crazy shit via one portal or another. If there is an antidote to that problem, I strongly suspect it is not technological but human. But at least even the tech-utopians now have to acknowledge that treating all online content as sacred has had some very negative consequences, so perhaps we can now have a different discussion about content that would not be protected speech in any context.

For those of us who have advocated platform responsibility for quite some time, it is amusing, if not frustrating, to watch the industry wrestle with the truly difficult issue of moderation after years of refusing to compromise on the comparatively simpler issue of removing material that is patently illegal. For instance, weeding out material that infringes copyright, or which a court has held to be libelous or otherwise harmful to a claimant, is much easier than deciding when it’s okay to remove or demote “bad” speech. Yet the major platforms, with considerable help from opinion-makers like Masnick, have historically treated the proposed removal of unprotected or illegal content as a prelude to “rampant censorship” and the destruction of all that is beautiful about the internet.

This recent shift in posture implies two things in my view: the first is that the platforms can indeed be more cooperative in responding to illegal content without damaging the benefits of the internet; and the second is that those benefits have never been all they’re cracked up to be. Admitting to the latter would go a long way toward reframing a more rational discussion about the former.