EFF Manufacturing Scandal in the Service of Google


On October 25, four days after the unprecedented removal of the Register of Copyrights from her office, the Electronic Frontier Foundation released a post on its Deeplinks Blog asserting rather stridently that the Copyright Office never would have reviewed the FCC “set-top-box” proposal if not for the urging of the MPAA.  I think we can now say that there is officially no line EFF will not cross, no lie it will not tell, in the service of Google’s interests over the public interest, which the organization claims to serve.  The thesis of the blog post boils down to the following syllogism:

1. We have argued that the FCC “set-top-box” proposal does not implicate copyright law.

2.  Because we are obviously correct in this view, the Copyright Office should have agreed with us.

3. Therefore, the only explanation for the Copyright Office disagreeing with us is that they must have been pressured by the MPAA.

And so, the EFF went looking for proof of the motion picture industry’s clandestine influence on the Copyright Office via a FOIA request, and they released supporting documents with their blog post that they know most people won’t bother to read.  If anyone does read the super secret emails betwixt FCC, MPAA, the Copyright Office, and the USPTO, they will discover (hold your breath) requests for meetings to discuss issues of concern with regard to the FCC proposal!  Ah ha! Meetings!

I know this may be a shocker, but there is nothing illegal or improper about any stakeholder, operating above board, requesting meetings to discuss concerns they may have with a proposal by any federal agency.  And emails to arrange meetings—I mean literally communications as banal as, “Hey, does next Tuesday work for you?”—are not subject to any rules regarding disclosure because they’re not substantive.  Nowhere in the “exposed” communications presented by the EFF is there any evidence of motion picture representatives drawing conclusions for Register Pallante that she would not have come to on her own with regard to the FCC “set-top-box” proposal.  The FCC proposal, like any other federal agency proposal, allows for comments from multiple stakeholders that become part of the public record and which members of any other agency may read and consider.  It is also neither illegal nor improper for a stakeholder to send an email to a member of an agency to say, “This is our statement for your consideration.”

The broader point is that one does not need to be an expert at the level of Maria Pallante or MPAA’s attorneys to consider that any proposal which fundamentally alters a licensing paradigm between producers and distributors—as the FCC proposal clearly does—is going to have at least some copyright implications.  Had the EFF made a more nuanced argument, that would be one thing, but to assert that the Copyright Office simply never would have entertained a copyright angle without pressure from the MPAA is just an outright lie.  What the EFF doesn’t like is that their position on the FCC proposal is wrong, and so they’ve tried to manufacture a scandal on the heels of Pallante’s unprecedented and bizarrely orchestrated removal from office.  Why?  Presumably, because they know that at least a segment of the public will find the Hollywood-intrigue narrative easier to follow and far more dramatic than the more complex, but less interesting, truth.

On the other hand …

If a hint of scandal is what the reader wants, consider the October 25th notice from the Campaign for Accountability, which asked FCC Counsel to investigate emails between FCC Chairman Tom Wheeler and Google VP Vint Cerf.  What’s the problem? Unlike innocuous emails requesting meetings, the FCC’s rules require disclosure of ex parte communications that amount to substantive comments on policy.  In its letter to counsel, the CFA cites an April 8th email from Mr. Cerf to Chairman Wheeler expressing his substantive views with regard to the commission’s April 1 notice on protecting consumer privacy within the ambit of the “set-top-box” proposal. In case you’re not following the bouncing ball, Google likes to harvest user data and doesn’t have a great track record on the privacy thing.

See, what happened there is that a Google executive expressed a relevant, policy-focused comment via email pertaining to the FCC proposal, and the FCC was supposed to disclose the comment but didn’t.  At least that’s CFA’s view.  Whether or not there are more communications of this nature remains to be seen, but against the backdrop of Google’s now well-documented influence throughout the current administration, it’s hard to imagine that anyone still believes the narrative that “Hollywood” is pulling the strings with regard to the FCC proposal.

Perhaps more significant is that while the EFF pitches a non-scandal in an effort to erase the copyright implications of the FCC proposal, they seem remarkably unconcerned about the privacy implications, which one would think should take precedence for an organization claiming to defend consumers in the digital market.  Why?  Assume for the moment that the producers are wrong about the proposal undermining the investment model that creates television shows. That would still leave the privacy concerns with regard to what kind of data Google would be allowed to harvest from the magic TV box it wants to put in your home.  The EFF’s overplayed hand on the copyright issues, combined with their silence on the privacy issues related to the FCC proposal, suggests that this organization largely cares about one thing:  whatever Google wants.

With Register Pallante Out, What Now?

Photo by apparen.

Creators, copyright advocates, and many policymakers were taken aback by last Friday’s announcement that the new Librarian of Congress, Dr. Carla Hayden, removed Maria Pallante from the position of Register of Copyrights. The decision was officially described as a reassignment for Pallante to the role of senior advisor for digital strategy, which Pallante has declined.  Associate Register Karyn Temple Claggett stepped in as acting Register, and the LOC is conducting what it calls a “nationwide search” for a new candidate.

Pallante’s ouster comes barely six weeks after Hayden’s official start at the LOC, and rights holders and artists are justifiably anxious about both the decision itself and the suddenness with which the change took effect.  Meanwhile, anti-copyright voices, who view Dr. Hayden as a fellow ideologue—many librarians are critical of copyright—were quick to begin writing the narrative in the twitterverse that their new champion is “cleaning house” by removing a Register who has been the target of organized criticism almost since the day she took office in 2011.

Organizations like the EFF and Public Knowledge, along with several anti-copyright bloggers, have at various times accused Pallante of favoring the interests of large, corporate rights holders. This accusation is contrary to Pallante’s actual record and background, but that’s of little consequence in the grand scheme because the ongoing, negative PR campaign was really aimed at copyright law itself. Any Register who believes in the importance of copyright would have been targeted in the same way as Pallante; she just happened to become the face of copyright during a period when the critics amped up their industry-funded rhetoric to a new level.

One of the more concerning aspects of this unprecedented move is that those who are well-versed in Pallante’s background and policy recommendations know that she emphasized the interests of individual authors and the intended social benefits of copyright over the major rights holders whom she is accused of indulging. I cited one example of this in my post about the lawsuit filed by EFF arguing the unconstitutionality of DMCA Section 1201. That the EFF chose to litigate an 18-year-old law at the same time that Pallante herself was advocating policy changes which pragmatically addressed some of EFF’s exact concerns is at odds with the portrait of her as an industry-biased “maximalist.”  That the EFF also happened to file its 1201 suit at about the right moment for it to become one of Dr. Hayden’s first headaches is a curiosity as well.

I think there is little doubt that the organizations and individuals crowing on social media about the defenestration of Pallante see it as a step toward dramatically limiting, or even abolishing, copyright law.  Claims like Public Knowledge’s Friday afternoon tweet that this is “A great opportunity to bring balance back to the Office’s policy work” are pure spin. Pallante was balanced, often siding with interests other than rights holders; but balance is not the honest goal of organizations like Public Knowledge, which was directly responsible for last month’s hatchet job against the CO when it issued a “report” composed of exaggerations and lies of omission.

Hayden’s Thinking and the Future of the Copyright Office

Whether Pallante’s dismissal represents an ideological split with Hayden on copyright or internal politics or any number of factors that may never be publicly known, the move may indicate that Dr. Hayden envisions taking a more hands-on role in copyright policy than past Librarians have done. If so, this whiplash firing of the Register ought to raise at least a few congressional eyebrows in the context of the now three-year-long discussion on the prospect of separating the Copyright Office from the LOC.  Pallante’s unexpected and unprecedented removal could serve to emphasize for lawmakers the rationale for that separation. In essence, although the new Librarian has acted within the scope of her authority, this decision and its timing may reflect a significant lack of deference for the historic basis of that authority.

As explained in a previous post, the Librarian of Congress is not, and has never truly been, the nation’s copyright expert. The formation of the Copyright Office within the Library in 1897 came about as a largely functional relationship between the registration of works and the need to grow the collection.  It wasn’t until the turn of the 20th century, particularly with the passage of the 1909 Act, that copyright law started to become as complexly woven into the American economy as it is today.  As copyright law evolved along with the expansion of technology, the Register increasingly served as a national advisor on policy—well beyond oversight of the original registration function.

While the copyright registration and deposit process remains a major source of material for the Library’s collection, the advent of digitization creates a natural tension between a Librarian’s ambition to make the collection accessible online and the Register’s responsibility to see that rights holders who deposit copies with the LOC remain protected. For instance, if the LOC were to make full-length, high-quality works available online for free, this would clearly harm the licensing opportunities for those works; and this, in turn, would dissuade authors from registration and deposit. In this context, it is noteworthy that Pallante was “offered” the “digital strategy” job.  As one knowledgeable colleague, speaking on background, suggested, “This is a firing in disguise, offering Pallante the one job she would be inclined to tell the Librarian she cannot do.”

So, although there remains a practical relationship between the registration process and the Library collection, there is no hard-and-fast reason, especially from a policy perspective, why the Register must continue to operate under the purview of the Library. In fairness, Dr. Hayden may not be the anti-copyright ideologue that folks at EFF, PK, et al assume her to be, but the timing and tone of Pallante’s removal have unquestionably been viewed as a slap in the face to creators.  If indeed that slap is a signal that Hayden considers herself the new “copyright sheriff in town,” that could easily trigger both rights holders and members of the Judiciary Committee to decide that indeed it is time for these two very different authorities to operate independent of one another.  After all, balance is what we get when opposing but equal forces are required to work together.

AI & Ethical Determinism

Photo by moipokupkigmailcom.

As artificial intelligence (AI) moves from the realm of science fiction to everyday science, leading technologists and scientists are asking themselves questions about how to write an ethical code. The most widely reported ethical dilemma involves the self-driving automobile and what’s known as the Trolley Problem.  This hypothetical challenge asks whether you would divert a speeding train from one track to another, knowing that doing so will kill one person but save the lives of several others.

This problem is transposed to the world of driverless vehicles and how a car’s AI should make split-second, life-and-death decisions. It’s a challenge that raises not just technological or ethical questions but psychological ones as well, because the ways in which we cope with tragedy—or the anticipation of potential tragedy—in modern, developed society do not generally encompass the kind of determinism implied by AI.

Many cope with tragic accidents through faith—a belief that there is a deity with a plan, even if that plan cannot be known.  Those of us who are not religious cope without faith in a plan—by making peace with the fact that chaos and human fallibility can produce terrible outcomes.  In either case, there is a degree of comfort in the uncertainty—not comfort that lessens the pain of loss per se, but comfort that enables us to rationalize terrible events or to step outside the door without living in abject fear.  The uncertainty, coupled with probability and a measure of control, allows us to confidently get into our cars and drive around without expecting to be wiped out.  Remove one of those factors, for instance the measure of control, and we likely have the explanation for why more people are anxious about flying than riding in cars, despite statistics proving that the opposite should be true.

In this regard, when the unforeseen event occurs—brakes fail, an obstacle suddenly appears in the road, etc.—the outcome of the split-second reaction of a human driver is arguably more the result of chance than of any kind of rational decision-making. Even in relatively common examples, drivers are told that they should almost never swerve to avoid wildlife running across the road; better to hit the squirrel than to risk hitting an oncoming car or slamming into a tree and killing yourself. But that instinct to avoid is strong, and it gets stronger when the animal that suddenly appears is a stray cat or dog.  The point is that whatever the outcome—whether a flattened squirrel or several dead motorists—the whole series of events, including the driver’s reaction, can be chalked up to a degree of uncertainty, and in that uncertainty lie many of our psychological coping mechanisms.

But what happens when humans pre-determine the outcome of certain ethical dilemmas and encode these into machines that we then grant authority to make these decisions?  In the simple case cited above, the squirrel is killed and all humans live, but what about a split-second decision that will result in the death of a passenger versus the deaths of a mother and baby crossing the road?  In the summer of 2016, MIT researchers, grappling with exactly these types of questions, launched a website called Moral Machine, which asks users to make a set of lose-lose decisions in various hypothetical traffic scenarios in which some parties face certain death.  Anyone can take the “test,” and the site will reveal how you “score” relative to the ethical decisions made by others.

Of course, the Moral Machine tests present the user with information that a car’s AI would, in principle, never know—like the fact that some of the potential victims are criminals.  Age, however, is a factor in some of the scenarios, and that condition strikes me as more credible—a vehicle might know it’s carrying a septuagenarian couple and may, therefore, decide that it is more ethical to kill them rather than a young family. The senior couple might even make the same selfless decision themselves, but such calculations don’t really occur when a human operator is reacting to an emergency faster than he can think.

What’s eerie about some of these Moral Machine tests is the implication that the data set used to enable an AI to make ethical decisions could theoretically include more than mere numbers (i.e. that the machine would simply default to save more lives than it takes).  Age could be a factor, but what about net worth or relative “value” to society?  Does the AI wipe out a whole busload of kids to save one physicist or surgeon or even a Kardashian? What about race or sexual orientation? This then begs the question of whether or not these pre-determined decisions are public knowledge or trade secrets, both of which present huge and unprecedented moral dilemmas.
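That “mere numbers” default, choosing whichever outcome kills the fewest people, can be sketched in a few lines of Python. Everything here, from the function name to the scenario labels, is invented purely for illustration; a real system would be vastly more complex.

```python
def choose_outcome(options):
    """Pick the option with the fewest casualties: the naive
    utilitarian default of saving more lives than it takes.

    Each option is a list of the parties who would die if the
    vehicle takes that action.
    """
    return min(options, key=len)

# Swerving kills the passenger; staying the course kills two pedestrians.
swerve = ["passenger"]
stay_course = ["pedestrian_1", "pedestrian_2"]

print(choose_outcome([swerve, stay_course]))  # ['passenger']
```

The eeriness enters precisely when the `key` function stops counting bodies and starts weighting them by age, net worth, or anything else someone chooses to encode.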

In this regard, an article that appeared just a few days ago tells us that the next generation of self-driving cars from Mercedes-Benz will be programmed to save the passengers regardless of circumstances.  On the one hand, this is an algorithmic variation on the theme that the privileged class enjoys a priority lane on the right to life ahead of everyone else; but there is also something to be said for Mercedes choosing not to become trapped in the moral miasma of programming an ethical AI. Perhaps if all vehicles are required by law to default to a single directive like try to save the passengers, then this would approximate the instinctive but fallible reactions of human drivers and still allow uncertainty to play a role, thus absolving engineers of the responsibility to “play God.”  At least until the AI becomes self-aware and begins to make such decisions on its own.
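A single directive of this kind amounts to a fixed rule that ignores casualty counts altogether. Here is a minimal, purely illustrative sketch; the function and scenario names are hypothetical, and nothing here reflects Mercedes’ actual software.

```python
def single_directive_policy(scenario):
    """Always choose the action whose casualties exclude the
    passengers, regardless of how many people are outside the car.

    `scenario` maps each possible action to the list of parties
    who would die if the vehicle takes it.
    """
    for action, casualties in scenario.items():
        if "passenger" not in casualties:
            return action
    # If every action endangers the passengers, fall back to
    # minimizing total casualties.
    return min(scenario, key=lambda action: len(scenario[action]))

scenario = {
    "swerve": ["passenger"],
    "stay_course": ["pedestrian_1", "pedestrian_2"],
}
print(single_directive_policy(scenario))  # stay_course
```

Note how the rule needs no ethical weighting at all, which is exactly the moral miasma it sidesteps and exactly the priority lane it creates.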

After all, it’s hard not to notice the dystopian implications of a man-made, ethical determinism when we remove the element of chance and cede authority to carry out life-and-death decisions to machines.  When we remove the psychological buffer provided by chance, fate, God’s will, etc., then tragic events naturally beg explanation and, therefore, an instinct to assign blame. This of course raises the companion question about those who would inevitably try to game the system, people who would “jailbreak” their vehicles to override any code that might not favor them as the chosen survivors of an accident.  Suddenly, this places the civil libertarians who complain about the “right” to tinker with technological property on the wrong side of that argument insofar as the theoretical “greater good” is concerned.

The ethical AI question also becomes another factor leading to the conclusion that autonomous vehicles might not be private property for very long. Rationally, to have an AI shuttling all of us hither and yon, the system would have to be level in order for it to be remotely ethical, suggesting that the private property models from Mercedes or Tesla or Ford are merely stepping stones toward a public system, or a highly regulated one. But these outcomes are not what manufacturers or leading data companies investing in this future are going to have in mind.

This is one reason I agree with President Obama, when he said in a recent Wired interview with Joi Ito conducted by Scott Dadich, that it is essential that public funding play a role in the development of AI.  “…part of what we’re gonna have to understand is that if we want the values of a diverse community represented in these breakthrough technologies, then government funding has to be a part of it. And if government is not part of financing it, then all these issues that Joi has raised about the values embedded in these technologies end up being potentially lost or at least not properly debated,” said Obama.

Of course, the president is referring to developing an ethical AI beyond just vehicles, and his point is well taken.  The sci-fi future of AI is already here.  But the questions as to what values drive the decision-making are just barely being asked in the public debate. Meanwhile, the corporate rhetoric of “disruption” has already absolved many sins in the areas of privacy and intellectual property infringement.  Or as Sam Kriss put it so well in his excellent article for The Atlantic:  “Silicon Valley works by solving problems that hadn’t heretofore existed; its culture is pathologically fixated on the notion of ‘disruption.’ Tech products no longer feel like something offered to the public, but something imposed: The great visionary looks at the way everyone is doing something, and decides, single-handedly, to change it.”

NPR reported in late September that computer scientists from the major tech giants—IBM, Apple, Amazon, Microsoft, Facebook, and Google—formed The Partnership on Artificial Intelligence to Benefit People and Society.  One of the goals of the partnership is to develop standards for best practices in the field of AI, including tackling ethical questions like the Trolley Problem for vehicles.  But it is essential that the public interest be represented in the development of these technologies, and as much as I may fault the Obama administration for being too Googley in various policy areas, I also credit the president himself for apparently thinking deeply about questions like how we develop an ethical AI. At the present rate of development and investment, let’s hope the outgoing president is not the last public representative to keep this conversation in the foreground.