Yunghi Kim puts real skin in the game.

In a Thanksgiving announcement, veteran photojournalist Yunghi Kim created ten one-time grants of $1,000 each, to be awarded to ten photographers. The $10,000 kitty fueling this generous gift came from awards paid to Kim by infringers of her copyrights, and she states her reasons for creating the grants thus:

“I am doing this to emphasize the importance of copyright registration of your work and as a way for me to give back to the profession of photojournalism, an industry that I love and I am proud to be a member of for more than 32 years.”

It’s hard to miss the significance of Ms. Kim’s use of this money in contrast to Google’s recent announcement that it would pay legal fees IF certain YouTube video creators happen to find themselves in lawsuits resulting from DMCA disputes. At best, that announcement is an empty PR gesture; at worst, it’s a 400-billion-dollar gorilla potentially menacing independent rights holders. Meanwhile, ten grand is real money for a photographer–real money for most people–and the fact that Kim decided to do this with it, and to emphasize the need for creators to protect their work, is an encouraging start to the week.

See the full story at Petapixel here.


Red Flags, False Flags, & The Virtues of Ignorance

A couple of weeks ago, I wrote this post about an amicus brief filed on behalf of several Internet companies seeking a new ruling in a 2004 case with the apparent purpose of changing the legal standard applied to the “Dancing Baby” case. This is in regard to the burden on a rights holder to “consider fair use” before issuing a DMCA takedown request. I won’t rehash that post, but a colleague of mine suggested that the amicus brief itself contained a rationale so heavily salted with hypocrisy that it deserves its own attention. And he’s right. The amici making the case for an “objective” rather than a “subjective” standard in considering fair use sum up their concerns (as cited in the other post) thus:

“… the more misinformed or unreasonable the copyright owner, the broader the immunity he would have from liability under Section 512(f). This reading of 512(f) would effectively encourage copyright owners to remain ignorant about the limitations on their exclusive rights under the Copyright Act, see 17 U.S.C. §§ 107–123, because the less they know, the more leeway they would have to send takedown notices.”

Consider this rationale for a moment because it should sound very familiar. The less the rights holder knows, the greater his/her immunity from liability; it must, therefore, be in the rights holder’s interest to “remain ignorant.” Hence, the “subjective” standard rewards ignorance (i.e. non-specific knowledge) with a release from responsibility. The other post I wrote focuses on the inherent subjectivity of fair use, but the nature of this expressed concern—that ignorance is a shield from liability—is an astonishing worry coming from the industry whose favorite defense can be summed up as the “we can’t know” defense.

As cited in this post, the Internet industry has relied frequently on splitting hairs between what is often called “red flag” (non-specific) knowledge and “actual” (specific)* knowledge, this distinction being one of the ISP’s preferred defenses against any obligation to mitigate harm being done via its platforms. In case after case, Internet providers—most often Google—will argue that absent “specific knowledge” of wrongdoing, they are not responsible for delisting, blocking, demoting, or removing links to sites or files that are causing some type of harm. In particular, these providers have consistently argued that they bear “no duty to monitor” activity on their sites in order to remain shielded by safe harbor provisions in the DMCA. So, this sounds an awful lot like the ISP has the same vested interest in its own ignorance that is supposedly detrimental according to the excerpt from the amicus brief cited above.

In fact, the “concern” raised in this amicus brief is truly insidious when you dig below the surface. In a nutshell, big, wealthy corporations—whose business by the way is data management—are arguing that an individual rights holder should be expected to have “specific knowledge” about a purely subjective doctrine (fair use). Meanwhile, the big, wealthy corporations (and did I mention their business is data management?) can never be expected to have “specific knowledge” about activity on their platforms that is—quite often—objectively knowable.  To up the ante, the big, wealthy data management corporations claim to be voicing their “concern” for the sake of public interest, and at least some portion of the public is inclined to believe them.  Finally, just for laughs, the claim of ignorance employed by these corporations is typically not argued as a defense against civil or criminal liability, but more often simply to avoid playing a leadership role in helping to make the Internet a place that supports fair trade, honest dealing, and safe commerce that protects both consumers and producers in a healthy marketplace.

Meanwhile, the most likely abusers of DMCA takedown—certainly the ones we should be most concerned about—are public figures, corporate entities, or government agencies that might seek to misuse copyright in order to avoid criticism. But these same entities are also more likely to have “specific knowledge” of what they’re doing than, say, an indie musician who can be forgivably unclear about the fair use doctrine as it might apply in a creator-to-creator use. Plus, the musician’s potentially wrongful takedown is not going to chill free speech, particularly when there are already non-litigious remedies for such errors contained within DMCA procedure. At the same time, the service provider (e.g. YouTube) is shielded from any liability to both the user and the rights holder because, as they have argued repeatedly, they “can’t know everything their users are doing on their platforms.”

But in an interesting turn of events this week, Google seems to have shed at least one of its seven veils of ignorance and demonstrated that it has rather specific knowledge of the doings of some of the video creators on its YouTube platform. Its interest in these creators is so specific, in fact, that the search giant has offered to pay the legal fees, if necessary, for a handful of these video creators, who may face legal proceedings stemming from DMCA takedown disputes. Cecilia Kang for The New York Times describes Google’s motivation thus:

“The company said it wanted to protect free speech and educate users on fair use. But its announcement is also aimed at strengthening loyalty with video creators. YouTube faces new competition from Facebook, Twitter and traditional media companies that are trying to get consumers to upload more content onto their platforms.”

I’m sure Kang is right about the competitive strategy, but we’ll set that aside and focus on Google’s new “we got your back” PR move, which may look bold to some on the surface but is actually rather craven and slick if you consider the details. For instance, the lead example of a video creator cited in Kang’s article whom Google has chosen to support represents something of a false flag for the “cause.” Constantine Guiliotis is the creator of UFO Theater on YouTube, a series in which he rather amusingly debunks amateur hoax videos (from other YouTube channels) claimed as evidence of alien spacecraft. His use of these videos would constitute a fair use, though Kang’s article states that Guiliotis has received only three DMCA takedown requests to date, which is penny-ante poker in the world of DMCA. The article does not state whether or not Guiliotis filed counter-notices to restore the use of those three videos, but he certainly had that option, and that would likely be the end of any conflict.

I say that would be the end of it because the notice and counter-notice procedures in the DMCA are meant to serve parties like Guiliotis and the video makers whose works he uses; these people are not going to engage in hugely expensive federal lawsuits over such relatively minor disputes. Hence, Google’s taking a stand by telling Guiliotis, “We’ve got you covered if one of those amateur UFO hoaxers decides to sue you,” is an absurd and empty gesture. I don’t know all of the YouTube creators Google has decided to “back” in this initiative, but the announcement smells like a PR move designed to make Google look like a champion of free speech while throwing its weight around to intimidate smaller rights holders who can barely defend themselves in the online market in the first place. After all, what if an individual or small independent rights holder has a legitimate claim of infringement against a YouTube video creator? Is this rights holder now up against the financial might of Google? And if so, what does this say about Google’s supposed neutrality stemming from its claim of ignorance shielding it from liability to both parties in such disputes?

Between the hypocrisy in the amicus brief cited above and the way in which Google is leveraging its corporate muscle in this recent announcement, it looks an awful lot like their idea of “educating people about fair use” is more akin to indoctrinating the public toward a concept of fair use reshaped as the Internet industry sees fit. And it could work. Fair use is not a legal defense most people need to concern themselves with, and misconceptions about its application abound. So, Google and its cronies could succeed in sowing a general perception that if a work is used on YouTube, etc., it should be presumed fair; and just in case the individual rights holder has any doubts, crushing litigation will happily clear it up for him. Sounds progressive, no?

*See comment from Anonymous regarding technical distinction between “red flag” and “actual” knowledge.


Comics Under Copyright

Recently, I’ve spent time on Netflix catching up on nearly all the TV series based on various Marvel and DC Comics properties.  By and large, in their own context, these shows are very good; and in some regards, they’re exceptional. Certainly, the overall quality of these programs is consistent with the general renaissance of the small screen that has taken place over the last decade or so, but I was paying attention to these derivative comic-book series in particular because the characters that belong to Marvel and DC are often cited as the type of intellectual property that should have long since devolved to the public domain.  The feeling among some of those who advance this view is that these classic characters and story elements are so ingrained in our cultural consciousness that they have attained a status akin to oral tradition or mythology and, as such, now belong to the commons.

In part, I suspect this sense of collective ownership is inherent to being an ardent fan of anything that has attained institutional status. Much like the armchair quarterback “knows best” which play to call on Monday night, the serious comics fan can feel rather proprietary about narrative choices made with “his” characters. This is interesting in itself because it’s a sentiment that doesn’t really seek collective, or public, ownership so much as it implies an individual, I-know-better relationship to the works. And these strongly held feelings may serve to aggravate the complaint that corporate conglomerates, most especially, should not be the owners of these properties. Interestingly enough, though, while anyone may quarrel with a narrative choice made by any author(s), the overall craftsmanship of the TV series in question may not exist absent major media corporation ownership of these comic-book properties.

Watching several of these shows all at once inspires a variation on a thematic question I’ve asked many times about the idealism of the public domain, which is this: These properties should be in the public domain so that “the public” may do what with them? If, ten years ago, Hulk, Captain America, Batman, Flash, et al. had entered the public domain, what would the public get, either culturally or economically, out of the transaction? Because one thing it would not get is high-production-value TV shows like Gotham or Agents of Shield. These programs are simply too expensive to be produced by any entity other than a fairly large organization that would never invest absent the underlying intellectual property rights. And as I’ve pointed out in this post, even if one hates these shows, the larger economic benefits of major TV productions should not be dismissed.

Certainly, if Marvel and DC properties were in the public domain, then individual authors or comics artists could publish their own variations, and indie filmmakers could perhaps make works on a scale much smaller than even a single episode of a show like Gotham. But it seems to me we could also dilute both the commercial and cultural value of these properties rather quickly.  While these characters have been “rebooted” many times, I believe that part of what makes the reboot work (i.e. the ability to recycle characters without depleting their value) is centralized creative control over the universe of interrelated characters and plot lines for a period of time.

In this regard, fans of the Avengers films can follow the exploits of the Agents of Shield, which is set in a period shortly after the events depicted in the most recent Avengers feature film, Age of Ultron. Presumably, if any of the main characters from Agents appear in the next Avengers film, the stories will align, and this is only possible with centralized control over the Marvel universe. Plus, it seems to me that this is entirely consistent with the tradition of comics, whereby the fan can follow a variety of intersecting stories for a period of time to some conclusion, leaving the stage bare for the next reboot.

One can argue that this doesn’t matter, that it would be better to have dozens of authors “free” to digitally publish a wider range of narratives derived from these properties, but I believe that’s a very hard case to make based on market realities and the way we relate (or not) to these types of characters and stories. For instance, how many consumers of the Marvel films and TV shows are serious “fans” rather than semi-ambivalent viewers like me? I’ll go to these movies or watch these shows and enjoy them for what they are, but I’m not so devoted to, say, the Hulk that I’m going to seek out every variation on this character that I can find. And even among such enthusiasts, I doubt many would actually want 20 different storylines running concurrently; it is more likely that these Hulk fans would be drawn to one or two of their favorite derivative works. After all, having multiple, simultaneous storylines sort of betrays the serial nature that drew readers to comics in the first place. Conversely, I do recognize that much of the criticism regarding corporate ownership of these comics properties comes from fan fiction writers and supporters of fan fiction; and this market is not to be dismissed, but neither should it be presumed to replace or supersede the mechanisms that produce highly marketable, job-supporting enterprises like TV shows. Meanwhile, I will not be surprised if many of the conflicts perceived by fanfic creators can be resolved in creative ways that balance all interests.

So, in such a free-for-all market, either Hulk fans diffuse and head off in various directions, which, interestingly enough, would betray the original argument that the character is part of a “common modern mythology,” or Hulk fans coalesce around a new derivative they like best, thus giving that derivative work a singular market value. At this point, the creators of said derivative start talking about movies and TV shows and other ways to commercialize this derivative, which brings us right back to the need for intellectual property as the foundation for these substantial investments.

As I’ve said, had the Marvel and DC characters entered the public domain ten years ago, the TV series we have now would not exist; and this has both economic and cultural implications. The show Gotham, created by Bruno Heller, tells a narrative of the city beginning in the immediate aftermath of the murder of Bruce Wayne’s parents. Hence, the show mines imaginative possibilities that are ideally suited to the tradition of the “spin-off” by asking the question, What happens during all those years while Bruce Wayne is still a kid? It’s true that, if Batman were in the public domain or copyright didn’t protect derivative works, many writers out there, including potentially very talented ones, might ask the same question and write their own versions. But why is this culturally or economically desirable?

As a consumer, I have time in my life for, at most, one show like Gotham at a time—one version of The Penguin’s backstory, one version of what 12-year-old Bruce Wayne is up to, one version of Jim Gordon’s crusade to clean up his city. And I suspect the majority of consumers feel about the same and have no more time than I to indulge in more than the version of the moment. Of course, if Gotham were not engaging on several levels, then I would have time for zero shows exploring these narratives; and in this regard, the production design of this show alone represents the kind of work that can only begin with a foundation of serious investment in the underlying property.

Unlike shows like The Flash or Arrow, which are set in fictitious but contemporary cities, production design for Gotham poses a whole set of challenges regarding time and place that I think have been very smartly addressed by designer Doug Kraner and director Danny Cannon. The premise of the show is of course a prequel, exploring a narrative before the history we already know; but what this particular past looks and feels like is conveyed through a variety of carefully chosen and maintained design, prop, and textural details. The City of Gotham is meant to evoke New York but not be New York. Hence, the overall look is achieved with a cross-section of non-concurrent American design and prop elements. We see interiors and furnishings from the 1930s to the contemporary; vehicles from the late 1970s to early ’80s; VCRs and tube televisions from the mid-1980s; and cellphones from the pre-smartphone era that are definitely not contemporaries of the vehicles. These choices help to set the City of Gotham in a past that is somewhat familiar but also distinct from any particular past as we know it. As Kraner explains in this article for The Guardian, time as conveyed through design becomes a strong narrative element throughout the series. He describes the police station bullpen as “…a dark, chaotic, corrupt old world that is very hard for him [Jim Gordon] to fight. It’s established. It’s been there forever. How is this one man going to change it all?”

New York City exteriors are carefully composed and digitally altered to sublimate one of the most recognizable places in the world into a city that isn’t quite recognizable, even to many New Yorkers. Initially designing and then maintaining this illusion of the City of Gotham, making the city itself a character, is just one component of this show that represents more work than anyone would ever fund absent the rights to the underlying material. All in, Gotham is a damn good show that fulfills both the creative and economic rationales for retaining derivative works rights in copyright. And given the demand that today’s small-screen production values be on par with feature films, the investment in this particular program is made that much more likely by Warner Brothers’ stewardship of the DC Comics universe.

ADDENDUM:  Thanks to comments from a regular reader, it seems I should clarify that I do not mean to suggest that great works are not made from sources in the public domain.  Clearly, this is not the case.  It is the nature of comics characters in particular that inspired this essay. As stated in a couple of places, it seems there is an advantage to having one narrative at a time as exemplified in a spin-off work like Agents of Shield, which fills in gaps between one feature film and another.  These rationales certainly do not apply to all works.


Don’t Blame Disposability on Copyright – Part II

In Part I of this essay, I responded to a post written by Parker Higgins for Techdirt, criticizing him for trying to pack a big, unexamined conclusion into a small article. Asserting, as Techdirtians are wont to do, that copyright is the omnipresent saboteur in our otherwise grand, digital machine, Higgins blames copyright’s complexity and length of terms for causing important works of the 20th century to “disappear,” thus harming historical journalism and other endeavors. He cites a number of what I believe to be unrelated and ill-considered examples, several of which I addressed in Part I. But I left out the most compelling of Higgins’s citations—the work of Paul J. Heald, law professor at the University of Illinois—because it demands a more thorough response of its own.

Technically, Higgins cites Rebecca J. Rosen, writing for The Atlantic about the professor’s statistical research. Heald looks at the availability of published books via Amazon and concludes unequivocally that “copyright makes books disappear.” To support this claim, he cites his research data, which reveals peaks in the availability of books in the public domain and in the availability of very recent books, with a sharp decline in the availability of books from roughly the 1930s to the late 1990s. And while it is true that this period roughly corresponds to works still under copyright (1923-present), it’s not entirely clear that Heald’s research reveals either a relevant lack of availability or that copyright is the catalyst that explains his findings. I have read the part of Heald’s paper that deals with books (he also addresses music) and admit that my reading may err, but I think we should be careful, for instance, about how we interpret summaries of Heald’s work like this one by Rebecca Rosen:

Heald says that the WorldCat research showed, for example, that there were eight times as many books published in the 1980s as in the 1880s, but there are roughly as many titles available on Amazon for the two decades.

To an observer who chooses to look solely at the quantity of available works as a percentage of the total works produced in a given period—and who might have a nascent beef with copyright—this statement may seem rather compelling.  But how many factors are being left out of the equation?  Maybe quite a few.  Heald’s data set comprises a little over 2,000 works sampled at random, which in itself seems like a flaw because a random sampling of ISBN numbers querying the Amazon database should naturally produce a higher percentage of public domain books simply because there are vastly more editions of books not under copyright. Heald does account for multiple editions in winnowing his initial sample of 7,000 titles down to the 2,266 books studied, but he does not seem to account for the probability of skewing toward public domain works by percentage in the initial, random data acquisition.
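
To illustrate the sampling concern, here is a minimal sketch in Python using entirely invented numbers; the title counts and editions-per-title figures are assumptions for the sake of the example, not anything drawn from Heald’s paper. It simply shows that drawing ISBNs (i.e. editions) uniformly at random over-represents titles that exist in many editions, which public domain classics typically do.

```python
import random

# Invented figures for illustration only (not Heald's data): public domain
# titles are assumed to exist in far more editions than in-copyright titles,
# so a uniform draw over editions (ISBNs) over-represents public domain works
# relative to their share of distinct titles.

PD_TITLES, PD_EDITIONS_PER_TITLE = 10_000, 6      # assumption
C_TITLES, C_EDITIONS_PER_TITLE = 100_000, 1.2     # assumption

pd_editions = int(PD_TITLES * PD_EDITIONS_PER_TITLE)  # 60,000 editions
c_editions = int(C_TITLES * C_EDITIONS_PER_TITLE)      # 120,000 editions

SAMPLE_SIZE = 2_266  # size of Heald's winnowed sample, used here only as a convenient number
pool = ["public domain"] * pd_editions + ["in copyright"] * c_editions
sample = random.sample(pool, SAMPLE_SIZE)

pd_share_of_titles = PD_TITLES / (PD_TITLES + C_TITLES)
pd_share_of_sample = sample.count("public domain") / SAMPLE_SIZE

print(f"Public domain share of distinct titles:    {pd_share_of_titles:.1%}")  # ~9%
print(f"Public domain share of a random ISBN draw: {pd_share_of_sample:.1%}")  # ~33%
```

On assumptions like these, public domain works make up roughly a third of the sampled editions despite being less than a tenth of the distinct titles, which is the kind of skew the initial random acquisition would need to correct for.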

Additionally, although the researchers seem to have done their best to randomly sample comparable commodities (e.g. comparing novels to novels), Heald’s findings do not appear to account for more nuanced factors, like the certainty that a higher volume of short-lifespan works was produced in the 1980s than in the 1880s. He acknowledges that total volume would naturally be higher in the 20th century than in the 19th, citing changes in printing technology, but he does not appear to look at the nature of the works themselves and then to ask how much of the sloughed-off volume represents natural disposability (i.e. works for which there is no sustainable market demand). Heald does address generalized demand in his paper, but in a way that also appears flawed, about which more in a moment.

One detail that leapt out for me in Heald’s data is a marked drop in the relative availability (by percentage) of new books originally published in the 1980s compared to the rest of the otherwise fairly flat mid-20th century. Presumably, copyright is a constant from 1923 to the present, so the dip in the 1980s compared to the other decades of the century is likely explained by other factors—factors that may apply throughout the results across the entire range of the study. Hence we should be very wary of a pull quote like this one used at the beginning of Rosen’s article:

A book published during the presidency of Chester A. Arthur has a greater chance of being in print today than one published during the time of Reagan.

Again, that sounds intriguing but may not say quite what we think it does. Speaking purely from anecdotal knowledge of the 1980s, the noticeable decline in relative book availability that Heald’s data reveals coincides with a decade marked by “conspicuous consumption,” a time when publishers would have been very likely to produce a high volume of relatively disposable works in both fiction and non-fiction. For instance, did the 1980s see a sharp increase in one of the most disposable genres—that guilty-pleasure among women readers known as the romance novel? Certainly, according to Wikipedia, 1980 happens to be the year that Harlequin Romance launched its North American product line. Romance novels, as well as books like trade-paperback mysteries, self-help, and diet books, tend to have very short lifespans book-by-book; and if publishers did increase their output of these types of products in the 1980s, it could explain part of Heald’s data and have nothing whatsoever to do with copyright terms.

So, the statistical expression “greater chance” can be very misleading. If, for example, 60% of the works from 1881 are available compared to 20% of the works from 1981, then the pull quote cited by Rosen is factual but meaningless, particularly as it may or may not inform us about the role of copyright. The conclusion can be accurate but still not tell us whether or not we have a greater number of works available from 1981 than from 1881, to say nothing of the theoretical market value of the unavailable works from the more recent year.
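
To make the distinction concrete, here is a back-of-the-envelope sketch using hypothetical figures: the 60% and 20% availability rates come from the example above, the eightfold difference in output echoes the WorldCat comparison quoted earlier, and the raw title counts are invented purely for illustration.

```python
# Hypothetical numbers only: availability rates from the example above, an
# eightfold difference in output per the WorldCat comparison, and an invented
# base of 1,000 titles for 1881. None of this is Heald's data.

titles_1881, pct_available_1881 = 1_000, 0.60
titles_1981, pct_available_1981 = 8 * 1_000, 0.20

available_1881 = titles_1881 * pct_available_1881   # 600 titles still in print
available_1981 = titles_1981 * pct_available_1981   # 1,600 titles still in print

print(f"Chance a given 1881 title is in print: {pct_available_1881:.0%}")
print(f"Chance a given 1981 title is in print: {pct_available_1981:.0%}")
print(f"1881 titles actually available: {available_1881:,.0f}")
print(f"1981 titles actually available: {available_1981:,.0f}")
```

On these invented numbers, the “greater chance” claim holds, yet well over twice as many 1981 titles remain in print, which is exactly the information the percentage alone cannot convey.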

Even where Heald limited his data set to works of fiction, his paper does not indicate what kind of fiction is being sampled. And this is a general caveat I would propose when interpreting the entire study: without correlating Heald’s data with some relevant market-research information (i.e. what people are reading and why), we are not learning the kind of information required to draw sound conclusions about the role of copyright.

Consider, for instance, that the most ardent readers among Baby Boomers and Gen-Xers have read many of the books of the 20th century and may even still own copies of their favorites. Hence these most voracious readers are apt to seek out the most contemporary literature and perhaps older works they never encountered, but they’ve already done many of the books of the mid-20th century. So, what are millennials reading today, either by choice or by requirement in schools and universities? Because there is no question that this generation, for better or worse, has a very different relationship to culture, literature, and media in general than their parents and grandparents. I know my own kids’ school experience has (much to my chagrin) been lacking in required reading of books we would call the 20th century canon. Does this hold true in public schools across the country? If so, how can this, or any other ethnographic factor, not be considered in concert with research like Heald’s?

Meanwhile, some rudimentary searching on my own reveals that works from both best-seller and best-books lists of the 1980s are certainly available via Amazon. Though I admittedly did not search every title, it seems that we can find Amy Tan, Toni Morrison, Umberto Eco, and even Danielle Steel, if we are so inclined. Hence, it appears that Heald’s research may tell us nothing about the rate of availability, decade by decade, of what we might generally agree to call “significant writings.” If that’s a fair assessment, it does not entirely dismiss all of Heald’s findings, but it does suggest that reporters and pundits should be leery of interpreting his data to support the “disappearing 20th century” claim.

Still, if it is true that the availability of “significant writings” from 1980 is actually not that different from the availability of “significant writings” from 1880, this may support Heald’s stated objective in his paper, which apparently did not set out to prove that “copyright makes works disappear.” Instead, Heald’s stated proposal is to refute the assertion that present copyright terms are necessary to keep works in the market. This may sound like the same hypothesis, but it isn’t. Setting out to prove that a copyright term of 95 years (for publishers) is unnecessary to keep works meaningfully available is not synonymous with setting out to prove that this length of term “makes works disappear.” It seems Heald began with the former thesis and then shifted to the latter based on what he perceived as “startling” evidence in his data.

One could argue that mechanisms for publishing works in the public domain are as effective as mechanisms for publishing works under copyright and that the public is at least equally served by either regime. As long as desired works are available, then they’re available. But the argument Heald is making is that the current terms underserve the public because publishers hold copyrights on works still in demand yet refuse to publish those works. If this is true, then Heald is presumably correct that the terms of copyright on these unavailable works provide no benefit to anyone.

But in order to make the assertion that publishers are choosing to sequester a relevant volume of books in demand, he needs to prove at least two things: 1) demand for the actual works in question; and 2) that the copyrights on these works are still held by the publishers and not by the original authors. And if those facts can be demonstrated, one must then make an argument for reducing the length of terms without undermining copyright’s incentive to create and publish the most “significant writings” in the first place. To put that another way, we’d want to ensure we do not fail to incentivize the next Joyce Carol Oates just so that some e-publisher can make a few dollars off books that have earned a natural disposability in the market. Perhaps that is a term length shorter than 95 years, but it seems to me that Heald’s research provides no guidance as to what that revision ought to be.

The Demand for Missing Works

Heald’s research makes no effort to answer the second question I posed above, which is to ascertain the actual copyright status of the books presently unavailable. This is particularly relevant because with many of the aforementioned short-lifespan books (e.g. trade paperbacks), the exclusive copyrights revert from publishers to authors rather quickly. And since Heald’s data makes no mention of the types of books selected at random and does not account for the current copyright status of any of these books, it seems unreasonable to draw his conclusions about the motivation of publishers to keep works unavailable based solely on his findings.

On the other hand, Heald does make an effort to ascertain whether or not there is a demand for the unavailable books, and he states clearly that if this demand does not exist, then concerns about unavailability are irrelevant. But, again, in attempting to determine demand for these works, it looks as though Heald is using information that does not point to a demand for missing works, since there are no missing works in the data set and, again, he forgoes market research altogether.

Heald compares the number of used books available by decade to the number of new books available on Amazon by decade. The assumption is that the inventory of a used-book dealer is an indicator of consumer demand, which is reasonable, but the data reflected only demonstrates that, for instance, the availability of used books from the 1970s is greater than the availability of new books from the 1970s. Of course, neither line graph tells us anything about sales of either used or new books from the 1970s (to say nothing of which books we’re talking about), but Heald asserts that the gap between the available used books and available new books represents an unmet demand for titles that could be, but are not, sold as new books.

So, without seeking more detailed market information, it seems very hard to leap to the conclusion that an unmet demand for newly published mid-20th century books exists, let alone that copyright is the cause of the problem. After all, his conclusion suggests that a publisher might see profitable demand for one of its titles yet decide not to republish that book for inexplicably self-defeating reasons. I’m not the most savvy businessman in history, but if I had to decide whether or not to spend money to republish some of my titles from the 1970s, this data would not be sufficient to make that call, especially without knowing which 1970s titles from that used inventory are actually selling. Meanwhile, once again, I find titles from both best-seller and best-books lists of the period available via Amazon.

But, it is at this point that Heald seems to depart from the question of general availability, relative either to demand or to production volume by decade, and instead shifts his focus to e-book availability as a measure unto itself.  He writes:

“In 2014, 94% of 165 PD best sellers (1913-1922) were available as ebooks compared to only 27% of 167 best sellers (1923-1932) were made available as ebooks by publishers.”  

Again, this seems remarkable at first, but we should notice, as I say, that Heald has shifted focus from general availability to availability via a specific platform. After all, lack of availability to date in eBook format is not equivalent to lack of availability period. And Heald proves this point himself in citing three particular titles thus:

“In the absence of copyright, surely one could find a publisher providing eBook versions of popular classics like The Gulag Archipelago, Gentlemen Prefer Blondes, and The Magnificent Obsession.”

Surely one could find publishers pleased as punch to freely create eBooks from these works, and for good reason: all of these books have deservedly retained their market value. And this is precisely why consumers can still buy print copies via Amazon or in a bookstore, find used copies via multiple sources, borrow them from public libraries, and buy all three as audio books. The fact that the publishers have yet to make these titles available in eBook format—and there are likely a variety of practical reasons for this—is no excuse for describing these works as “unavailable,” let alone for blaming copyright for that false claim and then allowing the assertion to be exaggerated by pundits and reporters as the “disappearing 20th century.”

Additionally, if one takes a step back, Heald would appear to be making a case for an opportunistic e-publisher (who never contributed anything to the creation of the work) to now reap financial reward from a book by Alexander Solzhenitsyn, of all people, and disenfranchise his sons from any controlling interest in a work published about the time they were born.*  And we would do this for a book that is quite clearly available to the market via multiple sources.

While it is certainly true that simply having all works enter the public domain much sooner would lead to a spike in the general availability of mid-20th century books, I think it would take a far more nuanced examination to determine whether that untapped “abundance” would justify diminishing the copyright terms for authors of works whose maintained availability may have a great deal to do with their widely accepted value to society. At the same time, niche-audience works can be restored to public availability by means other than copyright term revision.

One of my dear friends is the son of the author Michael Avallone, who wrote the Ed Noon detective series between 1953 and 1988. This is the kind of book series that lives in its time and place and then typically goes out of print. But as the co-owner (with his sister) and steward of his father’s copyrights, David Avallone has been able to resurrect Ed Noon, republishing the works as eBooks and growing a contemporary fan base for the character using social media. This is more a personal project for David than a business venture. In particular, after Michael passed away in 1999, the ability to bring back Ed Noon thanks to digital technology has been a very meaningful way for David to give his father’s voice new life, not only for older fans who remember the series, but for a new generation of readers who had never heard of Ed Noon.

In theory, if Avallone’s copyrights had expired, it’s true that Amazon or some other on-demand publisher would be free to make these books available—if they could even lay hands on the source material—but the whole venture would be of lesser value, I think, than it is under the management of a loving heir who tweets out Noon-isms twice a day to entice readers. Conversely, if none of this were possible because the copyrights were still in the hands of a publisher that simply chose to let the works lie dormant, this would be a shame for both David and for prospective readers, but it would still not justify claims that a whole century’s worth of literature remains inaccessible. If anything, perhaps it suggests a kind of “use it or lose it” reform to corporate-owned copyrights, but no doubt real copyright authorities would have various opinions about that.

I don’t mean to suggest that Professor Heald’s work is to be dismissed outright, only that the data seems incomplete relative to the conclusions being drawn. There are certainly more qualified statisticians, copyright scholars, and publishing professionals than I who may criticize or support his findings and determine to what extent they tell us anything about the role of copyright as a barrier to access. But speaking as a generalist to the general reader, I’ll maintain that we should not simply buy the “stuff is disappearing because of copyright” story presented so casually in posts like the one by Parker Higgins. Stuff is appearing, disappearing, and being resurrected at an extraordinary rate thanks entirely to digital technology. The extent to which copyright and its limitations foster or hamper the most beneficial results of all this churning media is not a simple question to answer.


* I do not know the current copyright status of The Gulag Archipelago; I mention this as an example in principle.


ITC Ruling Shows Need for Congressional Reform

In August, I wrote a post criticizing the editorial board of The New York Times for espousing Silicon Valley talking points rather than considering the broader aspects of a case concerning the International Trade Commission (ITC). At issue was the ITC’s claim that it had the authority to enjoin the importation of digital data being used by a company called ClearCorrect to infringe the intellectual property of Align Technology. The ITC does have the authority to stop the importation of “articles that infringe,” and it argued that “articles” may include digital files; but this week the U.S. Court of Appeals for the Federal Circuit rejected the ITC’s claim of authority in this case. Citing more than ample precedent that the statute does not allow for an interpretation of “articles” to mean anything other than tangible items, part of the decision reads:

Here we conclude that the literal text by itself, when viewed in context and with an eye towards the statutory scheme, is clear and thus answers the question at hand. “Articles” is defined as “material things,” and thus does not extend to electronic transmission of digital data.

Readers should note, however, that the decision is narrowly focused on the definition of the word “articles” and the authority of the ITC based on that definition. The court is entirely silent regarding any of the broad “free flow of information” criticisms voiced by those fearful of granting the ITC this authority in principle. In fact, the court concludes thus:

Under these circumstances we think it is best to leave to Congress the task of expanding the statute if we are wrong in our interpretation. Congress is in a far better position to draw the lines that must be drawn if the product of intellectual processes rather than manufacturing processes are to be included within the statute.

In short, Congress may view the statute, and the corresponding authority it grants to the ITC, as antiquated in the global digital market, and it may consider expanding the statute to anticipate the potential harm of importing intangible “articles” by electronic means. Indeed, as cited in my first post, the Center for the Protection of Intellectual Property pointed out that this ITC remedy was expressly recommended by the Internet industry as an “alternative” to SOPA. If granting the ITC this authority would not have “stopped the free flow of information” in 2011, it is unclear why it would do so in 2015. Congress should consider broadening the statute to grant the ITC this authority for the protection of American companies practicing fair trade.
