The Future Was Then: AI Moving Us Backwards on Carbon Emissions

Coal-fired power plant: the cost of data centers.

As the Super Bowl approached and passed, it seemed that one faction of Americans was accusing Taylor Swift of practicing witchcraft on the NFL while another was slagging her for the carbon output of her private jet—reportedly about 8,300 tonnes of CO2e in 2022. And although it is fair to expect owners of private aircraft to fly responsibly, I must ask this: What is the environmental value of not shitposting about Taylor Swift? Or, for that matter, about any number of topics?

The carbon cost of a single tweet is roughly 0.026 g; the carbon cost of X (née Twitter) is estimated at 8,200 tonnes per year; and the overall carbon cost of social media is estimated at 262 million tonnes of CO2e per year. So, if we use this social media carbon calculator, it tells us that 1 million people spending just 2 minutes a day on each of the 10 major social sites costs just over 8,300 tonnes of CO2e per year—roughly the same amount T Swift reportedly generated with her airplane in 2022.
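
As a sanity check on the calculator's output, here is a minimal back-of-envelope sketch. The per-minute emission factor (about 1.14 g CO2e per user-minute, averaged across platforms) is an assumption back-derived from the figures above, not a published constant, and I am reading the scenario as two minutes on each of the 10 platforms.

```python
# Back-of-envelope check of the social media carbon figure.
# The emission factor below is a hypothetical average implied by
# the calculator's output, not an official per-platform number.
G_PER_USER_MINUTE = 1.14  # assumed g CO2e per user-minute

users = 1_000_000
minutes_per_site_per_day = 2
sites = 10
days = 365

grams_per_year = users * minutes_per_site_per_day * sites * days * G_PER_USER_MINUTE
tonnes_per_year = grams_per_year / 1_000_000  # 1 tonne = 1,000,000 g
print(round(tonnes_per_year))  # 8322 — just over the ~8,300 tonnes cited
```

In other words, a million people's idle scrolling plausibly matches a year of private-jet travel.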


I recognize that this is comparing the carbon footprint of one individual to a million individuals, but that one individual entertains millions and generates economic activity. By contrast, the social posts of a million people at any given moment are, in every sense, just making pollution. Clearly, it costs metric tons of carbon to produce metric tons of useless noise. And that preamble brings us to the topic of the projected increase in electricity demand for data centers to support advancements in artificial intelligence (AI). As Bloomberg reported in late January:

Electricity consumption at US data centers alone is poised to triple from 2022 levels, to as much as 390 terawatt hours by the end of the decade, according to Boston Consulting Group. That’s equal to about 7.5% of the nation’s projected electricity demand. 
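
Those two numbers imply a projected national total, which is easy to back out (a sketch derived from the quoted figures, not a separate BCG estimate):

```python
# Back out the projected national demand implied by the BCG numbers.
data_center_twh = 390              # projected US data-center consumption (TWh)
share = 0.075                      # stated 7.5% of projected national demand
implied_total_twh = data_center_twh / share
print(round(implied_total_twh))    # 5200 TWh projected national demand
```

That implied total of roughly 5,200 TWh is well above current US consumption, which underscores how much of the projected growth data centers alone represent.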

In past posts about generative AI, I have opined that we do not need machines to make creative works—because we don’t—and that AI should be tasked with solving problems like curing disease or mitigating the climate crisis. On the second point, however, it seems that if an AI were asked the climate question, its only rational answer would be, “Shut me down.” If nothing else, AI could be an environmental catastrophe in the making.

“In the Kansas City area, a data center along with a factory for electric-vehicle batteries that are under construction will need so much energy the local provider put off plans to close a coal-fired power plant,” the Bloomberg article states. Because that quote cites both electric vehicles (EVs) and the data center, one must acknowledge that the environmental analysis of EVs entails a projection of carbon saved against carbon spent. But because a data center is pure carbon expenditure, that cost can only be measured against the value of the activity the center supports.

No question that data centers are infrastructure. There is no enterprise—private or public—that does not rely on networked computing, and economic activity almost always presents an environmental challenge, whether one is building a railroad or an eCommerce platform. But considering even the current energy demand, let alone the projected increase, AI pulls the issue into focus because so many of its applications are already either useless or toxic.

Useless, as stated, is the AI that generates “creative” work in lieu of the human creator, while toxic would be something like more advanced deepfakes exacerbating the disinformation crisis. Regarding the former, this flips the economic equation—i.e., carbon cost yielding lost jobs, which is arguably the opposite of economic activity. Regarding the latter, the use of AI to expand and deepen disinformation campaigns represents carbon cost in exchange for “better tools” that have already been used to weaken democracy worldwide.

In 2013, I wrote a post called Show Me the Innovation—one of many responses to the generalized argument that legal frameworks designed to protect intellectual property, privacy, information integrity, and even personal safety all stand in the way of “innovation.” The point then, as now, is that not everything produced by Big Tech is “innovative,” if we insist that word mean something. If “innovation” should improve lives and foster prosperity, isn’t it curious that social media’s carbon cost helps support anti-science agendas like climate change denial?

In a recent post about the environmental cost of data centers, Chris Castle cites Science Daily, noting that “generative AI like ChatGPT could cost 564 megawatt-hours (MWh) of electricity a day to run.” That’s more than some small countries. When coupled with the fact that data center demand is halting planned shutdowns of coal-fired plants, then it starts to look a lot like AI is helping to “innovate” the U.S. backwards, reversing the gains made over the past twenty years in carbon emissions.
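
For scale, it helps to annualize that daily figure (a simple unit conversion of the estimate quoted above, nothing more):

```python
# Convert the quoted daily estimate for generative AI into annual terms.
daily_mwh = 564                    # Science Daily estimate cited above
annual_mwh = daily_mwh * 365       # MWh per year
annual_gwh = annual_mwh / 1_000    # 1 GWh = 1,000 MWh
print(round(annual_gwh, 1))        # 205.9 GWh per year
```

Roughly a fifth of a terawatt-hour per year—more than the annual electricity consumption of several small island nations.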

Traditionally, it is possible to do a cost/benefit analysis. We burn x amount of coal to power y number of homes, or we need x amount of oil to run y amount of ground transportation. And even in the earliest days of electrification or automobiles, the benefits were self-evident. But with rapid advancements in AI, the cost is rising without clear evidence of benefit—at least not at the scale the electricity demand implies. This is because, like so many “innovations” of Big Tech, AI might be used to accomplish something extraordinary like improving medical diagnoses, but in the meantime, it will be used to make what is already bad about digital life suck faster.


Photo by: dropthepress

Truth Dies in Broad Daylight

Democracy dies in darkness, according to the motto of the Washington Post, and this is, of course, just one of many phrases reciting the axiomatic theme that credible and responsibly reported information is the lifeblood of a democratic society like the United States. If true, then why has the “information age” brought democracy itself to the brink of destruction? There are many answers, including from those who would say that the question itself is alarmist—that, for instance, the “democracy in peril” narrative is a talking point of the political left with no foundation in evidence. But ain’t that the rub? Have we not crossed the event horizon of an epistemic crisis?

It bears repeating that a healthy democracy not only tolerates, but requires, a debate of competing ideas; but thanks largely to the major internet platforms, society has devolved to a shouting match of competing realities. No technological singularity required. We have already carved out a point in our little corner of spacetime that is dense enough to prevent truth from escaping. It may be self-evident that truth dies passively in silence, but truth can also be trampled to death by noise, and how could “democratizing information” ever have produced anything but a cacophony?

In a recent editorial for the Los Angeles Times, Anita Chabria asks Why is it OK for rich guys to steal my work? She writes…

Retail theft is causing a civic meltdown and inspiring a ballot measure to incarcerate repeat toothpaste thieves.

But billionaire tech bros dismantling democracy for profit, stealing thousands of times a minute by selling advertising against something they don’t own? That barely gets a shrug, even as more media professionals are laid off, more publications close, and reliable information becomes so scarce and hard to spot that truth itself has become political.

Some might argue that news organizations have lost so much credibility that it hardly matters, and I cannot deny that I have read my share of careless articles under the imprimatur of respected brands, including the WaPo. But notwithstanding cultural and social changes that ebb and flow through any industry, the bottom line is that good investigative journalism is expensive, highly skilled, and time consuming, and the internet industry has only served to make those obstacles larger, if not insurmountable.

First, social media fostered, and still perpetuates, an illusion that “citizen journalists” and raving pundits consistently uncover hidden truths which are obfuscated by the mainstream media. Second, social media demands feeding the beast 24/7, which forces the traditional news organization to prioritize speed over quality, thereby often fulfilling the prophecy that mainstream news is untrustworthy. And finally, the major social platforms resist paying for the news material they exploit for profit. In combination, how can these forces not cause a downward spiral in professional journalism, including the layoffs now being reported? And that’s before we truly see AI alter the landscape.

While it is impossible not to point to Trumpism as the paradigmatic—and potentially fatal—symptom of rampant conspiracy-mongering, the folly of democratizing information is shared across the political spectrum. The internet industry told the world that their platforms were the antidote to media conglomerates—the proverbial “gatekeepers,” who controlled, and even buried, the information to which people are entitled. And thus, Big Tech’s assault on copyright law often rode atop the half-baked slogan that “information wants to be free” in both senses—liberated and gratis. And everyone—nearly everyone—believed that bullshit.

Although copyright is commonly associated with creative and entertainment material, it was nonfiction works, including journalism, that were at the center of the constitutional framers’ attention when they drafted the “progress clause” in Article I. There’s a reason why that clause says, “to promote the progress of science,” and in one of my favorite papers about the adoption of copyright at the founding period, Professor Jane Ginsburg notes, “Petitions to Congress before enactment of the first copyright statute sought exclusive privileges for works overwhelmingly instructional in character.”

A century later, copyright protection would encompass a broad range of creative and performing arts, but at the outset, the framers understood that the Republic would fail in persistent darkness. Thus, the speech right, the press right, and copyright can be seen as working in concert toward the hope that future generations would have the “science” necessary to sustain the American experiment. Now, just over 230 years since the first Copyright Act and the Bill of Rights, I am hardly alone in wondering whether that “science” is lost, symbolized by the fourth estate shedding 500 jobs in January alone.

In 2021, Senator Klobuchar first introduced the Journalism Competition and Preservation Act (JCPA), which would provide a limited exemption to antitrust prohibitions against collective bargaining among news media organizations. Passage of the JCPA would enable news media companies to negotiate terms with giants like Meta, Google, et al. for licensing news content shared on those platforms, and Chabria cites a study from the University of Houston, which states that, with passage of the JCPA, the major platforms would owe news organizations between $11.9 billion and $13.9 billion per year. So, of course, the tech giants have used their lobbying power to block the bill.

Meanwhile, Big Tech continues to argue that they should not pay news organizations anything because their platforms “drive traffic” to the news channels. Artists will recognize this as the “exposure” rationale for piracy, and it takes some chutzpah to keep peddling this nonsense against a backdrop of layoffs and closings. It doesn’t take an economist to know that traffic alone does not pay for overhead and salaries—and that’s even if Google et al. actually increase traffic relative to pre-internet readership.

What we know for sure is that a democracy without a robust and free press is in danger of no longer remaining a democracy, and we know that news organizations have historically struggled to be financially sustainable. As the internet industry has done with music, motion pictures, literary works, etc., they sold the promise of access to news and information while siphoning the revenue that pays people to produce that material in the first place. And as we are witnessing in real-time, the vacuum is filled with charlatans, liars, cowards, and thieves. Thus, the proverbial “sunlight” promised by Big Tech is not a disinfectant, but a poorly made pesticide that animates the weeds and kills all the fruit.


Photo source: Mediaphotos

Fraud in Music Streaming on Legit Platforms

By now, many people who pay attention to artists’ rights have read the David Segal New York Times story published on January 13 about the amateur folk duo Bad Dog discovering their songs on major streaming platforms, but with different titles and attributed to a different creator. In what should be a surprise to nobody, it is easy to game the music streaming system and siphon from the revenue pool, even if you’ve never composed or recorded a song in your life. It’s a classic case of The Internet Giveth, and The Internet Taketh Away—because many DIY tools promoted to help new artists launch careers can be used by bad actors engaging in fraud.

“David Post and Craig Blackwell have been devoted amateurs for decades, and they’re long past dreams of tours and limos,” Segal begins. Post and Blackwell are, oddly enough, both D.C. attorneys—and cyberlaw and copyright law attorneys to boot. Although they were more interested in regaining control of their music than the revenue, their difficulties point to the fact that control is everything, especially if the artist does care about revenue.

The problem lies in the fact that anybody can create an account with a music publisher/distributor, rename and reattribute a bunch of pirated tracks, and then upload the songs to multiple platforms to hijack the revenue that belongs to the real artists. Based on the Segal article, it seems that not all publisher/distributors are equal when it comes to verifying authorship or ownership of the tracks. The article names Level as the service used in the Bad Dog incident, though I cannot comment on the specific anti-fraud efforts of these services.

Naturally, this scam won’t work with mega hits for a variety of reasons, but for a niche indie like Bad Dog—or more critically for the new artist who is trying to start a music career—songs that are relatively obscure make a perfect target. The scammer won’t earn much from a small heist of songs, but at scale, the dividends can obviously provide sufficient passive income to make the “effort” worthwhile. I recommend the NYT article for a full account, but two segments struck me as deserving of comment—one editorial, and one semi-mercenary.

Citing these segments out of order, first is a comment by David Post, referring to the evolution of the Digital Millennium Copyright Act (DMCA) and the appearance of Napster. “In 1997, I don’t think people were thinking about this automated operation that just sucks up unprotected material, rejiggers it to make it unfindable and uploads to platforms where they can start monetizing it. That wasn’t on anybody’s radar.” For context, Post alludes earlier in the article to his own copyright skepticism, which echoes views many would describe as “copyleft.” But this was on nobody’s radar?

Okay, maybe this exact method of scamming was not envisioned in 1997, but it hardly takes a leap of imagination to see the progress from illegal P2P to legal downloads in tandem with illegal downloads, followed by legal streaming in tandem with illegal streaming. In fact, it doesn’t take any imagination because copyright piracy for profit has been a reality for about 30 years, and there are few systems on the internet that cannot be gamed—especially if the legit platform operator lacks an incentive (i.e., is shielded from liability) to remove scammers. Also, the nature of Bad Dog’s problem did not suddenly appear in late 2023. For instance, I wrote about fake Bob Seger and music tracks owned by Spotify in 2017, which can be seen as a prelude to the kind of scam at work in this instance.

So, if Post is suggesting the DMCA needs overhaul to address workarounds to notice-and-takedown, I welcome him to the cause because professional creators have been shouting into a hurricane for at least 20 years about the near uselessness of the provision as a viable remedy to piracy. Given Post and Blackwell’s day jobs, I do wonder whether they only just discovered the issue the moment it affected their music, but in any case, the DMCA brings me to the other quote from the article that I want to highlight:

To retrieve their songs, Mr. Post and Mr. Blackwell sent out what are called takedown notices, or formal requests to remove pirated music, to a bunch of different sites. The band members used their SoundCloud page to demonstrate that their recordings predated all the uploads on the streaming platforms.

As stated, the DMCA takedown provision is middling at best. Segal reports that Amazon and YouTube removed the pirated tracks quickly, but Apple and Spotify did not. What struck me about the above paragraph, though, is the duo’s use of their SoundCloud page to prove priority and ownership of the work, which is a digital-age version of mailing a copy to oneself—a.k.a. the “poor man’s copyright,” which is meaningless as a mode of legal protection. That brings me to the slightly mercenary point I wanted to make: the musical artist in this same position would find it both easier, and possibly more effective, to send the Copyright Office registration numbers associated with the works that should be removed by DMCA takedown.

One aspect of a registration is that, by operation of law, it is prima facie evidence of ownership. Walk into federal court with those registration certificates, and the burden is on the opposite party to prove that you’re not the owner of the work. In fact, without registration, you can’t walk into a court with an infringement claim, but with regard to a DMCA takedown—especially sent to one of the major platforms—the registration is literally a government seal establishing ownership of the work. It doesn’t guarantee that every platform will expeditiously comply with a takedown request, but it does give them a good reason to do so.

Further (and this is the mercenary part), because I am a passionate advocate for the rights of independent creators, I highlight this incident as the co-founder of a software business called RightsClick. A suite of tools designed to make copyright management easy for the entrepreneurial creator, the app facilitates fast, simple registration that simultaneously builds a database of titles with their associated registration numbers. Thus, the indie musician in the same position as Bad Dog could look up those numbers in about two minutes and include them in a DMCA notice. Again, not a guarantee of compliance by the platform, but a stronger incentive. Including registration numbers is, after all, what the attorneys prefer to do when they send takedown notices.

I hope readers will forgive the plug for RightsClick in this instance. I generally keep IOM commentary and that venture separate, but this story seemed like a good moment to don both hats. Regardless, the point worth emphasizing is that indie artists should register their work with the Copyright Office. No creator should ever be required to prove they own the work requested for takedown—the provision is already subject to penalties of perjury—but to the extent the platforms stall or play games in this regard, a registration number is a lot better than any other evidence one might otherwise provide.


 

UPDATE/CORRECTION:  Thanks to a representative of Bad Dog, who wrote to tell me that the duo did file a registration application for the album The Jukebox of Regret very soon after discovering the music had been pirated. This same source also states that the music was pirated within one week of publication to SoundCloud, hence the immediate use of that information to show priority and ownership. Based on this information, I wish to correct any implication that Post and Blackwell completely ignored copyright registration, though I would encourage indie artists to register before distributing work to the market. This story proves how quickly your work is likely to be pirated.