America’s AI Action Plan: Strength Through FAFO

With an introduction that combines the quasi-erudition of Big Tech utopianism with just enough dipshit to sound like Donald Trump, the White House unveiled an action plan on artificial intelligence that is part magical thinking and part policy statement. It all adds up to one bottom line: “Let Big Tech do what it wants, and things will be great.”

In fairness, letting Big Tech do what it wants has a solid bipartisan tradition in U.S. policy, but the naivete of the late 1990s is ancient history. The public and Members of Congress are now well versed in the many negative consequences of the laissez-faire policy that allowed Web 2.0 to run amok, liability-free. And yet, despite hearing after hearing with Senators proclaiming outrage at the tech giants, we are poised to approach the development of artificial intelligence with the same doe-eyed, babes-in-the-woods innocence of people who cannot learn from experience.

Regarding most of the action plan’s content, the language describing each initiative—from science and medicine to national defense—is innocuous and ultimately irrelevant. No matter what is proclaimed as a goal, the question of whether AI development can benefit the American people comes down to guardrails, oversight, or some combination of the two. Unfortunately, we have neither.

The tech companies have repeatedly demonstrated that they possess no ethics that would prioritize public interest over profit; the Trump administration has no credibility about anything; and Congress has largely been reduced to performance art, promising for the last eight years that it will finally “rein in tech.” Thus, the AI action plan, like so many plans of the current administration, boils down to fuck around and find out (FAFO).

Unlike major transformative undertakings of the past—the action plan cites the space race—no prior technology has had AI’s potential to crawl into every aspect of social, economic, and political life. Data runs the world, which means those who control the data run the world. Hence, the promise of what AI could do for society rests entirely on the guardrails and oversight that the industry rejects out of hand and the Trump administration believes are unnecessary.

As one obvious example, a whole section of the plan discusses “empowering American workers,” which would be boilerplate for any administration except that most predictions about AI include job loss at unprecedented scale. To navigate this fresh terrain—either to mitigate job loss or to address the consequences of job loss at scale—requires leadership that is intelligent and gives a damn about people. But this administration is not intelligent, and it does not care about anyone, including the people who voted for it.

Consider one example from a friend who is sanguine about AI’s potential and who recently mentioned the likelihood that an AI will soon be much better at reading a medical scan or X-ray than a human radiologist. That prediction, and the penumbra of medical advancements it implies, is entirely reasonable, but it also invites a cascade of ethical considerations that the party of Trump would sweep into the dustbin called “overregulation.”

So, sure, in theory we could be the beneficiaries of faster, more accurate, and cheaper diagnostics with the potential to alleviate the bottlenecks seen every day in my local ER. But ordinary Americans will not truly benefit from these and other promised advancements without public oversight, and I imagine the party that wants to kick millions of Americans off the health insurance rolls and scale back essential services doesn’t give a flying fuck.

As the mantra about Web 2.0 preaches, “If the service is free, then you’re the product.” That was and remains a dangerous consequence of social media platforms. But what happens when the same principle applies to access to medical or other critical services wholly controlled by one or two AI companies—say, Meta or Amazon—run by leaders with zero morals? Maybe the out-of-pocket price comes down initially and revitalizes the utopian “age of abundance” rhetoric from tech’s cheerleaders, but the real implication of a handful of companies providing essential services is technological feudalism. And that prospect undermines the animating imperative of the AI action plan—i.e., to “beat China” in this new cold war.

But at present, we’re just too stupid to beat China at this game because we’ve reached a crisis state in which we have no fucking idea who we are as a nation anymore. One section of dipshittery in the action plan states that AI will “protect speech and American values” and that these ends will be achieved, in part, by “eliminat[ing] references to misinformation, Diversity, Equity, Inclusion, and climate change.” Right. Fuck the best and brightest for the sake of the vested whitest. Because that’s how we won the space race, right? Without mathematicians Katherine Goble Johnson, Dorothy Vaughan, and Mary Jackson.

Meanwhile, the White House built on misinformation, sedition, hate, crime, grift, ignorance, and authoritarian tactics has a plan to “protect American values” by behaving exactly like China and other anti-democratic societies. I’d call that burning down the village to save it, but especially with AI, it’s more like erecting a Potemkin Village and calling it America. Web 2.0 has been a multipronged disaster because congressional leaders on both sides of the aisle naively decided to let the experiment run for years until finally expressing regret circa 2017. Now, we are poised to double down on the errors of the 1990s and FAFO with a technology fraught with uncertainties for the American people. Beat China? We’re not smart enough to meet the moment.


Photo by Yacobchuck

D.C. Event Shines Light on Advertisers Supporting Social Media Harm to Children

When I was a kid in the 1970s and my father was a principal in an ad agency, they had the Ameritone paint account, and I remember him explaining that they were not allowed to show paint and food together in a commercial lest a child viewer be confused into thinking that paint might be edible. By contrast, a social media platform today is free to mix child-focused material with illegal drug offers and numerous other conduits leading to serious harm or death. And it’s all swept under the rug of innovation and commerce.

Algorithms kill kids. Let’s just call it like it is at this point and stop pussyfooting around the rhetoric that social media platforms are neutral conduits for “information.” Never mind that information itself is almost a lost cause on social media; the effects of algorithmic manipulation—even simple recommendations—can be disastrous for children and teens, including depression, anxiety, suicide, and accidental death. And that was before AI.

As reported last September, the accidental death of Nylah Anderson, age 10, was the result of TikTok’s algorithm prompting her to try the “blackout challenge,” which entails making a “game” of self-asphyxiation. In the case against TikTok for its role in leading Anderson to that challenge, the Third Circuit Court of Appeals articulated one of the few rational readings of the Section 230 liability shield. The court stated:

TikTok reads § 230…to permit casual indifference to the death of a ten-year-old girl. It is a position that has become popular among a host of purveyors of pornography, self-mutilation, and exploitation, one that smuggles constitutional conceptions of a “free trade in ideas” into a digital “cauldron of illicit loves” that leap and boil with no oversight, no accountability, no remedy.

Brought to You by Your Favorite Brands

Add to that cauldron the major brands whose advertising dollars unconditionally support social platforms. That complicity was the focus of this morning’s event held at the National Press Club. “We saw a great turnout,” says cyber-analyst Eric Feinberg, who has been tracking ad-supported toxic social media content since 2013. More than 40 attendees overflowed the 40-seat room for the kick-off event, designed to focus the attention of major brands on the fact that their ad dollars finance platform operations that cause serious harm and death to children and teens.

The event was organized and hosted by parents who have been working to turn personal tragedy into social change through both public policy and private action. For instance, one mother who spoke was Debra Schmill, who started the Becca Schmill Foundation after losing her daughter Rebecca to fentanyl poisoning from pills obtained with the “help” of social media. Becca’s death was the culmination of a cascade of terrible events intersecting social platforms—beginning with a rape at age 15, followed by cyberbullying and a consequent battle with depression that led to the fatal pills obtained online. Deb Schmill is one of many parents determined to prevent other children and families from suffering similar fates.

“Women make 70% to 80% of all purchasing decisions,” Feinberg explained to me by phone after the event, “and these mothers who spoke today recognize that mothers just like them are funding social media harm to their own children.” Posting his daily mantra that “Brands are buying while kids are dying,” Feinberg has recently taken swings at McDonald’s for its crossover promotion with Snapchat…

He makes a solid point. If a major brand overtly promoted the opportunity for kids to get closer to the local drug dealer, pimp, or sexual predator, parents would be outraged. But because social media is an insidious free-for-all, inhabited by good and bad actors, the worst vices are either overlooked or accepted as the cost of obtaining the virtues. That is a false choice. Multiple defectors from these companies have made clear that the platforms bend their own rules and tweak their algorithms to promote anything that drives “engagement,” without regard to the consequences. And the platforms assume that mainstream advertisers will keep paying without condition because they own all that engagement.

But as Meta whistleblower Sarah Wynn-Williams describes in her book Careless People, that company made an affirmative decision to target known teenage psychological vulnerabilities (e.g., body image) to promote certain products. This abuse of the technology is already unethical—a far cry from not showing paint and food on the same screen—and advertisers who knowingly exploit the “opportunity” should be held accountable by consumers. Meanwhile, as the organizers of today’s event strove to emphasize, the same algorithm that exploits a teen’s vulnerabilities will just as readily push dangerous drugs toward that child as promote a makeup product or gym membership.

By my lights, asking the advertisers to partner with their own consumers—the parents who buy their products—to pressure the platforms to adopt better practices is the very least they can do. In just a couple of months, it will be time for the ~$40 billion Back-to-School season, and as brands vie for the K-12 parents who make those purchases, they owe it to those families to pressure the digital-age media companies to stop killing kids.

Major Film Studios File Copyright Suit Against Midjourney

“Midjourney is the quintessential copyright free-rider and a bottomless pit of plagiarism.”

Well, there it is. If you had been wondering whether and when the major studios would file a copyright infringement suit against a developer of generative AI (GAI), it finally happened on June 11. Disney and its subsidiaries, along with Universal Studios, filed a complaint against developer Midjourney alleging copyright infringement of many of the studios’ most famous and valuable intellectual properties. In the broader AI saga, the lawsuit is a big deal, though what it means for creators in general is hard to say. The studios imply that their case is a slam dunk, and it probably is. As the complaint states:

This case is not a “close call” under well-settled copyright law. Midjourney set up a lucrative commercial service by making countless, unauthorized copies of Plaintiffs’ copyrighted works, and now sells subscriptions to consumers so that they can view and download copies and derivatives of Plaintiffs’ valuable copyrighted characters. That is textbook copyright infringement.

The 110-page complaint hardly expounds upon legal arguments; instead, it presents substantial evidence that Midjourney willfully violates the plaintiffs’ reproduction, derivative works, public display, and public distribution rights. Both as a matter of input (model training) and output (prompted materials), the studios compellingly show that their highly valuable works went into the system and that those same works come out of the system with simple prompting by users. Outputs also include expressive details like lighting and production design copied from the motion picture environments associated with famous characters.

For example, the user doesn’t just produce Darth Vader but can obtain a rendering of Vader on a Star Wars ship with lighting and production values that result in a highly detailed unlicensed derivative. The studios also allege that Midjourney itself publicly displays these outputs as a means of promoting its product’s capabilities. And finally, the studios anticipate that Midjourney will claim not to be the direct infringer and, therefore, allege that the developer is liable for secondary copyright infringement by facilitating mass infringement among its customers.

Aaron Moss, on his blog Copyright Lately, contends this case stands out among the roughly 40 active lawsuits against AI developers because the studios present so much compelling visual evidence of mass copying of protected works. While other GAI cases, like those involving Suno, Udio, and the New York Times, also present substantial evidence of infringing outputs, I think Moss is right that in the visual arts cases, plaintiffs have relied more on assumed infringement by means of model training than on obviously infringing outputs. In Disney et al., any ordinary observer can see that the characters and worlds produced are precise copies of iconic IP belonging to the studios.

Personally, I don’t see how Midjourney defends itself and, therefore, assume this case will be settled rather than fully litigated. Whatever comes of that settlement, motion picture studios generally have an interest in the development of GAI, which raises questions about independent creators and workers in the industry, as well as the future of filmmaking itself.

GAI and the Future of Filmmaking

Historically, both the independent creator and the “line worker” in the industry benefit from the copyright enforcement actions of institutional creators. For instance, when the studios go after pirate sites, the indie filmmaker benefits from a legal action she cannot afford to take herself, and the many crew professionals have their livelihoods protected. But with GAI, not only are the studios not seeking to shut down the developers, but they are presumably interested in the prospect of using the technology to produce motion pictures with fewer workers. This longstanding ambition of various film executives may not be attainable, but many professionals are certain that it will be attempted.

In conversations with friends in the industry, opinions vary; some find that GAI tools present intriguing opportunities for independent creators to produce new kinds of work at lower cost and greater speed. At the same time, anxieties are high that GAI will mean job loss in every department of a typical production, including writers, directors, and actors.

There is no question that many motion picture jobs are threatened by GAI, but concerns that the future of filmmaking will be reduced to a few executives overseeing a roomful of programmers may be unwarranted. While GAI motion visuals are impressive and improving rapidly, the technology is also a shiny new toy that raises various cultural, financial, legal, and technological questions yet to be answered. And that’s just for film production.

My long-held view is that it was Star Wars (1977) that short-circuited the era known as the American New Wave in cinema because suddenly the prospect of the mega-franchise was much bigger than the movie itself. That era began when the end of the studio contract system allowed greater creative independence, and a group of young directors, including George Lucas, produced what many consider a brief golden age of American motion pictures—films that were simultaneously box office hits, critical successes, and award winners.

When Star Wars demonstrated the potential of the film as franchise, the concurrent influx of MBA types into Hollywood amplified a new tension between film as art and film as product—i.e., the tension between filmmaker and film executive. Within that tug-of-war, it is only natural for the “suits” to want to produce as much product as they can with as little labor as possible—let alone expensive labor with opinions! Now, GAI theoretically presents that opportunity, though maybe not to the extent that many seem to assume, whether pessimistically or optimistically.

Usually, when the “suits” overreach with their analytics and try to predict what the market wants, the results are unimpressive. A theme I have repeated on this blog many times is that audiences want works that surprise them, not works that have been analyzed to death—and producing fresh work takes artists, not algorithms. This rule, if it is a rule, implies a boundary that rejects the starkest predictions that major motion pictures will soon be made by three guys and a computer.

Potential Limits of GAI in Motion Pictures

Clearly, GAI output will continue to improve, and subtle renderings of naturalism will become attainable, which does imply that a whole motion picture with “human” characters could be produced without a camera or actor involved. That this describes the future of all cinema seems unlikely, though it is notable that among the evidence presented in Disney et al., only one image depicts a natural person (Mark Hamill as Luke Skywalker), while every other infringing image shows either a masked character or one originally created with pen and ink or computer illustration. Thus, the GAI’s ability to render these particular derivative works implies precisely the kind of franchise material that could be produced without anyone building a set, rigging lights, or pointing a camera at an actor. Still, there are limits.

For instance, the current Marvel franchise was primed by the first Iron Man (2008), the success of which owes a LOT to the performance of Robert Downey, Jr. This implies a caveat: films made without human artists could become what we might soon call the slop-flops of this dawning GAI era. Still, without a crystal ball, or room here to explore all the implications of eliminating one type of creative professional or another, I will note that a limiting factor on the overuse of GAI may be copyright itself.

Because the human authorship requirement is, and will likely remain, a bedrock principle of copyright protection, GAI enables the production of a very large volume of unprotectable expression. Additionally, if two creators are using the same product, the likelihood of substantially similar, but independently created,[1] works may increase as well. In this light, creators large and small might want to be wary of overreliance on GAI material that may lack copyright protection. This concern would apply with greater force to newer characters, interpretations, or imagined worlds, if authorship in these works could be challenged on the basis that the expression is the result of machine interpretation of the idea rather than human artists expressing the idea.

Personally, I would love to see the GAI genie stuffed back into its bottle because I believe that on balance the technology produces more social harms than benefits—and because the Techbros have zero credibility when it comes to ethical development or application of any of their products. But knowing that genie’s bottle has been shattered, I recognize how the technology can be used as a tool for new creative expression and am hopeful that lawsuits like this one at least push the application of GAI in that direction.


[1] Works that are independently created are, by definition, non-infringing even if they are substantially similar to other works.