Child Safety May Lead the Charge on Platform Accountability


In my last post responding to the Chamber of Progress campaign for broad liability protections for generative AI developers, I noted that lawmakers are tired of blanket immunity for Big Tech. If the current legislative landscape is any indication, we may finally be at the leading edge of genuine accountability for the myriad harms caused by social platforms operating under the protection of 90s-era immunity regimes.

Yesterday, New York Governor Kathy Hochul signed into law the SAFE for Kids Act, designed to prohibit “addictive” social media algorithms from targeting minors. The legislation treats social media as a consumer product with defective qualities—designed to be addictive—that cause poor physical and mental health outcomes for young people. The law defines an “addictive feed” as follows:

“Addictive feed” shall mean a website, online service, online application, or mobile application, or a portion thereof, in which multiple pieces of media generated or shared by users of a website, online service, online application, or mobile application, either concurrently or sequentially, are recommended, selected, or prioritized for display to a user based, in whole or in part, on information associated with the user or the user’s device…” [emphasis added]

The ironically named NetChoice came out swinging on X, calling the New York law an unconstitutional violation of free speech rights—and of course they did. But even if Big Tech mounts that legal challenge, I wouldn’t bet on it succeeding. If the argument is that the user of a platform has a First Amendment right to access material which may otherwise be restricted by this new law, that claim should be mooted by the platform’s act of “recommending” or “prioritizing” material in the first place.

As users, we see what the algorithms determine we should see based on data that can be learned about us, and this limitation on user choice mocks the assertion that social platforms are “open” forums for “speech.” For this and other reasons, the state’s narrowly tailored law with the purpose of protecting minors from the harms caused by the addictive (i.e., defective) qualities of a social media product should not be found offensive to the First Amendment.

New York attorney Carrie Goldberg represents a wide range of clients who have been harmed through online platforms—from victims of sexual harassment and assault to kids who obtained sodium nitrite on Amazon for the purpose of committing suicide. Referring to herself as a proud co-founder (“Mama”) of the New York SAFE for Kids Act, Goldberg has long argued that online platforms may be held accountable through product liability regimes. In a recent tweet, she notes that it was her failed lawsuit against Grindr on behalf of Matthew Herrick that paved the way for this new legislation:

Carrie Goldberg tweet

Meanwhile on Capitol Hill, legislation with a similar focus may be ready to pass. The Kids Online Safety Act (KOSA) likewise aims to curb platform addiction among minors and mandates changes in product design to mitigate a range of well-documented harms—from bullying and harassment to unwanted contact by adults seeking to exploit or abuse minors. Sponsored by Senator Blumenthal, KOSA has strong bipartisan and public support. Further, consistent with Goldberg’s “defective product” argument, U.S. Surgeon General Dr. Vivek Murthy proposes a warning label approach to social media, stating, “The mental health crisis among young people is an emergency…”

Frankly, I am not so sanguine about the premise that adults fare much better when it comes to social media use and self-mitigating the hazards of the “feed,” but passing new laws to address harms to children is a good place to start. Assuming KOSA does pass—and there are many other bills in motion—it may be time to declare that Big Tech’s free ride is finally over. Nobody is buying the “progress” and “free speech” rhetoric anymore, which is good because it was never true.


The Campaign to Defend Generative AI


I have not written steadily about AI and copyright because, frankly, it’s exhausting. Not quite as exhausting as watching the state of the Republic overall, but almost as relentlessly incoherent and repetitive. For instance, Winston Cho for the Hollywood Reporter describes a PR and lobbying campaign by the tech coalition Chamber of Progress to defend the importance of generative AI (GAI). The article quotes founder and CEO Adam Kovacevich thus:  “Gen AI is a net plus for creativity overall. It’s expanding access to creative tools for more and more people and bypassing a lot of the traditional gatekeepers.”

That GAI may yield some beneficial tools for creators is plausible, but the whole “access” and “gatekeepers” rhetoric is a misguided anachronism from a group calling itself the Chamber of Progress. Perhaps “Confederacy of Tech Overlords” was too on the nose, but the generalized argument that GAI represents a “democratic” shift away from gatekeepers stands on the rubble of experiments that have already failed. I doubt there is a professional creator left who hasn’t figured out that Big Tech’s promise to liberate them from traditional gatekeepers is like a human trafficker promising his next victim a job in a foreign country. Whatever was imperfect about the old models, the new models are more exploitative and hazardous for the average creator.

More precisely, while the alleged “liberation” from older distribution channels might have seemed attractive, GAI is about production, and I am confused as to who the “gatekeepers” would be on the production side of the equation. To the extent, say, Midjourney might enable me to illustrate or paint without any drafting or painting skills, the “gatekeeper” is who exactly? Nature failing to gift me with those skills? Or if we think big, and I can make a whole motion picture without ever turning on a camera, I still fail to see who the “gatekeeper” is in the overreaching promise from the tech industry.

Despite how cutting-edge and “essential” GAI is supposed to be, Big Tech has nothing fresh to say in its advocacy. The theme of “democratization” is the same weather-beaten argument they’ve been flogging for years, one that has proven disastrous for information and the state of real democracy—and which GAI can only make worse. Nevertheless, the Chamber of Progress campaign, as reported by Cho, seeks to promote a sweeping policy that AI developers should be broadly shielded from liability, including copyright infringement claims.

The question of copyright infringement for ingesting works for machine learning (ML) is currently at the heart of several lawsuits. I’ve lost track of them all, but arguably the most solid claim to date is New York Times v. OpenAI et al. because the evidence of copying (i.e., that what went into the model came out of the model) is so compelling. On the other hand, it is worth watching those cases where “reproduction” is less evident and, therefore, where the question may be more thoroughly addressed as to whether ML is a purpose that favors fair use of protected works.

As we have seen in defense of social platforms, Big Tech will spray the blogosphere with the term “fair use,” and copyright antagonists (mainly in academia) will echo the broad claim that of course ML is fair use. Notwithstanding the bugaboo that the fair use doctrine rejects the notion of a general exemption, I would argue that the case law points the other way, including the Supreme Court decision in Andy Warhol Foundation v. Lynn Goldsmith. To the limited extent that opinion addresses the ML question at all, its reining in of the “transformativeness” test is more likely to disfavor the AI developers. Big Tech’s claim is that GAI is broadly “transformative” as a technological accomplishment, but Warhol and other decisions reject such a sweeping interpretation of that aspect of fair use factor one.

Further, as argued in this post, I remain unconvinced that GAI necessarily advances the purpose of copyright to promote new authorship as a matter of doctrine. For instance, if a given work created by GAI cannot be protected by copyright, then the material is, by definition, not a work of “authorship.” As such, this purpose should doom a fair use defense, in my view. Regardless, Big Tech will not be satisfied with the outcomes of any lawsuits, even if the developers win some. What they want is blanket immunity for infringement liability and an affirmation that GAI is truly as important as they say it is. That’s why this paragraph in the Hollywood Reporter story caught my attention:

In comments to the Copyright Office, which has been exploring questions surrounding the intersection of intellectual property and AI, Chamber of Progress argued that Section 230 – Big Tech’s favorite legal shield – should be expanded to immunize AI companies from some infringement claims.

Why highlight that? Because the absence of legal foundation is telling. Not only does Title 47 Section 230 have nothing to do with copyright infringement, but both that law and its copyright cousin, Title 17 Section 512, address the subject of users uploading material to platforms. Neither law says anything about scraping the web to feed material into an AI model for the purpose of ML. Nevertheless, it is clear from reading the actual comments by Chamber of Progress to the Copyright Office that Big Tech recommends policymakers take lessons from both statutes to carve out new liability shields to support the advancement of AI.

Despite the fact that neither §512 nor §230 has proven effective in limiting copyright infringement or dangerously harmful material online, the Chamber of Progress comments reprise Big Tech’s unfounded talking points regarding both statutes. Written by counsel Jess Miers, the comments repeat the false allegation that §512 fosters rampant, erroneous takedowns and also argue that because of §230, “most UGC services go to great lengths to proactively clean-up awful content and provide a safe and trustworthy environment for their users.” Not only will my friends and colleagues fighting Image-Based Sexual Abuse, online hate, and scams be very surprised to learn that, but so will Congress.

One of the scant points of agreement on Capitol Hill these days is that lawmakers have grown weary of liability shields for Big Tech, which has done a poor job of mitigating the worst harms facilitated by their platforms. Section 230 is so ripe for amendment that I’m surprised the Chamber of Progress invoked it, let alone in comments to the Copyright Office which only deals with, y’know, copyright law. More broadly, though, when GAI implies myriad harms beyond copyright infringement, the last thing Congress should do is grant Big Tech more latitude to do whatever it wants in the name of “progress.”  We tried that approach. It sucks.

NYS Assembly Led Down the Primrose Path on eBooks Again


In December 2021, New York Governor Hochul recognized that she must veto a bill that would have prescribed the manner in which publishers may provide eBooks to public libraries. It isn’t necessary to rehash the details of that legislation—I wrote several posts about eBook bills—but only to restate the reason for the veto: the law was unconstitutional. Why? Because state laws that propose to dictate terms for making in-copyright works available, even to libraries, are preempted by federal law.

Nevertheless, Assemblyman Angelo Santabarbara has introduced a new bill (A10544) that, although its mechanisms differ from the 2021 bill, is still unconstitutional. In fact, the operative part of the bill—which, for instance, prohibits digital rights management (DRM) technology—would leave a library free to make eBooks available in any manner it sees fit, without limits of any kind. This plainly violates the Copyright Act. Even if the purpose of the proposal were well-founded in service to the public—and it is not—the states are simply not permitted to pass their own laws governing the terms under which copyright owners may distribute works to the market.

In addition to Gov. Hochul’s clear-eyed veto in 2021, related eBook bills have been proposed, litigated, and shot down in several states, raising the question of why lawmakers seem determined to die on this meaningless hill. As discussed in this post examining the mid-sized library system serving my region in New York, there is no evidence suggesting that the public is underserved or that the current licensing regimes are so onerous as to harm the operation of libraries. Frankly, even if licensing were onerous, these laws would still be unconstitutional, but the combination of federal preemption and pointlessness does make one wonder—but not really.

These eBook bills are all variations on the same thematic effort by the same lobbying groups seeking to push an anti-copyright agenda using the Trojan Horse of the public library. Copyright antagonists couldn’t prove DRM, governed under DMCA Section 1201, was unconstitutional, so they try chipping away at the principle through state legislatures, masked beneath the white hats of institutions we all love. And indeed, because I do love libraries, I continue to hope that they will stop running interference for organizations that have neither libraries’ nor readers’ nor certainly authors’ best interests at heart. If nothing else, continuing to introduce bills that run afoul of Article I Section 8 is a waste of everybody’s time.


Photo by: vasiliybudarin