As AI Moves Fast and U.S. Policy Flounders, Will Organizations Look Abroad for Data Security?


Last week’s firing of the head of the National Security Agency and U.S. Cyber Command, along with his deputies, is one more reason to conclude that the United States is not led by serious people. As the administration waves off the implications of Signalgate and then fires Four-Star General Timothy D. Haugh et al. on the reported basis that Laura Loomer told Trump they are “disloyal,” any common-sense observer will justifiably doubt whether national security is a priority for this administration. Concurrently, one wonders whether the administration’s security clumsiness, combined with its deepening relationship with U.S. Big Tech leaders, will foster anxieties over data security as organizations in every sector develop new AI models that will be tomorrow’s attack vectors.

While U.S. Big Tech praised Trump’s revocation of the Biden EO on AI as an end to regulation, the move could erode confidence for many organizations that need to develop AI in environments provided by domestic suppliers of confidential computing services. Although the U.S. remains a leader in cybersecurity, Americans are targeted by cyberattacks more than any other country, and rescinding the Biden EO did not reverse any regulation. On the contrary, exacerbating the U.S. history of laissez-faire cyber policy, Trump has been a direct beneficiary of data abuse and micro-targeted misinformation; and more than half of all citizens likely assume that our private data is not only insecure, but that the current administration would not scruple to exploit it for the most draconian purposes.

For my recent post about Section 230 reform, I spoke with Peter DeMeo, Chief Product Officer of Phoenix Technologies AG in Switzerland, about agentic AI as both opportunity and threat. Not yet fully realized, the principle is that an AI agent can act autonomously to improve or maintain a given system. “But you want to keep the agents in a good place,” DeMeo says. For instance, he describes a Swiss hospital group where the IT infrastructure crashed overnight, but the staff found the agent had fixed the problem and kept operations running. This kind of positive result, however, should not mask the fact that AI agents are new attack vectors. DeMeo explains…

Imagine a foreign adversary infiltrating a hospital’s network through a sophisticated phishing attack, poisoning the AI agent’s data and turning it malicious. Unaware of the compromise, the IT team deploys these sleeper agents into a trusted execution environment—a secure enclave, where they can operate autonomously. From within this stronghold, the malicious agents launch a next-generation ransomware attack, encrypting critical system data. Surgeons and medical staff are locked out, unable to access patient histories, scans, and essential systems—crippling hospital operations and endangering lives.

Is the U.S. a Robust Data Security Environment?

America’s data security landscape comprises a patchwork of federal law, state law, and what might be fairly described as an honor system among many major providers of confidential computing services. U.S. policy (i.e., let Big Tech do what it wants) combined with “operational assurance” (i.e., trust the provider to do what it says) may not provide the kind of confidence various organizations demand as they develop and deploy agentic AI. And that was before DOGE’s questionable access to, and haphazard handling of, sensitive information—or before Trump fired the top cybersecurity official without cause.

Meanwhile, a key indicator to follow in this context will likely be the insurance industry. For instance, Chubb, a major provider of cyber insurance, released its first Navigating the Cyber Claims Landscape report early this year. The report shows ransomware incidents increasing in the U.S. while they are declining outside the U.S., and it explicitly states that “A zero trust security model is essential to maintain controls.”

If organizations look outside the U.S. for confidential computing, Switzerland could emerge as a hub for the level of data security needed to confront the vulnerabilities inherent to agentic AI. For instance, Phoenix’s business model combines decades of confidential computing experience, compliance with Switzerland’s stringent data protection laws, and pricing tiers that make confidential computing accessible for small and mid-size organizations. Rather than “operational assurance,” as Chief Technical Officer Angel Nunez Mencias explains, Phoenix provides “technical assurance,” meaning that only the customer holds the encryption key to their own data. There is no “back door,” and it would not be possible to make a customer’s data available to a third party—not even with a warrant issued under the U.S. Cloud Act.
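The “technical assurance” model described here can be illustrated with a toy sketch: the customer encrypts data locally, and the provider only ever stores ciphertext it cannot open. This is not Phoenix’s implementation; the one-time-pad XOR below is a dependency-free stand-in for a real authenticated cipher such as AES-GCM, and every name in the sketch is hypothetical.

```python
import secrets

def customer_encrypt(plaintext: bytes) -> tuple[bytes, bytes]:
    """Customer-side encryption: the key is generated locally and
    never leaves the customer's control."""
    key = secrets.token_bytes(len(plaintext))  # random pad, same length
    ciphertext = bytes(p ^ k for p, k in zip(plaintext, key))
    return key, ciphertext

def provider_store(ciphertext: bytes) -> bytes:
    """All the provider ever holds: ciphertext with no key,
    hence nothing a 'back door' or warrant could usefully open."""
    return ciphertext

def customer_decrypt(key: bytes, ciphertext: bytes) -> bytes:
    """Only the key holder can recover the plaintext."""
    return bytes(c ^ k for c, k in zip(ciphertext, key))

record = b"patient history #4711"
key, ct = customer_encrypt(record)
stored = provider_store(ct)
assert stored != record                         # provider never sees plaintext
assert customer_decrypt(key, stored) == record  # customer round-trips cleanly
```

The design point is the trust boundary, not the cipher: because key generation and decryption both happen on the customer’s side, “technical assurance” replaces any need to trust the provider’s promises.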

In compliance with the Swiss Federal Act on Data Protection (FADP), not only must the customer approve every change deployed, but statutory provisions include strict civil, and even criminal, liabilities for mishandling certain data—especially sensitive information about natural persons. Asked whether this approach to security might inadvertently provide opportunity for cybercriminals or terrorist organizations, Mencias notes, “Confidential computing is not a black box. Just as the customer must approve every change, we approve the software deployed in our environment.”

IT professionals at organizations in the U.S. and abroad will decide whether providers like Phoenix offer a more secure environment for advancements in agentic AI computing, but the value proposition DeMeo describes provokes questions that were difficult even before the current U.S. administration began breaking things. Now that it shall be the policy of the United States to cede the field of excellence in a wide range of disciplines, it is fair to ask whether various organizations will look elsewhere for data security.

DC Circuit Affirms Human Authorship Required for Copyright


In a decision that is unsurprising but important, the DC Circuit Court of Appeals affirmed that “authors,” as defined in the U.S. Copyright Act, are human beings and not machines that can autonomously generate works. I say unsurprising because nothing in history or statute should have led the court to any other conclusion, and indeed the opinion can be summed up thus: “…the text of multiple provisions of the statute indicates that authors must be humans, not machines.”

Dr. Thaler, a computer scientist, developed a generative AI (GAI) he calls Creativity Machine, which autonomously generated a visual work that he sought to register with the U.S. Copyright Office. Thaler disclosed that the work was wholly created by the machine, and on the basis that copyright can only attach to works made by humans, the Office rejected the application. Thaler sued, arguing that the Office was asserting a policy not found in the statute or the constitutional foundation for copyright. He lost in the district court, and the appellate court has now affirmed that ruling. (See earlier posts.)

Specifically, the court cites several operative provisions of the Copyright Act that would be nonsensical if machines were “authors.” “Machines do not have property, traditional human lifespans, family members, domiciles, nationalities, mentes reae, or signatures,” the opinion states. This summary refers to the right to own any kind of property, duration of copyrights, inheritance of copyrights, jurisdictional enforcement of copyrights, incentive to create works, and the right and authority to transfer copyrights.

None of those rights or capabilities apply to non-humans, and non-humans do not have standing in court to adjudicate conflicts over such matters. Consequently, U.S. copyright law would unravel if machines were “authors,” which would, notably, moot Dr. Thaler’s claim that his GAI called Creativity Machine is legally the “author” of the visual work he sought to protect. “Numerous Copyright Act provisions both identify authors as human beings and define ‘machines’ as tools used by humans in the creative process rather than as creators themselves,” the opinion states. Imagine the opposite conclusion and Creativity Machine could be named as a plaintiff in an infringement suit. Chaos ensues, and not just for copyright.

As to Dr. Thaler’s theory that under the work made for hire (WMFH) doctrine, he could claim copyright in the work generated by the AI he owns, the court is clear that this misreads the principle. In plain terms, under WMFH, rights transferred to the hiring party must exist in the first place, but those rights can only be vested in a human being upon creation/fixation of a work. No human author means there are no rights to transfer to a hiring party.

Although the Thaler decision is not surprising, it is important because it reaffirms a core doctrine as both case law and policy evolve in response to GAI. By affirming that wholly machine-generated expression is not protected, the ruling solidifies the framework in which courts do what they often do in copyright cases—namely, separate protected expression from unprotected elements in a given work.

The more compelling and trickier question as to what is protected and not protected when an “author” uses a generative “machine” as a tool is now active in the District Court for the District of Colorado. As discussed in this post, artist Jason Allen presents a plausible argument that he used Midjourney as a tool to create and fix his mental conception of a visual work of expression. Arguably, Allen v. Perlmutter will be the first case to write early guidance for the use of GAI to create works that may be protected. As such, that outcome just might be surprising and important.


Photo by: Designer491

Too Big to Care: Should Online Platforms Remain Unconditionally Immunized by Section 230?


In the current political climate, it is important to clarify that no sensible Section 230 reformer proposes abolishing the statute or endorses threats to revoke the law on the basis of inapt and inaccurate allegations of “content bias.” Section 230 is not a content neutrality law, and statements to the contrary are political theater.


Whether online platforms are too big to care is both a cultural and a legal question. Regarding the latter, “care” refers to a duty of care as applied in common law torts. When I opined on Bluesky that the “Good Samaritan” principle articulated in Section 230 of the Communications Decency Act (CDA) implies that interactive computer service providers (ICSPs) owe a duty of care to those who use their services, this drew a flurry of both civil and not-so-civil critiques about my lack of tort law knowledge. But I am grateful for those exchanges because a crash course in torts offers a practical context for considering §230 reform and that tedious, rhetorical question—What can site operators do to mitigate abuse of their platforms by users?

ICSPs can do a lot. The capacity of the major platforms to micro-target individuals with “information” and alter the course of world events is a matter of record, but even platforms smaller than Meta and Google can accomplish more than they claim while doing less than they could to mitigate harm stemming from well-known abuses of their services. The example I will use in this post, the dating site Grindr, has a market cap of about $3bn, and if it can’t do what I describe below, it’s not because it can’t afford it. More plausibly, it is because the unconditional immunity of §230 does not incentivize good-faith practices as the law intended.

Section 230’s Purpose was to Encourage Harm Reduction

Section 230 was written to address a difficulty first recognized by former Representative Christopher Cox in response to a pair of lawsuits against ICSPs in the mid-90s.[1] The problem Cox noted was that if “editorial control” of user-posted content imputes “publisher liability” to the hosting ICSP, this would disincentivize all platform moderation. With Congress particularly focused on pornography and defamation, Cox and Sen. Ron Wyden drafted §230 as part of the CDA in 1996.

The title of §230 is “Protection for private blocking of offensive material,” and the law contains two operative parts under the subtitle “Protection for ‘Good Samaritan’ blocking and screening of offensive material.” The first operative part states that an ICSP is not the “publisher” of communications posted by other parties. The second operative part states that good faith efforts to block or restrict access to “offensive”[2] material shall not result in the ICSP being treated as a “publisher” and thereby impose liability upon the platform for communications posted by other parties.

The words blocking and screening clearly indicate the actions Congress intended to encourage with §230,[3] but in the nearly three decades since the law was enacted, the courts have interpreted §230 immunity to apply unconditionally—i.e., regardless of whether the platform owner makes a good-faith effort to block or screen harmful communications. As Dr. Mary Anne Franks, in her 2024 testimony before the House Committee on Energy and Commerce, stated:

Put simply, a law cannot incentivize the rendering of aid if that law is interpreted to confer the same benefit upon those who render aid and those who do not. Interpreting Section 230 to shield online intermediaries from liability even when they are indifferent to or benefit from harm actively undermines Good Samaritan behavior and flouts the policy decision made by Congress.

The Tort of Negligence and Property (Premises) Owners

Common law torts, a subject taught to first-year law students (some of whom never return to it in their careers), address various civil complaints with which we non-attorneys are generally familiar. Based on my crash review of the subject, torts come in three main flavors—intentional harm, negligence, and product defects. Here, I will mainly focus on negligence to consider the contours of ICSP liability on the theory that we should finally stop treating web platforms as undefinable realms “beyond the weary laws of man.”[4]

From the perspective of societal interest, ICSPs are private properties that should not be exempted from tort law principles simply because they happen to be virtual properties. This analogy to physical premises was the idea that prompted this post, but it turns out that Kassandra C. Cabrera, a former law student of Dr. Franks’s, wrote a paper in 2021 advocating a premises liability theory for ICSPs. She writes, “Historically, courts were disinclined to impose a duty on landowners to protect against third-party conduct because of difficulty in finding proximate causation. However, the duty under the theory of premises liability extends to landowners when the potential harm from third-party conduct is reasonably foreseeable.” [emphasis added]

As an example of what is reasonably foreseeable, we are all acquainted with the unremarkable fact that glass containers will inevitably fall off grocery store shelves and create slipping hazards. Under the tort of negligence, the factors that apply to the store owner’s duty of care are as follows:  first, we visitors to the property are called “invitees,” a term of art which means that our presence provides a benefit (typically commercial benefit) to the property owner; and second, our “invitee” status imposes a high duty of care on the property owner, which includes a duty to “inspect” the property and a duty to “make the property safe.” Mere warning of a hazard is not sufficient; expeditious steps must be taken to remedy the hazard.[5]

A property owner may or may not be exempted from his duty of care based on the conduct of invitees. For instance, if a toddler knocks a jar off a shelf, the store owner still has the duty to inspect and make the aisle safe. Alternatively, if a violent customer throws a jar at another customer, the store is unlikely to be liable for the assault, but it still has a duty to act depending on the circumstances and certainly to remedy any hazards resulting from the altercation. Under no circumstance in this context will the owner be able to say, “We’re just a grocery store. How people misuse it is not our concern.” Yet, in general, this is how ICSPs are permitted to operate thanks to §230 immunity.

The duty of care for a grocery store manager does not require special skills or knowledge, and so the owner is held to the “reasonable person” standard of care. Relatedly, the duty to inspect will be venue specific and reasonable. For instance, the owner of a pick-your-own farm has a duty to examine the property from time to time for hazardous fallen branches or debris that visitors might leave behind, but if some lunatic buries a landmine on a path, the farmer cannot be expected to anticipate such an outlandish abuse of his property.

In this regard, the reasonable expectation that an owner can foresee and inspect for certain hazards will vary based on both the nature of the property and the relevant expertise of the owner. A hospital, for instance, is held to a high standard of duty of care due to both the nature of the property and the expertise of those who manage the property. Arguably, an ICSP, which is developed and managed with expertise in computer science, could be held to a standard somewhere between the average “reasonable person” and the medical professional. But this is a question for a court that can only be addressed where a complaint is allowed to proceed past the §230 immunity shield.

Virtual Properties Are Properties for Purposes of Liability

In premises liability, courts take into account factors that could have prevented the injury–for example, the policies in place at the premises and the characteristics of the perpetrator of the crime or intentional tort. Similarly, these factors should be used in the proximate cause analysis in the online context. – Cabrera

Consider the virtual property of the dating/hook-up site Grindr and one invitee’s abuse of the platform that placed a former invitee in grave danger. Specifically, Juan Gutierez created multiple spoof accounts impersonating his ex-boyfriend, Matthew Herrick, and used those spoofs to induce men to violently assault Herrick. “[Gutierez] used the app to arrange sex dates with over a thousand men at [Herrick’s] home and workplace, many of which were under the impression that Matthew wanted to role play rape fantasies,” writes Herrick’s attorney Carrie Goldberg.

Herrick sued and lost on the basis of product liability, but here, it is reasonable to consider the same facts while thinking of Grindr as private property—not because Herrick would have overcome the current interpretation of §230 immunity in his claim—but because premises liability provides a rationale and operational process for the kind of proactive conduct §230 was intended to incentivize. Consequently, it serves as a guide for statutory reform.

The typical PR/legal defense of Grindr (and by extension all ICSPs) is that the site owner is not responsible for the actions of Gutierez any more than the store owner would be for the customer throwing jars or the farmer would be for a psycho planting a landmine. This is fair to a point but also a misdirection because Grindr has both reason and ability to “inspect” its property for the kind of spoof accounts that were weaponized against Herrick. Further, the duty of care should have been elevated once Herrick informed Grindr of the circumstances, but Grindr’s lack of incentive to “inspect” in the first place, and decision not to provide aid in the second, were both supported by its §230 immunity.

What ICSPs Can Do

Any claim that Grindr, or other ICSPs, cannot anticipate commonly known hazards should be viewed skeptically. Spoofing is so common that it is as foreseeable for the platform owner as falling jars are for the grocery store owner. Further, spoofing is just one example of platform abuse for which an owner could “inspect,” especially with advancements in agentic AI. Peter DeMeo, chief product officer of the confidential computing company Phoenix Technologies AG in Switzerland, describes a few ways in which AI agents could be used to combat spoofing:

AI agents can detect spoofing accounts on social media by analyzing multiple factors, starting with profile metadata and activity history, then checking for incomplete profiles, generic names, and/or inconsistencies in user information. Suspicious accounts often have recent creation dates and sudden bursts of activity, which can indicate malicious behavior. By tracking IP addresses and device fingerprints, AI can detect multiple accounts operating from the same source, revealing possible coordinated manipulation. AI can also cross-check posts against user bios to spot inconsistencies. By leveraging network analysis, AI can map relationships between accounts to detect suspicious clusters of activity.

DeMeo notes that AI agents can become attack vectors themselves, which is why his company focuses on providing secure environments for the development of agentic AI. But he also notes that anti-spoofing agents can be a “specialized industry,” meaning that imposing a duty of care to “inspect” virtual properties is, like most challenges, a business opportunity.
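The signals in DeMeo’s list can be combined into a simple risk score. The sketch below is illustrative only, not any platform’s actual detection pipeline; every field name and threshold is a hypothetical stand-in for what a production system would tune against real data.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class Account:
    """Hypothetical account record; all fields are illustrative."""
    account_id: str
    display_name: str
    age_days: int                 # days since account creation
    profile_completeness: float   # 0.0 (empty) to 1.0 (fully filled in)
    posts_last_24h: int
    device_fingerprint: str

def spoof_risk_scores(accounts: list[Account]) -> dict[str, float]:
    """Score each account from 0 to 1 by combining the kinds of
    signals DeMeo lists: recent creation, sparse profiles, bursts
    of activity, duplicate names, and many accounts sharing one
    device fingerprint."""
    fp_counts = Counter(a.device_fingerprint for a in accounts)
    name_counts = Counter(a.display_name.lower() for a in accounts)
    scores = {}
    for a in accounts:
        signals = [
            a.age_days < 7,                           # freshly created
            a.profile_completeness < 0.3,             # mostly empty profile
            a.posts_last_24h > 50,                    # sudden activity burst
            fp_counts[a.device_fingerprint] > 3,      # many accounts, one device
            name_counts[a.display_name.lower()] > 1,  # duplicate display name
        ]
        scores[a.account_id] = sum(signals) / len(signals)
    return scores
```

Even this crude scorer shows why “we can’t possibly police it all” rings hollow: the signals are cheap to compute at scale, and accounts flagged above a threshold could be queued for the kind of human or agentic review a duty to “inspect” would require.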

One argument against the spoofing example from defenders of §230’s status quo might be that Grindr could inspect for 10 spoofs of one Matthew Herrick but could not inspect for any spoofs of one out of a million Joe Smiths. But at least three responses should rebut this and similar generalizations that the volume of data for a given platform is too vast to “police.”

First, DeMeo’s examples (which are not exhaustive) indicate that an AI agent can analyze a lot more than two data points like first and last name. Second, sensible §230 reformers, including Dr. Franks and her colleague Professor Danielle Citron, recommend that immunity be conditioned on “reasonable content moderation practices,” not on 100% perfect results. Third, §230 immunity itself perpetuates the opaque management of ICSPs by dismissing even meritorious complaints before they reach the discovery phase.

Sites as Properties vs. Sites as Products

I focus on negligence in the context of private property because it seems to be the most applicable and reasonable way to think about platforms where our visitation provides the essential benefit that makes platforms worth billions. Further, the duties to “inspect” and “make safe” strike me as more generalized (i.e., more likely to serve greater societal interest) than product liability, which can be limited by those voluminous, complicated terms of service nobody reads—and which are subject to change electronically. On the other hand, product liability extends to harm done to parties who are not “invitees,” as indeed Herrick was not a user of Grindr at the time Gutierez abused the site to induce assault.

In Herrick’s case, some of his product liability claims were found to fail on the merits, regardless of §230, but with specific reference to the spoofs used by Gutierez, the opinion of the Second Circuit states:

Herrick alleges that Grindr is defectively designed and manufactured because it lacks safety features to prevent impersonating profiles and other dangerous conduct, and that Grindr wrongfully failed to remove the impersonating profiles created by his ex boyfriend… Those claims are based on information provided by another information content provider and therefore satisfy the second element of § 230 immunity…. It follows that the manufacturing and design defect claims seek to hold Grindr liable for its failure to combat or remove offensive third-party content, and are barred by §230.

Defenders of the §230 status quo will argue that the court got it right in Herrick, but a correct application of the statute that produces this result only strengthens the case for reform grounded in a sensible review of the contemporary internet in contrast to that of 1996. As Professor Olivier Sylvain writes in his paper Intermediary Design Duties:

Today, the largest online companies do not merely host and relay messages, uninterested in what their users say or do. They use behavioral and content data to engineer online experiences in ways that are unrelated to the charming interest in making connections. Some of the most successful companies, moreover, collect, analyze, sort, and repackage user data for publication in ancillary and secondary markets. This is how the CDA immunity doctrine, first developed by the courts two decades ago, is ill-suited to the world today.

What the opinion in Herrick exemplifies is that no principles of tort liability pass through the §230 shield when “Information [is] provided by another information content provider.” That is the crux of the §230 problem and our dysfunctional relationship with ICSPs in general. We have been conditioned to think of all communications as “information,” or worse, as “protected speech” even when the communication is arguably conduct intended to cause or induce physical, emotional, or economic harm—from doxxing women who speak their minds to exes seeking revenge to inciting political violence, etc.

Indeed, this is why Franks and Citron propose striking the over-broad word information in the statute and replacing it with the legally definable word speech. “The revision would put all parties in a Section 230 case on notice that the classification of content as speech is not a given, but a fact to be demonstrated,” their proposal states. It is challenging enough for, say, a victim of image-based sexual abuse (IBSA) to present a claim against an ICSP at all, but this one-word change in the statute would foster a more level playing field between the individual plaintiff and corporate defendant.

Statutory Revision Should Restore the Incentive to Care

If we agree generally that an ICSP should have a duty of care, experts and Congress can decide whether the general obligation is properly construed under product or property liability—or some combination of the two. But whichever path is most reasonable, ICSP owners should no longer be allowed to exploit ambiguous definitions of their platforms to perpetuate the harmful results of unconditional immunity. Further, sound §230 reform must acknowledge the most extreme cases in which ICSP owners deliberately foster or profit from harmful communications and should, therefore, be subjects of intentional tort liability claims.

Some defenders of the §230 status quo will argue that suing the Grindrs of the world for damages won’t help solve the problem for the Herricks of the world, but this is unfounded. First, one purpose of tort law is to foster better conduct, especially among commercial enterprises. It is an imperfect remedy for various reasons, but if it is wholly ineffective for altering ICSP conduct, why do they fight so hard to maintain the status quo? The answer, I believe, lies in the second, and more important point—that §230 has barred case after case from proceeding to the discovery phase.

Even where a plaintiff may not ultimately prevail and receive damages, discovery in a lawsuit is often how the public learns whether an operation’s practices are fair, safe, lawful, or even consistent with their own warranties. “Even when a plaintiff’s case fails on the merits, judicial engagement with the details of her claim helps to frame her suffering as a legible subject of public attention and governance,” writes Professor Douglas A. Kysar. [6]

For example, when whistleblower Frances Haugen testified in 2021 that Meta makes decisions based on profit over safety, the devil’s details underlying that exemplary statement might only be revealed by adjudicating a reasonable claim brought by a plaintiff directly harmed by Meta’s decisions. Instead, Section 230 short-circuits this process, providing unconditional immunity that is not only uniquely tailored to one industry, but bizarrely applies to the owners of virtual properties where many citizens live half their very real lives.

Despite the length of this post, there are several aspects of Section 230 not discussed, including responses to various arguments for maintaining its status quo and discussion of specific cases that would likely have been adjudicated if not for the shield. Nevertheless, various hearings in Congress have signaled bipartisan fatigue with the status quo, especially where harm to children is involved. Whether that sentiment can be harnessed into reasonable reform is a fair question—especially in the current climate—but as Carrie Goldberg reiterated in one hearing, the Constitution guarantees plaintiffs their day in court, even if they ultimately lose.


[1] Cubby, Inc. v. CompuServe, Inc. and Stratton Oakmont, Inc. v. Prodigy Servs. Co.

[2] The statute states “… to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

[3] “The original purpose of this law was to help clean up the Internet, not to facilitate people doing bad things on the Internet.” NPR citing Cox.

[4] A dig at John Perry Barlow’s Declaration of the Independence of Cyberspace.

[5] A lower standard duty of care is owed to “trespassers” or “licensees” who may visit a property, though Cabrera notes that courts have often collapsed the distinction between “invitees” and “licensees.”

[6] Franks, The Free Speech Industry, pp 70-71.