Committee Talks Sunsetting Section 230 to Prompt Action by Big Tech


Yesterday, the House Energy & Commerce Committee held a hearing to discuss draft legislation that would sunset Section 230 of the Communications Decency Act on December 31, 2025. If passed, the law would start a countdown toward abolishing Section 230, the real intent being to force Big Tech to cooperate on meaningful reform. That reform would seek to mitigate the worst harms facilitated by misapplication of the law as a blanket liability shield for online service providers (OSPs) that host user-generated content (UGC).

The Committee heard from witnesses Carrie Goldberg, a prominent attorney for victims of cybercrime; Marc Berkman, CEO of the Organization for Social Media Safety; and Kate Tummarello, Executive Director of Engine, an advocate for “pro-startup and pro-innovation policy.” Goldberg and Berkman testified in favor of the sunset proposal, while Tummarello testified against the bill, though she stated more than once that her constituents are not opposed to Section 230 reform.

Thus far, most congressional hearings about “holding online platforms accountable” have been political theater, and this hearing was no different, except that the sunset proposal is overtly theatrical. Committee members acknowledged that the goal is not to abolish Section 230 (or at least not its original intent), but we shall see whether the sunset bill becomes law and, if so, whether it compels the tech giants to negotiate in good faith.

In the meantime, it would help if Congress stopped echoing Big Tech’s main talking point—namely, that Section 230 is about free speech, let alone speech neutrality. While most Committee Members showed an understanding of the serious harms facilitated by the erroneous application of Section 230, a few Members made parenthetical comments about protecting speech, and Rep. Harshbarger (R-TN) opined that “liberal sites like Facebook” censor “conservative” content.

Aside from the fact that this recurring allegation is unfounded, Section 230 has nothing to do with the speech right or with viewpoint neutrality. Indeed, if it did—if Congress wrote a law mandating content neutrality—THAT would be a violation of the First Amendment. As Goldberg stated during questioning, “The platforms are free to moderate however they want.” So, every time Congress mentions speech in the context of Section 230, it only amplifies Big Tech’s big lie that their platforms are an “engine of free expression,” which is unhelpful to sensible amendment of the law.

To clarify that point: yes, platforms host a lot of expression, but the First Amendment does not bind OSPs to foster content neutrality, and they do no such thing in the operation of their sites. It is a matter of record that the social platforms adjust their algorithms to push or demote content based on user behavior in a constant and dynamic interplay between the two. The goal of these operational decisions has nothing to do with the speech right—indeed, one can argue they stifle speech in several ways—and everything to do with maximizing profitability for the platform.

Next, it would be great if Congress could keep its eye on the ball and remember that Section 230 reform is not about creating new direct liability for all online platforms for harm done by users. To put it bluntly, Section 230 reform is about instructing the courts to stop tossing out every claim and every prayer for injunctive relief, solely on the basis that the statute requires this result at summary judgment. That was never the intent of the law, but the courts’ conclusions to the contrary demand that Congress act.

Nevertheless, during Q&A with the witnesses, some Members seemed to mischaracterize 230 reform either as new regulation or as opening the door to a flood of direct liability claims. Here, Tummarello, representing startups, stressed that small companies cannot match the giants’ capacity to moderate every post and comment on their sites. Frankly, the tech giants can’t achieve that goal either, but this is part of the theater because the concern is only relevant if Section 230 is indeed abolished.

By contrast, reforming the law need not oblige every platform to catch every potentially harmful bit of content. Indeed, sensible and workable reforms have been proposed by, for instance, Danielle Keats Citron, who recommends small but significant changes to the statutory language. The goal is to retain the original intent to shield “Good Samaritans” against wanton lawsuits while directing the courts to recognize that relevant facts can void the liability shield.

For example, one of Carrie Goldberg’s high-profile cases involved a man named Juan Gutierrez who used the dating app Grindr to target Matthew Herrick for harassment, abuse, and physical violence. Gutierrez created a fake account pretending to be Herrick and invited random men to find him and fulfill his “rape fantasies.” Section 230 has nothing to do with the conduct of Gutierrez, who was convicted of his crimes, but the law shielded Grindr from even going to court, despite Goldberg presenting evidence that volitional conduct by the platform caused and exacerbated the harm to Herrick. In short, Goldberg et al. are simply asking Congress to instruct the courts to allow meritorious claims against OSPs to be litigated—just as with any other defendant operating any other type of business.

Equally frustrating in this regard is the neglect of injunctive relief, which I was surprised not to hear come up during the hearing. Amid all the talk about Section 230 “fostering innovation” by shielding startups from a flurry of lawsuits, people lose sight of the fact that a platform need not be directly liable, or even a named party to a suit, to simply do the right thing and remove harmful material upon request. Unfortunately, the culture and profit motives of OSPs too often resist removing any material ever, and Section 230 has prevented courts from ordering those removals to mitigate harm to victims.

Presumably, there will be some wailing and gnashing of teeth from the usual suspects who defend the status quo of the “internet as we know it.” The EFF already groused about the sunset proposal ahead of the hearing, and we’ll see who else joins that peanut gallery. Either way, it is frustrating to know that meaningful reform could be achieved by changing a few key words in the statute—words that would maintain the original intent of Section 230 but stop protecting platforms over people. As Carrie Goldberg testified, the Seventh Amendment demands that victims of sexual abuse, trafficking, drug-related scams, harassment, and other devastating harms have their day in court.


Image source: Budi49673

Voice Actors, Tricked by LOVO into Creating AI Replicas, File Suit


A class-action suit was filed last week by voice actors Paul Lehrman and Linnea Sage against AI developer LOVO, Inc. According to the complaint, LOVO induced the actors to provide recorded material under false pretenses—material that was then used to produce synthetic replicas of their voices for a catalog offered to paying customers. The complaint also alleges that LOVO defrauds its customers, who believe they are using voices that have been legally obtained.

Both Lehrman and Sage were contacted via the freelance hub Fiverr and solicited for voice work. Both asked about the ultimate uses of the recordings—a standard question that affects the price an actor will charge—and both were lied to, according to the complaint. Lehrman was told that the recordings would be used exclusively for academic research, and Sage was told that hers would be used to produce “test scripts” for radio spots. In fact, both anonymous parties who contacted the actors were employees of LOVO—co-founder Tom Lee allegedly contacted Sage—and the sound recordings were used to train the company’s AI to create replicas of Lehrman’s and Sage’s voices.

Not only were the actors’ voices added to LOVO’s catalog, but the company also used the replicas for its own marketing and capital-raising purposes, renaming Lehrman and Sage “Kyle Snow” and “Sally Coleman,” respectively. For instance, the complaint alleges that Sage’s voice was used in demonstrations to raise millions of dollars in venture capital, and Lehrman discovered LOVO promoting the narration attributes of “Kyle Snow” in its own article entitled “The 5 Best Male Voices for Text to Speech.”

The causes of action in the complaint include violations of New York’s right of publicity (ROP) law, deceptive practices, false advertising, violations of the Lanham Act, unjust enrichment, and fraud. Further, the complaint alleges harm not only to the actors but also to LOVO’s customers, who are misled into believing that the voices used in their commercial projects have been licensed or otherwise obtained by permission. Plaintiffs seek millions in damages and legal fees.

Before saying anything else, and speaking as a guy who is not very spiritual by nature, I would say LOVO has engaged in the practice of soul stealing. Specifically, for Paul Lehrman to read a description of his own performance attributes, having had that talent literally stolen and bottled for sale, is a chilling thought. “With his upbeat tone and slightly faster talking speed, Kyle Snow has the perfect voice for conveying enthusiasm and youthfulness,” the description begins. The prospects of AI replacement in the workforce are problematic enough, but imagine reading your own resume and discovering that it’s actually promoting an AI replica of you, made without your permission.

The proceedings of this case may prove instructive to many parties with an interest in public policy related to artificial intelligence. The deceptions, if proven, should be damning to LOVO itself, but this case entails considerations that will be worth watching, even where AI development is conducted without lying to obtain training material. Specifically, many interests are looking at state ROP law as a basis for expanding related protections for all individuals. This complaint cites NYS civil rights law, which “imposes liability on a party for misappropriating an actor’s voice ‘for advertising purposes or for the purposes of trade without … written consent[.]’”

The use of a performer’s likeness for advertising purposes is central to many ROP laws in the states that have such statutes, but this case suggests a more universal approach to proscribing the reproduction of anyone’s likeness for almost any purpose without permission. In fact, the sci-fi thriller quality of transferring not just the technical sound of a voice but the personality of that voice reaches beyond the concept of “likeness” as it has been applied to date. It adds a new layer of meaning to the crime of “identity theft.”

Creative work is always a combination of natural talent and hard work to develop certain skills, but one need not be an actor, or any kind of artist, for the same principles to apply. If identity comprises thought, emotion, likeness, and movement, which of these attributes must be copied—and with what degree of precision—before “soul stealing” occurs? I don’t know the answer to that, but I will be very curious to see what precedents are set by a case like this one.

TikTok Provides a “Target-Rich Environment” for Drug Scams, According to DCA Report

TikTok may be the perfect crucible for exploiting the frailties of negative body image and breeding scammers who con millions of dollars from people looking to obtain weight-loss drugs. According to a report released today by the Digital Citizens Alliance (DCA), a joint investigation with the Coalition for a Safer Web found at least sixty operators, several posing as pharmacies or medical professionals, fraudulently offering to ship the antidiabetic drugs Ozempic or Mounjaro or the weight-loss drug Wegovy.

From the report “Ozempic Scams on TikTok: The only thing likely to get lighter is your wallet.”

“The moment is tailor-made for scammers to take advantage of American consumers,” says DCA executive director Tom Galvin in the organization’s press release. “An estimated one in six people say they take Ozempic or other weight loss drugs and just as many are considering it. That’s a target-rich environment for criminals and other bad actors. It’s alarming that TikTok allows these scammers to operate so freely.”

Big Tech critic Tristan Harris has compared the Chinese version of TikTok to “spinach” and the version used by the rest of the world to “opium” because the safeguards deployed on the former do not exist on the latter. Harris did not literally mean drug pushing with that analogy and was instead referring to the addictive nature of the platform. But it is little surprise that the TikTok algorithm is used to detect users’ interest in weight loss and then bombard them with promises to deliver name-brand drugs without a prescription.

DCA Report Lays Out the Anatomy of the Scam

First, there’s the offer of a month’s supply, usually for about $200 to $400. Next, the seller insists on payment via cryptocurrency or apps like Venmo, Zelle, or PayPal, with transactions processed as “friends and family” to circumvent refund mechanisms. In some instances, the scammer will inform the customer of a “holdup in customs,” which can be expedited with a one-time payment. And finally, the customer is asked for a screenshot as proof of payment, which then provides the scammer with information that can be used to trigger fraudulent transactions directly. As the report states:

A day after making a payment to these scammers, one of the investigator’s credit cards was compromised. Nearly $2,600 was charged to Hertz Rental Car. In addition, the investigator received a Zelle fraud alert within hours of a purchase.

Naturally, these pharma-cons move around the platform by creating and shedding multiple accounts and identities to evade what little scrutiny TikTok applies to these activities. This is, of course, familiar territory. In 2011, Google paid a half-billion-dollar fine to the DOJ for ad revenue it received from rogue pharmacies in Canada. In that case, for better or worse, the pharmacies and drugs were often real, but the transactions were illegal. In the TikTok examples, with so many red-flag indicators of a scam, one might think that few consumers would fall for these “offers,” but perhaps the intent to misuse a diabetes drug in the first place (a potentially fatal decision) indicates a willful blindness that is every con artist’s dream. The DCA report states:

It can be said that it’s better for Americans to be duped out of their money than to receive drugs – whether Ozempic, opioids, or steroids – that can threaten their health or even their life. But it’s a sad commentary when the “lesser of two evils” is the choice offered to American consumers.

Regardless of consumer awareness, this new report is yet another example of the failed policy of laissez-faire when it comes to social media platforms. If the forced sale of TikTok wrests control of the platform from the Chinese Communist Party, that will bring the new owner within the reach of U.S. law, but what U.S. lawmakers then do to protect American citizens is another matter.


Photo by SIVStockStudio