Site Blocking Is Effective Worldwide, Says New Report by IP House and DCA


Overseas and Out of Reach: International Video Piracy and U.S. Options to Combat It, released today by IP House and the Digital Citizens Alliance (DCA), is one more reason the U.S. Congress should adopt site-blocking legislation to protect American creators and consumers.

Thirteen years ago this coming January, Congress shelved bipartisan legislation designed to restrict foreign-based criminal enterprises’ access to American consumers. Generally referred to as “site-blocking,” the focus was (and remains) combatting media piracy operators who illegally distribute or perform motion pictures, music, publications, etc.—most of which are produced in the United States. In 2011/12, Silicon Valley funded a multilateral disinformation campaign that frightened people into believing that site-blocking would chill speech, sidestep due process, and “break the internet.”

None of those allegations were true then, and if Congress revisits site-blocking, which it should, lawmakers can rely on the new IP House report showing that more than 50 countries have implemented some form of this piracy-mitigation strategy without any of the negative consequences foretold by Big Tech and its network of hacktivists. Unsurprisingly, the report reveals that speech rights, due process, and functioning internet services persist in nations that have had site-blocking in force for a decade or more.

Contrary to predictions that more access to more media would reduce piracy, Americans now have more access than they have time to consume, and yet piracy grew by 36 percent between 2021 and 2022, a period during which 13.5 billion visits to film and TV piracy sites originated in the U.S. As for those who claimed that site-blocking was too risky because piracy cannot be restrained, the report demonstrates that site-blocking measures have resulted in increased traffic to legal platforms for media entertainment.

Three separate studies—focused on the United Kingdom, Portugal, and Australia—found that when sites were blocked, traffic to those sites fell substantially: by 89 percent in the United Kingdom, 70 percent in Portugal, and 69 percent in Australia.

Skeptics may fairly doubt that, say, Russia is a reliable guardian of speech rights and due process; but Australia, Canada, the UK, France, Germany, and Sweden are among the nations with site-blocking measures in force that report no harm to protected speech, no damage to the functioning of the internet, and none of the indiscriminate over-blocking that Big Tech and its “digital rights” allies insisted would be inevitable. Of course, much of that hyperbole has ebbed amid a more sober understanding that the internet is not the boon to democracy Google et al. proclaimed. So now, perhaps, we can have a clear-eyed discussion about the rationale for site-blocking and how it is implemented.

How Site-Blocking Works

As the new report describes in detail, most sophisticated pirate platforms operate in the shadows of online anonymity, in physical jurisdictions beyond the reach of U.S. law enforcement, and “in concert with other criminal entities.” As a $2bn+ industry, these enterprises have the resources to build nimble, complex systems, and so shutting down one of the major operations and/or convicting the owners is nearly impossible—even with cooperation among friendly nations. For instance, the infamous Megaupload founder Kim Schmitz (Kim Dotcom) was arrested in New Zealand in 2012, but only this past August did that government agree to extradite him to the U.S. to stand trial.

In response to the challenge of stopping “out of reach” criminals, site-blocking prevents, or at least limits, foreign illegal platforms’ capacity to reach consumers in the target nation. To implement a block, a complaining party (e.g., a major owner of the IP being infringed) bears a high burden of proof to show (in the U.S., before a federal court) that a particular site is dedicated, or substantially dedicated, to mass piracy. If the court orders a block, the major ISPs in that nation are then instructed to restrict access through various means (e.g., DNS blocking or blacklisting URLs), depending on the nature and structure of the pirate operator.
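For readers curious about the mechanics, a DNS-level block simply means that the ISP’s resolver refuses to translate a blocked domain name into an IP address. The following is a minimal Python sketch of that logic under stated assumptions: the domain names and blocklist are hypothetical, and a real ISP implements this inside its DNS servers (e.g., via response policy zones) rather than in application code.

```python
import socket

# Hypothetical court-ordered blocklist; a real ISP would load these
# domains from the text of a judicial blocking order.
BLOCKED_DOMAINS = {"pirate-streams.example", "illegal-iptv.example"}

def is_blocked(hostname: str) -> bool:
    """Return True if the hostname or any parent domain is blocked,
    so that subdomains like hd.pirate-streams.example are also caught."""
    labels = hostname.lower().rstrip(".").split(".")
    return any(".".join(labels[i:]) in BLOCKED_DOMAINS for i in range(len(labels)))

def resolve(hostname: str) -> str:
    """Resolve a hostname, refusing to answer for blocked domains
    (the application-level analogue of a resolver returning NXDOMAIN)."""
    if is_blocked(hostname):
        raise LookupError(f"{hostname} is subject to a blocking order")
    return socket.gethostbyname(hostname)

if __name__ == "__main__":
    print(resolve("example.com"))  # an unblocked domain resolves normally
    try:
        resolve("hd.pirate-streams.example")  # refused at the resolver
    except LookupError as err:
        print(err)
```

This is also why DNS blocking is generally considered a light-touch remedy: nothing is removed from the internet; the resolver simply declines to provide directions to the blocked domain.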

Piracy Is About Harm to Creators and Consumers

Nearly 80 percent of piracy sites delivered malware-ridden ads to their users…. More than half of the $121 million generated ($68.3 million) from malvertising came from U.S. visits to these sites.[1]

Even if site-blocking were solely about mass theft of creative works, it is absurd that the U.S., as the world’s largest and most diverse producer of such works, lags so far behind other nations in adopting this commonsense strategy to mitigate harm to American businesses. But in addition to the new report’s evidence that site-blocking has been effective without significant negative consequences, Congress must also recognize that both media piracy and cybercrime in general have become more sophisticated in the last decade.

For instance, two of the more popular modes of media piracy are the video-on-demand (VOD) and internet protocol TV (IPTV) models, whereby operators sell subscriptions to platforms that look like Netflix or Hulu but stream and/or enable downloads of media files that are obtained and stored illegally. Some consumers know these sites are piracy-based, but because the platforms look and feel legitimate, many may not be aware that they are paying criminal enterprises, making themselves vulnerable to cyber-attacks, and/or supporting a broad range of unsavory activity, including extortion, narcotics, human trafficking, and terrorism.

Al-Manar is a Lebanese television outlet operated by the extremist political party Hezbollah and is banned from operating in the United States after the U.S. government labeled it a “Specially Designated Global Terrorist” entity. Nevertheless, Al-Manar was offered on at least half of the piracy IPTV services…

Several DCA reports have presented substantial evidence of a nexus linking mass media piracy to organized crime, and as this new report states, “The more profitable piracy is, the more likely organized criminals are or will become involved in it.” Past reports have shown that pirate platforms are major vectors for malware, including ransomware and remote access trojans (RATs) used to hijack computers, and that visitors to pirate sites are “disproportionately vulnerable to credit card fraud.”

Meanwhile, it is hard to miss the fact that buying ordinary products online today, even on major e-commerce platforms, requires heightened vigilance to avoid counterfeits that may be faulty or dangerous. Add to this chaos the potential of AI to amplify a broad range of assaults on American institutions, businesses, and consumers, and it is clearly a moment for Congress to fan away the dust of the “Stop SOPA” campaign of 2012 and reaffirm that site-blocking is a practical tool in defense of the public interest. “The lack of evidence of abuse suggests that site-blocking orders are fair, rigorous, and issued only in legitimate cases of large-scale piracy,” the report states. That was predictable more than a decade ago. Time to catch up.


[1] IP House report citing this article.




No FAKES Act Matched in House Bill to Address Gen AI Replication


On Monday, beloved actor James Earl Jones passed away at age 93. In 2022, he signed an agreement with Lucasfilm to allow the voice of Darth Vader to live on through Gen AI replication. Jones’s permission to replicate his voice is a bittersweet prelude to today’s news from Capitol Hill, where the House of Representatives introduced its own No FAKES Act to prohibit the unlicensed replication of any person’s likeness or voice. Sponsored by Reps. Salazar, Dean, Moran, Morelle, and Wittman, the House bill is identical to the Senate No FAKES Act introduced in late July and thus demonstrates a bicameral, as well as bipartisan, sense of urgency to address misuse of Gen AI for this purpose.

To recap, No FAKES establishes a new property right in the likeness of any person and prohibits unauthorized replication of a likeness, which includes voice. Historically, likeness has been protected only on a limited basis by a patchwork of state Right of Publicity (ROP) laws, which typically prohibit unauthorized use of a celebrity likeness for commercial/advertising purposes. But the unprecedented capability of Gen AI to be used by anyone to replicate the likeness of anyone—a capability that will exacerbate the reality-bending world of online “information”—has prompted Congress to move swiftly and, in my view, creatively.

It was July 2023 when the idea of a federal ROP law was discussed during a hearing held by the House Judiciary Committee Subcommittee on Intellectual Property. At the time, I imagined this was a prelude to years of haggling on Capitol Hill while Gen AI developers proceeded at internet speed to wreak havoc with tools to produce more advanced “deepfakes.” Instead, the introduction of No FAKES in the Senate just one year later—and now, the same bill in the House less than two months after that—reveals both seriousness and deftness in legislators’ zeal to confront the issue. Rather than approach the matter as one to be remedied by a federal ROP law, Congress, with input from various stakeholders, has responded to the novelty of the challenge with novel legislation, drawing upon principles found in ROP, trademark, and copyright law.

If passed, No FAKES would operate akin to ROP, but it would apply automatically to every citizen, and unlawful replication would not be limited to commercial/advertising purposes. At the same time, because many misuses of Gen AI replication have both reputational and commercial implications, No FAKES shares a kinship with trademark, which is a creature of the Commerce Clause. And finally, the new right is copyright-like: a property right that vests in the individual, may be licensed for various uses, and is descendible to heirs and assigns, with certain limits and conditions unique to protecting likeness.

Opposition Is Familiar but the Battlefield Is Different

Many of the usual suspects representing Big Tech, including the newly formed (I can’t believe they called it this) Chamber of Progress, will likely raise constitutional challenges to No FAKES, leaning hard into the refrain that the new likeness right will chill protected speech. As to the merits of that argument, the text of the bill already includes well-crafted, First Amendment-based exceptions; and as a PR matter, I believe Big Tech is refreshingly at a disadvantage. Concern over abuse of Gen AI is shared by a broad range of Americans, from professional creators to parents seeing how easily children can be sexually exploited, and in general, people just aren’t buying Big Tech’s “make life better” rhetoric anymore.

Examples of legitimate innovation (e.g., Jones permitting Darth Vader to continue, or Randy Travis overcoming physical voice loss) will entail the permission of the person whose likeness or voice is being replicated. Yet, in response to the many harms that may be caused by unlicensed Gen AI replication, AI defenders will promote the overbroad refrain that “innovation” must be allowed to flourish. But of course, “innovation” is Big Tech’s euphemism for “profitability at any cost.” Congress is still playing catch-up to address the myriad harms fostered by pre-AI social media and is, therefore, reluctant to repeat the mistakes of the late 1990s by allowing Gen AI “room to grow” without restrictions.

Interestingly, Chamber of Progress appears designed to frame the multi-billion-dollar AI gamble as socially and politically “progressive,” a strategy belied by its advocacy of broad liability shields for AI developers akin to Section 230 of the CDA and Section 512 of the DMCA. In fact, that view aligns perfectly with OpenAI CEO Sam Altman suggesting that it is impossible to develop AI models without free use of copyrighted works, or with investor Marc Andreessen writing a smug and erroneous manifesto as a plea for continued laissez-faire policy in all things tech. If there is anything “progressive” about Gen AI, Chamber of Progress will need to produce more than worn-out rhetoric to prove it.

We’ve been here and done this, but No FAKES is a bill with a lot of political momentum. The likelihood that many citizens will oppose a prohibition on the unlicensed use of their own, or their children’s, likenesses seems low to the point of futility. We’ll see what comes, but by my lights, No FAKES is destined to become law.


Image by: nikolay100

TikTok-Inspired Child Suicide Prompts a Sound Reading of Section 230


Last week, the Third Circuit Court of Appeals issued an opinion regarding Section 230 of the Communications Decency Act. It may be the strongest affirmation to date that the statute does not provide a blanket liability shield for all social platforms regardless of their conduct. Specifically, §230(c)(1) immunizes platforms only against liability arising from other parties’ speech, not from the platform’s own speech. And although the platforms have sought to argue that their “recommendation” algorithms, which push content to users, do not constitute speech, the courts aren’t buying it.

In Anderson v. TikTok, the appeals court reversed the lower court’s finding that the platform was automatically immunized against a liability claim involving the death of a child who attempted one of the many dangerous “challenges” that appear on social media. Nylah Anderson, age 10, died of accidental hanging when she tried the “Blackout Challenge,” which dared people to asphyxiate themselves until they passed out. At issue for TikTok is not the challenge itself, started by an unknown third party, but the “For You Page” algorithm that “recommended” the challenge to Anderson. Judge Matey, in a strident concurrence with the circuit court opinion, writes the following:

TikTok reads § 230…to permit casual indifference to the death of a ten-year-old girl. It is a position that has become popular among a host of purveyors of pornography, self-mutilation, and exploitation, one that smuggles constitutional conceptions of a “free trade in ideas” into a digital “cauldron of illicit loves” that leap and boil with no oversight, no accountability, no remedy.

Though the reference to St. Augustine implies a religious moralizing I might omit, Judge Matey’s accusation that social platforms host a “cauldron” of dangerous, illegal, and depraved material behind a veil of social good and constitutional rhetoric is indisputable. As a legal matter, had Anderson discovered the video challenge on her own (e.g., via search), TikTok would likely be immunized by §230; but because a “recommendation” algorithm pushed the challenge to the child, and that conduct resulted in her death, the platform’s own conduct is implicated. That distinction could more clearly articulate a shift in judicial review of the statute and, we should hope, an overdue change in platform governance.
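The search/recommendation distinction is easy to state in code. Below is a purely illustrative Python sketch, with hypothetical names and a made-up ranking signal: in the first function, what comes back is determined by the user’s own query (third-party speech the platform merely hosts); in the second, the platform’s own ranking logic decides what the user sees, which is the editorial conduct the Third Circuit treated as the platform’s own expression.

```python
# Hypothetical content catalog standing in for third-party uploads.
CATALOG = [
    {"title": "baking challenge", "engagement_score": 0.4},
    {"title": "dance challenge", "engagement_score": 0.7},
]

def search(query: str) -> list[dict]:
    """User-initiated retrieval: the user's own query determines the
    results, so the platform is arguably a passive host of the match."""
    return [video for video in CATALOG if query.lower() in video["title"]]

def recommend() -> dict:
    """Platform-initiated curation: the platform's own ranking signal
    (here, a hypothetical engagement score) chooses what the user sees.
    That editorial choice is the platform's conduct, not the uploader's."""
    return max(CATALOG, key=lambda video: video["engagement_score"])
```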

As Judge Matey further states in his concurrence, TikTok’s presumed immunity under §230 in this case is “…a view that has found support in a surprising number of judicial opinions dating from the early days of dial-up to the modern era of algorithms, advertising, and apps.” That view is properly dimming now, and by my reckoning, the Supreme Court will go where the Third Circuit went last week. In a pair of nearly identical cases, Gonzalez v. Google and Twitter v. Taamneh (2023), the plaintiffs, on behalf of victims of two ISIS-related terror attacks, sought to hold the platforms accountable for “recommending” ISIS recruiting videos. But because those claims relied substantially on meeting the standard for “aiding and abetting” under the Anti-Terrorism Act, the Court found no plausible claim for relief and, therefore, declined to address the question of §230 immunity.

But if Anderson (or a similar case) goes to the Supreme Court, I believe the justices will have little difficulty finding that a “recommendation” algorithm promoting a video challenge that led to a child’s death is a foundation for a liability case to proceed. As the Court stated in Taamneh, “When there is a direct nexus between the defendant’s acts and the tort, courts may more easily infer such culpable assistance.” In Anderson, with no other party acting as the direct cause of the child’s death, the facts are even simpler, revealing a clear nexus between the video challenge “recommended” by the platform and her death. Further, this July, the Court held in the unanimous Moody v. NetChoice decision that social platforms “shape other parties’ expression into their own curated speech products.”[1] Under that rule, the Third Circuit found that TikTok’s “recommendation” of the Blackout Challenge to Nylah Anderson plausibly constitutes the platform’s own speech, for which it may be held liable.

The reason I keep putting “recommended” in quotes is that, at the time SCOTUS granted cert in the Taamneh and Gonzalez cases, I wrote a post opining that courts, policymakers, et al. should take a jaundiced view of this too-friendly term for an insidious function of social media. It is no longer controversial to say that platform operators manipulate what users see and hear, or that this manipulation can lead to disastrous results, from disinformation campaigns in the political arena to drug-related deaths to the suicides of little girls.

It is a familiar refrain that it takes a tragedy, or many tragedies, to change policy, and with the story of Nylah Anderson and the many young victims she represents, we may finally see Big Tech’s hypocrisy on speech collapse under the weight of its own absurdity. The major platforms have played games with the First Amendment and §230 for nearly 20 years: conflating their business interests with users’ speech rights, asserting their own speech rights when convenient, or insisting that nothing they do is their own speech, all depending on which potential liability the company seeks to avoid. Further, that confusion has not been helped in recent years by certain politicians who misstate the operation of the speech right to create political theater around allegations of bias.

Out of all that mess, it is notable that Justice Thomas, since at least 2020,[2] has restated the observation that online platforms avail themselves of constitutional protection to engage in conduct like algorithmic “recommendation” but then invert the argument to shroud themselves in the §230 shield, and courts have then stopped liability claims from even proceeding. As Congress, the Supreme Court, and now the Third Circuit have all reiterated, no industry in the country enjoys that kind of immunity, and perhaps this claim against TikTok will be the case that finally ends this unfounded and unreasonable privilege for online platforms.


[1] On a side note, this is reminiscent of the “selection and arrangement” doctrine in copyright law, which finds “expression” in the author’s choices. All copyrightable expression is a form of speech.

[2] See Justice Thomas’s statement respecting the denial of certiorari in Malwarebytes v. Enigma.
