D.C. Event Shines Light on Advertisers Supporting Social Media Harm to Children


When I was a kid in the 1970s and my father was a principal in an ad agency, they had the Ameritone paint account, and I remember him explaining that they were not allowed to show paint and food together in a commercial lest a child viewer be confused into thinking that paint might be edible. By contrast, a social media platform today is free to conflate child-focused material with illegal drug offers and numerous other conduits leading to serious harm or death. And it’s all swept under the rug of innovation and commerce.

Algorithms kill kids. Let’s just call it like it is at this point and stop pussyfooting around the rhetoric that social media platforms are neutral conduits for “information.” Never mind that information itself is almost a lost cause on social media; the effects of algorithmic manipulation—even simple recommendations—can be disastrous for children and teens, including depression, anxiety, suicide, and accidental death. And that was before AI.

As reported last September, the accidental death of Nylah Anderson, age 10, was the result of TikTok’s algorithm prompting her to try the “blackout challenge,” which makes a “game” of self-asphyxiation. In the case against TikTok for its role in leading Anderson toward the “blackout challenge,” the Third Circuit Court of Appeals articulated one of the few rational readings of the Section 230 liability shield. The court stated:

TikTok reads § 230…to permit casual indifference to the death of a ten-year-old girl. It is a position that has become popular among a host of purveyors of pornography, self-mutilation, and exploitation, one that smuggles constitutional conceptions of a “free trade in ideas” into a digital “cauldron of illicit loves” that leap and boil with no oversight, no accountability, no remedy.

Brought to You by Your Favorite Brands

Add to that cauldron the major brands whose advertising dollars unconditionally support social platforms. That nexus was the focus of this morning’s event held at the National Press Club. “We saw a great turnout,” says cyber-analyst Eric Feinberg, who has been focused on ad-supported toxic social media content since 2013. More than 40 attendees filled the 40-seat room for the kick-off event, designed to focus the attention of major brands on the fact that their ad dollars finance platform operations that cause serious harm and death to children and teens.

The event was organized and hosted by parents who have been working to turn personal tragedy into social change through both public policy and private action. For instance, one mother who spoke was Debra Schmill, who started the Becca Schmill Foundation after losing her daughter Rebecca to fentanyl poisoning from pills obtained with the “help” of social media. Becca’s death was the culmination of a cascade of terrible events intersecting social platforms—beginning with a rape at the age of 15, followed by cyberbullying and a consequent battle with depression that led to the fatal pills obtained online. Deb Schmill is one of many parents determined to prevent other children and families from suffering similar fates.

“Women make 70% to 80% of all purchasing decisions,” Feinberg explained to me by phone after the event, “and these mothers who spoke today recognize that mothers just like them are funding social media harm to their own children.” Posting his daily mantra that “Brands are buying while kids are dying,” Feinberg has recently taken swings at McDonald’s for its crossover promotion with Snapchat…

He makes a solid point. If a major brand overtly promoted the opportunity for kids to get closer to the local drug dealer, pimp, or sexual predator, parents would be outraged. But because social media is an insidious free-for-all, inhabited by good and bad actors, the worst vices are either overlooked or accepted as the cost of obtaining the virtues. But this is a false choice. Multiple defectors from these companies have made clear that the platforms bend their own rules and tweak their algorithms to promote anything that drives “engagement,” without regard to the consequences. And they assume the mainstream advertisers will keep paying without condition because they own all that engagement.

But as Meta whistleblower Sarah Wynn-Williams describes in her book Careless People, that company made an affirmative decision to target known teenage psychological vulnerabilities (e.g., body image) to promote certain products. This abuse of the technology is already unethical—a far cry from not showing paint and food on the same screen—and advertisers who knowingly exploit the “opportunity” should be held accountable by consumers. Meanwhile, as the organizers of today’s event strive to emphasize, that same algorithm exploiting the teen’s vulnerabilities will just as readily push dangerous drugs toward the child as promote a makeup product or gym membership.

By my lights, asking the advertisers to partner with their own consumers—the parents who buy their products—to pressure the platforms to adopt better practices is the very least they can do. In just a couple of months, it will be time for the ~$40 billion Back-to-School season, and as brands vie for the K-12 parents who make those purchases, they owe it to those families to pressure the digital-age media companies to stop killing kids.

Pass the TikTok Legislation. And then…


“At what point then is the approach of danger to be expected? I answer, if it ever reach us, it must spring up amongst us. It cannot come from abroad. If destruction be our lot, we must ourselves be its author and finisher. As a nation of freemen, we must live through all time, or die by suicide.” – Abraham Lincoln, The Lyceum Address, 1838

Lincoln’s famous observation that only Americans can truly destroy America speaks to the fragility of the Republic, which the founders knew could only endure so long as the people generally keep faith with certain core principles. Watching those principles assaulted by a far-right populism, which has presently swallowed the Republican Party, it is natural to read Lincoln as prophetic, and it is hard to imagine any foreign influence being more dangerous. On the other hand, when Lincoln said, “It cannot come from abroad,” he could hardly have imagined a time when 170 million young Americans would carry a pocket surveillance device loaded with software under the control of a foreign adversary.

Following the 362-55 vote by the House to force TikTok to divest itself of all ties to the Chinese Communist Party (CCP), opinions about the bill question both its necessity and viability—though without good reason. Although rashly described as a “ban,” H.R. 7521 would in effect force a sale of the platform by parent company ByteDance to an owner without ties to the CCP. To that end, I agree with independent musician Blake Morgan, who endorses the TikTok legislation as both a national security and an anti-piracy measure. In an editorial for IP Watchdog, Morgan writes:

The vast majority of music on TikTok generates virtually no revenue for the musicians who made it, and even more music on the platform is completely unlicensed (stolen), copied (stolen via AI), or pirated (stolen). Simply put, TikTok is trying to build a music-based business without paying music makers fair value for the music. That’s why Universal Music Group has already pulled out of TikTok. That’s why the National Music Publishers’ Association has already announced it won’t renew its license with the company. So, TikTok poses “a clear and present danger” to American music, too.

The music piracy alone is reason to force the platform to operate within the reach of U.S. law. But with regard to the national security threat, it is notable that unless one is in the intelligence community, or a Member of Congress receiving a security briefing, we are left to rely upon one of those core principles that social media in general has eroded: trust. I do not endorse the Whataboutist’s view that because TikTok is not alone in causing havoc, this legislation is moot; but the story does highlight those hazards of social media that make it difficult to convince many Americans that TikTok is a threat of any kind.

Joseph V. Amodio, writing for Tanium, states that TikTok is distinguishable from other platforms thus:

TikTok stands out in its power to manipulate: While videos from any app can go viral, TikTok’s infection ability is unique, given the practice of “heating,” where TikTok staff can supercharge distribution of hand-picked videos. This has huge implications for fair competition and free trade. Just imagine how they can siphon profits by amplifying your competitors’ posts or cooling down your own viral campaigns.

Whether the goal of data manipulation is to pull the levers on enterprise, as Amodio indicates, or to influence young voters on policy matters, how does one convince tens of millions of 18- to 29-year-olds that said manipulation is both occurring and should be seen as an attack? If an act of cyberwarfare entails hacking the Pentagon or shutting down part of the power grid, enough Americans can probably recognize such events as attacks in a traditional sense. Likewise, the prospect of malicious software injected into millions of mobile devices might be understood as a threat.

But what if the weapon is an insidious propaganda tool used to manipulate the opinions of millions of citizens? Who is going to be trusted to identify that as a sustained attack on the United States? Some portion of the TikTok demographic will not believe that China (or Russia) is an adversary in the first place, which is arguably evidence itself of social media’s power to influence.

Even if the delivery platform is owned by Meta, serving “ads” purchased by foreign operatives with the same objective to sow discord, no individual wants to believe he’s being manipulated. More insidiously, even when one tries to apply critical thinking, the effort itself is often countered by teams of data manipulators flooding the zone—i.e., the illusion of more “information” tilting bias in one direction or another. This was true before parties like China and Russia upped their cyber game and before they could add artificial intelligence to the toolset.

As a practical example at the heart of the TikTok story, how does the moderate, who would rather not hyper-politicize national security, take the contemporary Republican seriously in his professed opposition to TikTok’s capacity to “manipulate” Americans? For instance, Rep. Ralph Norman of South Carolina writes, “…if you’ve spent 5 minutes exploring TikTok, you should have recognized the addictive nature of this platform. It is designed for one purpose: to control your attention. Their algorithm quickly figures out what kind of videos you’re likely to watch, and then feed you similar videos to keep you fixated.”

Fine. But one could swap “TikTok” for “Trump” and make the same general argument, including that his self-interested rhetoric about NATO, disrespect for the Constitution, etc. all constitute a threat to national security. What would Lincoln say to his legacy party about this tangled interplay between foreign and domestic forces, both hostile to American interests, and both weaponizing disinformation through addictive and manipulative platforms?

In this context, it is important to note that Trumpism is a symptom of populism—a trend that is no less prevalent on the left than on the right, perhaps especially among 18- to 29-year-olds. The difference, for the moment, is that the left has not found its own cult-like figure, one who might also undermine core principles, albeit in a different style than Trump. The rise in populism in the U.S. and other democracies is a direct result of social media’s tendency to factionalize hearts and minds, which is precisely what a foreign adversary wants to achieve. TikTok may be a shrewdly named time bomb delivered to over half the U.S. population and, as such, should be defused. But assuming that task can be accomplished, the existential question remains whether we can quarantine the most virulent effects of all social platforms or “die by suicide.”

The Future Was Then: AI Moving Us Backwards on Carbon Emissions


As the Super Bowl approached and passed, it seemed that one faction of Americans was accusing Taylor Swift of practicing witchcraft on the NFL while another was slagging her for the carbon output of her private jet—reportedly about 8,300 tonnes of CO2e in 2022. And although it is fair to expect owners of private aircraft to fly responsibly, I must ask this:  What is the environmental value of not shitposting about Taylor Swift? Or for that matter, any number of topics?

The carbon cost of a single tweet is ~0.026 g; the cost of X (née Twitter) is estimated at 8,200 tonnes per year; and the overall carbon cost of social media is estimated at 262 million tonnes of CO2e per year. So, if we use this social media carbon calculator, it tells us that 1 million people spending just 2 minutes a day on each of the 10 major social sites costs just over 8,300 tonnes of CO2e per year—roughly the same amount T Swift reportedly generated with her airplane in 2022.
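That calculator figure can be sanity-checked with back-of-envelope arithmetic. The per-minute intensity below is my assumption, chosen to illustrate the scale of the cited total, not a number published by the calculator itself:

```python
# Back-of-envelope check on the ~8,300-tonne calculator figure cited above.
G_PER_USER_MINUTE = 1.14  # assumed average g CO2e per user-minute across platforms

users = 1_000_000
minutes_per_site_per_day = 2
sites = 10
days = 365

user_minutes = users * minutes_per_site_per_day * sites * days  # 7.3 billion
tonnes_co2e = user_minutes * G_PER_USER_MINUTE / 1_000_000      # grams -> tonnes
print(f"{tonnes_co2e:,.0f} tonnes CO2e per year")
```

At roughly one gram per user-minute, a million casual scrollers do indeed land in the same ballpark as a year of private-jet travel.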


I recognize that this is comparing the carbon footprint of one individual to a million individuals, but that one individual entertains millions and generates economic activity. By contrast, the social posts of a million people at any given moment are only making pollution in every sense. Clearly, it costs metric tons of carbon to produce metric tons of useless noise. And that preamble brings us to the topic of the projected increase in electricity demand for data centers to support advancements in artificial intelligence (AI). As Bloomberg reported in late January:

Electricity consumption at US data centers alone is poised to triple from 2022 levels, to as much as 390 terawatt hours by the end of the decade, according to Boston Consulting Group. That’s equal to about 7.5% of the nation’s projected electricity demand. 
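Taking those two figures at face value, the total US electricity demand they imply is a one-line calculation:

```python
# If 390 TWh is 7.5% of projected US electricity demand (per BCG, as quoted),
# the implied total projection for the end of the decade is:
data_center_twh = 390
share = 0.075
implied_total_twh = data_center_twh / share
print(round(implied_total_twh), "TWh")
```

That implied total of about 5,200 TWh sits well above the roughly 4,000 TWh the US consumes annually today, meaning data centers are projected to claim a large slice of an already growing pie.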

In past posts about generative AI, I have opined that we do not need machines to make creative works—because we don’t—and that AI should be tasked with solving problems like curing disease or mitigating the climate crisis. On the second point, however, it seems that if an AI were asked the climate question, its only rational answer would be, “Shut me down.” If nothing else, AI could be an environmental catastrophe in the making.

“In the Kansas City area, a data center along with a factory for electric-vehicle batteries that are under construction will need so much energy the local provider put off plans to close a coal-fired power plant,” the Bloomberg article states. Because that quote cites both electric vehicles (EVs) and the data center, one must acknowledge that the environmental analysis of EVs entails a projection of carbon saved against carbon spent. But because a data center is pure carbon expenditure, that cost can only be measured against the value of the activity the center supports.

No question that data centers are infrastructure. There is no enterprise—private or public—that does not rely on networked computing, and economic activity almost always presents an environmental challenge, whether one is building a railroad or an eCommerce platform. But considering even the current energy demand, let alone the projected increase, AI pulls the issue into focus because so many of its applications are already either useless or toxic.

Useless, as stated, is the AI that generates “creative” work in lieu of the human creator, while toxic would be something like more advanced deepfakes exacerbating the disinformation crisis. Regarding the former, this flips the economic equation—i.e., carbon cost yielding lost jobs, which is arguably the opposite of economic activity. Regarding the latter, the use of AI to expand and deepen disinformation campaigns represents carbon cost in exchange for “better tools” that have already been used to weaken democracy worldwide.

In 2013, I wrote a post called Show Me the Innovation—one of many responses to the generalized argument that legal frameworks designed to protect intellectual property, privacy, information integrity, and even personal safety all stand in the way of “innovation.” The point then, as now, is that not everything produced by Big Tech is “innovative,” if we insist that word mean something. If “innovation” should improve lives and foster prosperity, isn’t it curious that social media’s carbon cost helps support anti-science agendas like climate change denial?

In a recent post about the environmental cost of data centers, Chris Castle cites Science Daily, noting that “generative AI like ChatGPT could cost 564 megawatt-hours (MWh) of electricity a day to run.” That’s more than some small countries consume in a day. When coupled with the fact that data center demand is halting planned shutdowns of coal-fired plants, it starts to look a lot like AI is helping to “innovate” the U.S. backwards, reversing the gains made over the past twenty years in carbon emissions.
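To put 564 MWh per day in household terms, a short conversion helps. The per-household figure below is my assumption, based on typical US averages, not a number from the article:

```python
# Annualizing the Science Daily estimate and converting to households.
MWH_PER_DAY = 564              # generative AI at ChatGPT scale, per the estimate above
HOUSEHOLD_MWH_PER_YEAR = 10.5  # assumed US average annual household consumption

annual_mwh = MWH_PER_DAY * 365                    # 205,860 MWh, about 206 GWh/yr
households = annual_mwh / HOUSEHOLD_MWH_PER_YEAR  # roughly 19,600 homes
print(f"{annual_mwh:,} MWh/yr ≈ {households:,.0f} US households")
```

In other words, one generative AI service at that scale draws about as much electricity in a year as a small city’s worth of homes.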

Traditionally, it is possible to do a cost/benefit analysis: we burn x amount of coal to power y number of homes, or we need x amount of oil to run y amount of ground transportation. And even in the earliest days of electrification or automobiles, the benefits were self-evident. But with rapid advancements in AI, the cost is rising without clear evidence of benefit—at least not at the scale the electricity demand implies. This is because, like so many “innovations” of Big Tech, AI might be used to accomplish something extraordinary like improving medical diagnoses, but in the meantime, it will be used to make what is already bad about digital life suck faster.


Photo by: dropthepress