D.C. Event Shines Light on Advertisers Supporting Social Media Harm to Children


When I was a kid in the 1970s and my father was a principal in an ad agency, they had the Ameritone paint account, and I remember him explaining that they were not allowed to show paint and food together in a commercial lest a child viewer be confused into thinking that paint might be edible. By contrast, a social media platform today is free to conflate child-focused material with illegal drug offers and numerous other conduits leading to serious harm or death. And it’s all swept under the rug of innovation and commerce.

Algorithms kill kids. Let’s just call it like it is at this point and stop pussyfooting around the rhetoric that social media platforms are neutral platforms for “information.” Never mind that information itself is almost a lost cause on social media; algorithmic manipulation—even simple recommendations—can have disastrous effects on children and teens, including depression, anxiety, suicide, and accidental death. And that was before AI.

As reported last September, the accidental death of Nylah Anderson, age 10, was the result of TikTok’s algorithm prompting her to try the “blackout challenge,” which entails making a “game” of self-asphyxiation. In the case against TikTok for its role in Anderson’s death, the Third Circuit Court of Appeals articulated one of the few rational reads of the Section 230 liability shield. The court stated:

TikTok reads § 230…to permit casual indifference to the death of a ten-year-old girl. It is a position that has become popular among a host of purveyors of pornography, self-mutilation, and exploitation, one that smuggles constitutional conceptions of a “free trade in ideas” into a digital “cauldron of illicit loves” that leap and boil with no oversight, no accountability, no remedy.

Brought to You by Your Favorite Brands

Add to that cauldron the major brands whose advertising dollars unconditionally support social platforms, and that was the focus of this morning’s event held at the National Press Club. “We saw a great turnout,” says cyber-analyst Eric Feinberg, who has been engaged on ad-supported toxic social media content since 2013. More than 40 attendees filled the 40-seat room for the kick-off event designed to focus the attention of major brands on the fact that their ad dollars finance platform operations that cause serious harm and death to children and teens.

The event was organized and hosted by parents who have been working to turn personal tragedy into social change through both public policy and private action. For instance, one mother who spoke was Debra Schmill, who started the Becca Schmill Foundation after losing her daughter Rebecca to fentanyl poisoning from pills obtained with the “help” of social media. Becca’s death was the culmination of a cascade of terrible events intersecting social platforms—beginning with a rape at the age of 15 that was followed by cyber-bullying and a consequent battle with depression that led to the fatal pills obtained online. Deb Schmill is one of many parents determined to prevent other children and families from suffering similar fates.

“Women make 70% to 80% of all purchasing decisions,” Feinberg explained to me by phone after the event, “and these mothers who spoke today recognize that mothers just like them are funding social media harm to their own children.” Posting his daily mantra that “Brands are buying while kids are dying,” Feinberg has recently taken swings at McDonald’s for its crossover promotion with Snapchat…

He makes a solid point. If a major brand overtly promoted the opportunity for kids to get closer to the local drug dealer, pimp, or sexual predator, parents would be outraged. But because social media is an insidious free-for-all, inhabited by good and bad actors, the worst vices are either overlooked or accepted as the cost of obtaining the virtues. But this is a false choice. Multiple defectors from these companies have made clear that the platforms bend their own rules and tweak their algorithms to promote anything that drives “engagement,” without regard to the consequences. And they assume the mainstream advertisers will keep paying without condition because they own all that engagement.

But as Meta whistleblower Sarah Wynn-Williams describes in her book Careless People, that company made an affirmative decision to target known teenage psychological vulnerabilities (e.g., body image) to promote certain products. This abuse of the technology is already unethical—a far cry from not showing paint and food on the same screen—and advertisers who knowingly exploit the “opportunity” should be held accountable by consumers. Meanwhile, as the organizers of today’s event strive to emphasize, that same algorithm exploiting the teen’s vulnerabilities will just as readily push dangerous drugs toward the child as promote a makeup product or gym membership.

By my lights, asking the advertisers to partner with their own consumers—the parents who buy their products—to pressure the platforms to adopt better practices is the very least they can do. In just a couple of months, it will be time for the ~$40 billion Back-to-School season, and as brands vie for the K-12 parents who make those purchases, they owe it to those families to pressure the digital-age media companies to stop killing kids.

Advertisers Demand the Web Get Better in 2017


In January, Procter & Gamble’s Chief Brand Officer Marc Pritchard put the digital advertising world on notice that his company will no longer tolerate the waste or opaqueness of the advertising ecosystem. “We’ve been giving a pass to the new media in the spirit of learning,” Pritchard stated in his keynote address to the Interactive Advertising Bureau (IAB). “We’ve come to our senses. We realize there is no sustainable advantage in a complicated, nontransparent, inefficient and fraudulent media supply chain.” With over $7 billion in online spending, P&G is the largest among U.S. advertisers; and where they lead, the rest of the industry is likely to follow.

I wrote in December 2015 about a report published by the IAB, which revealed that significant flaws in the digital advertising supply chain—invalid traffic, infringed content, and malware—were costing advertisers just over $8 billion/year in waste. That represented about 16% of global digital ad purchases. Although ad spending has continued to grow since that report was written, if Pritchard’s address represents the mood of advertisers, they’re frustrated with two things above all: the inability to control where their ads appear, and the lack of consistency and transparency in reporting by the major platforms.

In response, Pritchard has laid out the new demands P&G will be making of its advertising partners for 2017, including third-party measurement of metrics (rather than self-reporting by the platforms) and an insistence that all partners become TAG-certified.  It was in February of 2015 that the Trustworthy Accountability Group (TAG) launched this industry-led, voluntary initiative to separate the quality, legal sites from the garbage of the internet. TAG was seen by copyright holders as a major step forward because the initiative sought, among other things, to keep brand advertising off the large-scale piracy sites.

The Ad Exchange is Too Opaque

The underlying problem for advertisers is the automated exchange in which ad impressions are purchased from the available supply—a system that provides advertisers with limited control over where their ads appear and no standardized reporting on the return received for their investments. An advertiser buys, say, a million impressions, and when those impressions are reached, the advertiser buys another million impressions; but there is very little insight into the nature of those impressions. It’s a process Pritchard calls “murky at best and fraudulent at worst.”

The recent “discovery” of Fake News illustrates the problem. Appropriately used, the term fake news describes the work of hucksters who figured out that they can make up to several thousand dollars a month just by inventing provocative, click-bait headlines that draw traffic to sites that have nothing to do with actual news. These site owners do considerable harm to the world while siphoning value from advertisers who would not otherwise choose to feature their brands among this kind of junk content.

Recently, the News Media Association (NMA) of the UK asked the British government to investigate the impact that Google and Facebook are having on legitimate news by supporting fake news with their “murky” advertising platforms; and the NMA also cites what appears to be a growing problem of ads supporting terrorist propaganda. As I’ve reported in the past, the lack of control in these ad exchanges is why major brand commercials end up on sites like YouTube alongside ISIS recruiting videos or other violent-extremist propaganda.

Brooke Singman of Fox News notes that this year’s Hyundai Super Bowl spot, which pays tribute to U.S. troops serving overseas—and which cost Hyundai about $5 million to run on TV—appeared on one of YouTube’s terror-linked channels. And while YouTube’s official statements express a zero-tolerance policy for terror-supporting accounts and videos, the problem persists while parent company Google remains typically unclear about its ability to remove targeted content or accounts.

With over 300 hours of video uploaded to YouTube every minute, I don’t think anyone doubts the scope of the challenge; but it is certainly true that the copyright holders, for instance, often see Google as magically omniscient where its own interests are at stake and then mortally fallible in the service of others’ interests. So, I imagine if P&G and other advertisers are truly drawing lines in the sand this year, Google may suddenly discover an extraordinary capacity to weed out terrorist, criminal, and other undesirable content from the YouTube platform.

In fact, Eric Feinberg, CEO of GIPEC, says that he can very quickly identify and organize questionable content on major platforms with the system his company has developed for scanning hashtags in multiple languages. “Because our technology can anticipate key communications strands and images being used by terrorist and hate-speech groups, the system can block, quarantine, and sandbox this kind of content for review before it’s published, thus reducing the chance that ads will appear before or next to undesirable content.”

Where this issue overlaps with security, it is possible that the major social media platforms will begin to feel more pressure from the government to stop profiting—however inadvertently—from terrorist propaganda on their sites. Depending on what form that takes, we are likely to see some civil-libertarian backlash to these policies, and we can expect reality to get lost in the rhetoric on all sides. But for sure money talks. And if the advertisers are demanding that “new media” start to clean house and provide some of the accountability and quality they’re used to from “old media,” my guess is they’re going to get what they want or find other ways to spend their $200 billion.