Mike Benz
&
ALLUM BOKHARI
Mike Benz | Deconstructing the Censorship Industrial Complex
Nov 14, 2023 #bitcoin #podcast #freedomofspeech
Marty sits down with Mike Benz, the executive director of the Foundation for Freedom Online to discuss digital censorship, corporate control, prosecution of uncooperative politicians, and more. Mike on Twitter: / mikebenzcyber FFO website: https://foundationforfreedomonline.com/
About the Author(s)
Mike Benz is the Director of the Foundation for Freedom Online. He is a former State Department official with responsibilities in formulating and negotiating US foreign policy on international communications and information technology matters.
BELOW: Typical sneaky smear item on Mike Benz produced and circulated by the censorship industry (in this case NBC News) to denigrate its enemies.
APPENDIX 1 — with Allum Bokhari
The Advertising Industry’s Deepening Role in Online Censorship
- by Allum Bokhari
SUMMARY
- The advertising industry plays a pivotal role in online censorship through collective boycotts of platforms and websites that host disfavored content.
- Two organizations that led the way on boycotting and financial blacklisting, Sleeping Giants and NewsGuard, emerged directly from the advertising industry.
- Brands and advertising agencies typically invoke the pretext of “brand safety” for boycott campaigns, the most recent being the mass exodus of advertisers from Elon Musk’s X.
- Recently, prominent players in the advertising industry have sought to move beyond this pretext, dispensing with the need to demonstrate potential harm to a brand before blacklisting disfavored content.
In the arsenal of the censorship-industrial complex, few weapons have been more effective than advertiser boycotts. Long before online censorship reached its peak in 2020 and 2021, advocates of online censorship had identified online advertisers as the most important source of pressure on social media companies to restrict free speech. When direct appeals to social media platforms fail, pro-censorship campaigners use the threat of advertiser boycotts to produce the desired result.
This playbook has been effective. Since 2016, the advertising industry has repeatedly engaged in boycotts against platforms in a bid to restrict online free speech to a narrow set of “brand safe” viewpoints. Three major incidents stand out, each one targeted at a major online platform:
- The 2017 YouTube ‘Adpocalypse’: coordinated media coverage of ads appearing next to extremist and “hateful” videos on YouTube led to an advertising boycott that cost the company millions. YouTube responded by implementing a draconian system of demonetizing anything remotely controversial, decimating the revenue streams of independent creators and establishing a content moderation regime that persists to this day.
- The 2020 Facebook “disinformation” boycott: driven by panicked media coverage about Facebook’s alleged failure to sufficiently censor then-president Donald Trump and his supporters, advertisers fled Facebook at the height of the 2020 presidential election campaign.
- The 2022-23 X boycott: calls for advertisers to financially throttle X, formerly Twitter, began before Elon Musk even completed his acquisition. Upon his takeover, the advertising industry quickly complied. Within a year, X’s advertising revenue had been cut in half.
On top of this, two of the most potent forces of financial blacklisting emerged directly from the advertising industry:
- NewsGuard: a private company that financially throttles alternative media by selling blacklists of disfavored news sources to advertisers, NewsGuard is a creation of the advertising industry. Its 2018 seed round was led by Publicis Groupe, one of the “big six” ad agency holding companies. It counts two other Big Six companies, IPG and Omnicom, as clients.
- Sleeping Giants: founded by far-left advertising professionals, Sleeping Giants was a campaign to financially throttle alternative news sources by spreading smears about them to any brand that advertised with them. It succeeded in triggering the flight of advertisers from conservative news sites. Its alumni continue to use the same playbook against non-establishment media.
Finally, there is a litany of examples of major players in the advertising industry independently pushing censorship. “Big six” agencies and their subsidiaries regularly engage in pro-censorship activities. Examples include IPG’s principles for responsible media and content, which preclude advertising next to “hate speech”; WPP subsidiary GroupM’s decision to join the “Conscious Advertising Network,” which promotes similar principles; and Omnicom and IPG’s decision to offer their clients access to NewsGuard blacklists.
The Global Alliance for Responsible Media (GARM)
While advertiser boycotts are often preceded by media and NGO-led pressure campaigns, the advertising industry has also acted on its own initiative to promote internet censorship.
In 2019, the World Federation of Advertisers, a body that represents roughly 90 percent of global advertising spend (almost one trillion dollars), moved to institutionalize the advertising industry’s new role as the global internet police by establishing the Global Alliance for Responsible Media (GARM).
GARM’s mission is to establish a shared definition of “harmful content” across the entire ad industry, so that advertisers can collectively blacklist disfavored sources of content. It is, in effect, a declaration of permanent boycott against platforms that are too free speech-friendly.
Even before GARM’s establishment, the WFA was attempting to influence content on social media. In 2020, after major advertisers boycotted Facebook, the platform joined Twitter and YouTube in reaching an agreement with the WFA to arrive at “common definitions” of harmful content, further agreeing to have some of their processes reviewed by external auditors.
GARM’s “brand safety” floor was the next evolution of the ad industry’s plan to standardize speech restrictions across social media. In its most recent version, GARM’s shared framework includes the most common pretexts for censoring political speech: misinformation, hate speech, content that “shocks, offends, and insults,” and even the improper discussion of sensitive social issues.
If GARM’s “brand safety floor” were adopted across social media platforms, it would ensure online speech is nowhere close to the ideal of the First Amendment.
According to internal communications obtained by the House Judiciary Committee, this is unlikely to be of any concern to GARM, whose co-founder is on record complaining about the “extreme global interpretation of the US constitution,” and the use of “‘principles for governance’ and applying them as literal law from 230 years ago (made by white men exclusively).”
GARM sets its own standards for content, and it makes no secret of the fact that it wants its “brand safety floor” to be adopted by any platform that wants to receive its members’ ad revenue:
- Platforms will adopt, operationalize and continue to enforce monetization policies with a clear mapping to GARM brand suitability framework
- Platforms will leverage their community standards and monetization policies to uphold the GARM brand safety floor
- Advertising technology providers will adopt and integrate GARM definitions into targeting and reporting services via clear mapping or overt integration
- Agencies will leverage the framework to guide how they invest with platforms at the agency-wide level and at the individual campaign level
- Marketers will use the definitions to set brand risk and suitability standards for corporate, brand and campaign levels.
At a recent hearing of the House Judiciary Committee, Christian Juhl, CEO of WPP subsidiary and GARM member GroupM, explained that the whole purpose of GARM is to create a one-size-fits-all definition of “harmful content” for platforms.
“Brand suitability is particular to each brand: what is unsuitable to one may be suitable to another. But all brands generally agree they do not want to appear next to illegal or harmful content. Many also seek to avoid ad placements near content that, while not illegal, does not align with their values.
With the increasing focus on brand suitability, brands wanted to better understand how publishers were identifying, prohibiting, and removing harmful content. What they found was that every platform took a different approach. Definitions of harmful content also varied. Without consistent standards, companies were concerned their ads would end up appearing in unsuitable environments. We believed that consistent standards were needed to help our clients connect with consumers, which is why we and other organizations came together to establish the Global Alliance [for] Responsible Media, or GARM. GARM developed standard definitions of content that brands might consider unsuitable so that advertisers and publishers could speak a common language about sensitive content.”
Later in the hearing, Juhl summed up GARM’s mission as “making order of something that had no order to begin with.”
Given that every major platform (with the exception of Substack) counts advertising as a major source of revenue, these shared standards for advertisers are, in effect, shared standards for all of social media.
As shown in the House Judiciary Committee’s report, platforms and publishers that deviate from GARM’s standards (or whose viewpoints GARM members simply don’t like) are met with swift retribution, even if ads from GARM brands don’t directly appear next to content that the cartel objects to.
Here are three major examples of GARM’s efforts to control online content, from the report:
- The Twitter/X boycott: after a collective boycott of X over Elon Musk’s relaxed content moderation regime, GARM members bragged about “taking on Elon Musk” and taking X “80 percent below revenue forecasts.”
- Blacklisting non-establishment media: GARM members “closely watched” disfavored news outlets to find a pretext for withdrawing ad dollars, with Breitbart News and The Daily Wire specifically named. GARM also collaborated with NewsGuard, a private company, and the Global Disinformation Index (GDI), a British nonprofit. The primary purpose of both these organizations is to build blacklists of disfavored news sources.
- Threatening Spotify: members of GARM’s steer team placed sustained pressure on Spotify over its support for Joe Rogan, then the world’s number-one podcast host, for hosting guests skeptical of official COVID-19 policies. Members of GARM’s steer team also advised Coca-Cola, a major global brand, that Rogan and Spotify were a “major area of concern.”
Despite the fact that Rogan’s content was not juxtaposed with any major GARM brand, representatives of the ad cartel still got involved, telling Spotify that it had a “disregard for spreading dangerous misinformation” and pressing the platform on the “process of joining GARM.”
Beyond “Brand Safety”
At the same time, another email shows GARM co-founder Rob Rakowitz admitting that “brand safety is somewhat separate” from their concerns with Spotify because “brands aren’t being slotted into [The Joe Rogan Experience] by accident per se.”
This raises questions about the “brand safety” argument: the industry’s go-to, politically neutral pretext for boycotting disfavored platforms.
According to the “brand safety” argument, decisions to withdraw ads from social media platforms or news websites are driven by the fear of juxtaposing a client’s ad with controversial content that divides the public. This keeps the brand safe from any consumer backlash or negative press caused by that content — so goes the argument.
As Rakowitz admits in his disclosed email, even if we accept that brands might be damaged by running ads next to The Joe Rogan Experience (then the most popular podcast in the world), there was no such risk here, since no such ads were being run. Yet GARM pressured Spotify anyway.
This reflects the advertising industry’s gung-ho attitude toward pushing censorship on social media platforms over the last decade, which has included a desire to move beyond the “brand safety” principle.
In 2020, Interpublic Group (IPG), one of the world’s “big six” advertising holding companies and a GARM member, argued that brands should think about not just their own “safety” but the more expansive commitment of “responsibility.”
Brand responsibility, explained IPG in an announcement, shifts focus from protecting brands to protecting “the communities that a brand serves, weighing the societal impact of the content, the publishers and services, and the platforms being funded by advertising.”
IPG’s “media responsibility principles” justify financial throttling for a litany of pretexts that are typically used to curb political speech and blacklist non-establishment media. Under IPG’s principles, ad revenue can be withdrawn from any site or platform that “creates hostile conversation environments,” “spreads misinformation,” or “fuels hatred on the grounds of race, religion, nationality, migration status, sexuality, gender or gender identity, disability or any other group characteristic.”
By shifting the justification from “brand safety” to “brand responsibility,” IPG sidesteps the only limiting principle on advertiser blacklisting: the need to prove that disfavored content somehow poses a risk to brands.
IPG’s own press release revealed the vagueness of its new principles: anything that has a negative “societal impact” or “contributes to harm” is deemed fair game for demonetization.
IPG’s “media responsibility” initiatives are housed in its ESG division, IPG ESG, which includes broad commitments to fighting “hate speech” and promoting diversity.
In a 2020 article, Christian Juhl, CEO of WPP subsidiary GroupM, the largest media buying company in the world and a member of GARM’s steer team, also wrote about the need to go beyond brand safety.
While praising the 2020 boycott of Facebook, Juhl argued that advertiser action driven by “brand safety” concerns hasn’t gone far enough.
So far, advertiser efforts to address the situation have proved about as effective as those “brand safety” protocols. The #StopHateforProfit boycott of Facebook this summer reportedly drained the platform of millions of dollars in revenue. But the effort fizzled once advertisers realized the boycott was doing more damage to their own bottom line than it was to Facebook’s.
As a solution, Juhl proposed the concept of “socially conscious media buying,” inspired by ESG funds, which takes the “ethical and moral consequences” of ad spending into account.
In addition to cost-per-impression, we need to be measuring cost-per-social contribution. We need to start factoring a media placement’s carbon footprint into our ad pricing. We need to support publishers that reach more diverse audiences, even if those publishers don’t yet provide the level of audience data we’ve grown accustomed to. And we need to do this with more than the usual 10% experimental budget.
This means the introduction of new tools that empower marketers to consider these sorts of ethical and moral consequences when buying media. Just as ESG investing and sustainable funds have become billion-dollar businesses on Wall Street, we believe that socially conscious media buying will find an enthusiastic audience among advertisers.
“Brand safety” is already a concept that requires guesswork, and can easily be skewed by ideological and political biases. “Ethical and moral consequences” is an even more subjective measure.
Rakowitz’s actions with regards to Spotify, IPG’s “media responsibility principles,” and Juhl’s “socially conscious media buying” all point to the same conclusion: the advertising industry wants to exploit its control of the purse-strings of ad revenue to influence content around the web.
And it considers that cause to be too important – indeed, to have too many “ethical and moral consequences” – to be restricted by the limiting principle of brand safety.
How Advertisers Censor Content - A Talk By Allum Bokhari
Learn more about FUTO at https://futo.org/ Foundation for Freedom Online: https://foundationforfreedomonline.com/ Filmed and edited by @TheKinoCorner
APPENDIX 3
RFK Says He’ll Show Trump How To Drain Swamp! w/ Mike Benz
Sep 12, 2024 #TheJimmyDoreShow
According to RFK Jr., Donald Trump had every intention of “draining the swamp” during his first administration, but was overwhelmed by the task at hand and wound up relying on the same corporate insiders as previous administrations to staff up, undermining any chance of swamp draining at the outset. But this time, with his help, things will be different, RFK says, since he knows exactly how the regulatory agencies work and how to take them down from the inside. Jimmy speaks with Foundation for Freedom Online Executive Director Mike Benz about whether RFK is right and has any chance of success at such a revolutionary project.
Follow Mike Benz on Twitter: / mikebenz
APPENDIX 4
Mike Benz | The Whole-of-Society Censorship Framework | NatCon 4
Mike Benz's address at the National Conservatism Conference in Washington, DC on July 9, 2024.
- In cynicism and power, the US propaganda machine easily surpasses Orwell’s Ministry of Truth.
- Now the fight against anti-semitism is being weaponised as a new sanctimonious McCarthyism.
- Unless opposed, neither justice nor our Constitutional right to Free Speech will survive this assault.
The views expressed herein are solely those of the author and may or may not reflect those of The Greanville Post.
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License •
ALL CAPTIONS AND PULL QUOTES BY THE EDITORS NOT THE AUTHORS