Some blue check “Premium” subscribers on X, formerly Twitter, who are spreading misinformation may be eligible for X’s ads revenue sharing program. That’s the conclusion reached by NewsGuard, a for-profit misinformation watchdog organization, in a report that tracked ads appearing on 30 posts between November 13th and 22nd. The posts made conspiratorial claims about the Israel-Hamas war and reached a collective 92 million views.
Each of the 10 accounts NewsGuard referenced had over 100,000 followers — one of the metrics it uses to classify them as “misinformation super-spreader” posters.
NewsGuard’s VP of communications, Veena McCoole, told The Verge in an email that the 30 posts from the report “advanced some of the most egregious false or misleading claims about the Israel-Hamas war, which NewsGuard had previously debunked in its Misinformation Fingerprints database of the most significant false and misleading claims spreading online.” She added that “it’s fair to assume” that the posts aren’t the only ones “advancing misinformation or hate speech” that may qualify for revenue sharing on X.
A post falsely claiming an image is AI-generated has a Pizza Hut ad below it.
NewsGuard’s findings echo other recent reports questioning whether X is putting ads on hate speech or false claims. Earlier this week, X sued Media Matters over a report that showed major advertisers’ content being displayed under pro-Nazi content, prompting large companies like Apple and Disney to pull advertising and stop posting on X. It also noted X owner Elon Musk had replied in support of an antisemitic post. A September report from the Center for Countering Digital Hate (CCDH) detailed hate speech X wasn’t removing.
On Tuesday, an official X account posted that NewsGuard would be publishing the report and suggested the company would only share its research data “when you pay.” A spreadsheet containing the data — that is, the posts that were studied — was linked in the report. When reached for comment, X’s press line responded with an email auto-reply: “Busy now, please check back later.”
In October, Musk wrote that the platform would not share revenue with posts that have corrections from Community Notes, its crowd-sourced moderation tool. X’s revenue-sharing terms say that it won’t give payouts when it finds “activity that we believe were due to any breach of the X User Agreement.”
The “X User Agreement” mentioned there refers at times to a single document — the X terms of service — and at times to a constellation of rules spread across several different pages, none of which appears to mention the Community Notes exclusion. Other content the agreement lists as violations includes “misleading media” that’s “deceptively altered, manipulated, or fabricated” and any content that’s “shared in a deceptive manner or with false context.”
NewsGuard’s analysts documented 30 posts with “false or egregiously misleading claims about the Israel-Hamas war.” According to the report, 24 of those posts had “200 ads from 86 major brands, nonprofits, educational institutions, and governments” below them. Of the 30 posts, 15 had Community Notes appended — which should make them ineligible for payouts, per Musk’s post about revenue — while the other 15 had no such notes, and 14 of those carried ads for “70 unique major organizations.”
NewsGuard wrote that ads for companies like Oracle, Pizza Hut, and Anker appeared under posts making claims such as that Hamas’ October 7th attacks were “false flags” or that conservative podcaster Ben Shapiro shared an AI-generated picture of a child killed by Hamas. As of this writing, neither of those linked posts has a Community Note appended, and the pair have been viewed 1.1 million and 22.4 million times, respectively.
Ads from Airbnb and Asus also sat under misleading posts. Others had ads beneath them for governmental organizations like the FBI and Taiwan’s Ministry of Culture. Nonprofits like the UK Royal Society of Chemistry and the University of Baltimore were reportedly in the mix, too.
Update November 24th, 2023, 7:16PM ET: Removed a sentence saying we’d asked NewsGuard a question about its study, as its response was already included.