A few weeks after defeating Elon Musk’s attempt to silence it in court, the anti-hate research nonprofit the Center for Countering Digital Hate (CCDH) is back with a new piece of research into Musk’s social media platform X (formerly Twitter). The study builds on earlier work investigating his impact on online speech, spotlighting how policy changes Musk has enacted actively reward posters of hate speech with increased reach, engagement and even direct payouts through X’s subscriptions feature.
The latest CCDH research takes the form of a case study looking at growth rates for ten influential accounts that pay for X Premium and have posted anti-Jewish and/or anti-Muslim hate speech since the Israel-Gaza conflict sparked by Hamas’ attack on Israel on October 7, 2023. Some of the accounts pivoted to hateful posts about the war after previously posting COVID-19-related conspiracy theory content, per the report.
The ten accounts tracked for the study — which is entitled ‘Hate Pays: How X accounts are exploiting the Israel-Gaza conflict to grow and profit’ — are: Jackson Hinkle; Dr. Anastasia Maria Loupis; Censored Men; Jake Shields; Dr. Eli David; Radio Genoa; Ryan Dawson; Keith Woods; Way of the World; and Sam Parker.
The CCDH found these accounts were able to boost their reach on X after posting hateful content about the war. The report discusses examples of hate speech posted by the accounts, such as tweets invoking antisemitic tropes like the blood libel or seeking to dehumanize Palestinians by depicting them as rats.
“Each of the accounts showed slow follower growth in the four months before October 7th, for a combined growth of approximately 1 million followers. However, in the four months after the outbreak of the conflict, they collectively gained 4 million new followers,” the CCDH wrote, noting this represents nearly four times the collective growth seen in the four months before the war.
Growth rates for individual accounts varied over the period, with the highest growth multiple recorded being 9.6x (for Dawson’s account), followed by 8.3x (for Hinkle) and 7.1x (for Parker). At the lower end, Way of the World grew its following 1.7x over the period.
The report includes a potted history of the tracked accounts’ notoriety, noting, for example, that Hinkle is banned by WhatsApp, YouTube and PayPal; that the anonymous Censored Men account used to post mainly in defence of toxic masculinity influencer Andrew Tate but has focused on the Israel-Gaza conflict since October 7; and that Dawson, a Holocaust denier who also believes the 9/11 terrorist attacks were carried out by Israel, was previously banned from X but had his account reinstated in 2023 under Musk.
Since taking over Twitter, as X was still called back in October 2022, the billionaire has reversed a number of legacy account bans, including welcoming back notorious white supremacists and neo-Nazis. Coupled with policy changes Musk has pushed in areas like content moderation, account verification and premium features (such as prioritized ranking for paid accounts’ posts), the upshot is a polarizing speech platform where it’s increasingly difficult to distinguish genuine information from lies, and where the tone of posts all too often skews towards conversational outrage (or worse).
The CCDH contends this is intentional: a deliberate strategy by Musk to profit from tragedy. It accuses him of embracing hateful accounts and configuring X so that purveyors of hate speech are both able and encouraged to turn war and human suffering into an opportunity to raise their profile on the service and earn revenue from posts that exploit violence and misery.
Six of the ten accounts it looked at have enabled X’s subscriptions feature, meaning their followers can pay them to access additional content. The report also records a post from early October in which Hinkle shared a screenshot appearing to show he had made $550 in ad revenue over the course of a month, directly profiting from engagement driven by hateful posts.
In another finding, the CCDH said its analysis of the accounts showed that even critical resharing, such as quote tweets denouncing hateful content, raised their visibility and reach (and, potentially, their revenue-generating opportunities). Such critical reshares contributed as much as 28% of the reach of hateful posts, per the report, which suggested this figure is a conservative estimate because it does not account for X’s own algorithmic response to those reshares, which amplifies the posts further in a bid to harvest even more engagement for ad profit.
Ad-funded business models that earn revenue based on user engagement ultimately drive this anti-social outrage mechanism. In X’s case, Musk’s erratic behavior has alienated some advertisers. But not all: The CCDH found ads being served alongside hateful posts made by all the tracked accounts. “We found ads for Oreos, the NBA, the FBI and even X itself placed near hateful posts,” it wrote.
“Under Elon Musk’s ownership, X appears to be pursuing a strategy of hosting as much controversial content as possible,” a CCDH spokesperson told TechCrunch, responding to questions about the research. “We know that this controversial content is addictive, not just for users who approve of it but also for users who criticize it too. The potential benefit to X is that these controversies could ramp up user time spent on the platform and increase ad revenue — but only if brands are willing to pay for ads that could be displayed near toxic content.”
“The accounts studied by our report have grown sharply despite posting false or hateful content, showing that posting such content is no impediment to growth on X. This is not unique to the Israel-Gaza conflict but it is the latest example of the problem. Our previous research into accounts that were reinstated following Musk’s takeover of Twitter shows that X stands to make significant ad revenue by welcoming users posting a range of topical hate and disinformation, from brutal misogyny to anti-vaccine conspiracies.”
Commenting on the report in a statement, Imran Ahmed, CEO and founder of the CCDH, added: “The public and advertisers need to know more about the symbiotic, profitable relationship between X and hate-peddling ‘influencers’. Lawmakers must act to enforce greater transparency and accountability from platforms and to allow these companies to be held responsible for harming the civil rights and safety of Jews, Muslims and other minority communities.”
Musk has previously claimed that hate speech has decreased on his watch, but earlier CCDH research debunked that claim.
X is also currently under investigation in the European Union over a string of suspected breaches of the bloc’s online governance and content moderation regime, including its response to illegal content, which may include hate speech. Penalties for confirmed breaches of the EU’s Digital Services Act can reach 6% of global annual turnover.