X sues hate speech researchers whose “scare campaign” spooked Twitter advertisers

As Twitter continues its rebrand as X, it looks like Elon Musk hopes to quash any claims that the platform under its new name is allowing rampant hate speech to fester. Yesterday, X Corp. sued a nonprofit, the Center for Countering Digital Hate (CCDH), for allegedly "actively working to assert false and misleading claims" about spiking levels of hate speech on X and successfully "encouraging advertisers to pause investment on the platform," X's blog said.

In its complaint, X Corp. claims that CCDH's reports have caused an estimated tens of millions of dollars in lost advertising revenue. The company said it's aware of "at least eight" specific organizations, including large, multinational corporations, that "immediately paused their advertising spend on X based on CCDH’s reports and articles." X also claimed that "at least five" companies "paused their plans for future advertising spend" and that three others decided not to reactivate campaigns, all allegedly because of CCDH's reporting.

X alleges that the CCDH is secretly funded by foreign governments and X competitors to mount this attack, and that the group is actively working to censor opposing viewpoints on the platform. Here, X echoes US Senator Josh Hawley (R-Mo.), who accused the CCDH of being a "foreign dark money group" in 2021, following a CCDH report on 12 social media accounts responsible for 65 percent of COVID-19 vaccine misinformation, Fox Business reported.

"This is the same dark money group that tried to have the conservative @FDRLST deplatformed last year. And they’ve gone after other conservative sites as well, like @BreitbartNews," Hawley said. "But who is funding this overseas dark money group—Big Tech? Billionaire activists? Foreign governments? We have no idea. Americans deserve to know what foreign interests are attempting to influence American democracy."

The CCDH's website says that it's funded by "philanthropic trusts and members of the public." A website dedicated to tracking funding sources of progressive organizations, InfluenceWatch, reported that the CCDH, which has offices in the US and the United Kingdom, has ties to the left-wing British Labour Party.

CCDH founder and CEO Imran Ahmed claims that Musk is attempting to censor the CCDH. Ahmed said in a statement provided to Ars that "Elon Musk’s latest legal threat is straight out of the authoritarian playbook."

"He is now showing he will stop at nothing to silence anyone who criticizes him for his own decisions and actions," Ahmed said.

Reuters reported that CCDH's lawyers said X's allegations had no legal basis and accused X of "intimidating those who have the courage to advocate against incitement, hate speech, and harmful content online."

X's lawyer, J. Jonathan Hawk, did not immediately respond to Ars' request for comment.

Disputing the data

In a blog post, X accused the CCDH of "actively working to prevent free expression" by allegedly gaining unauthorized access to X data and taking it out of context to paint X as a platform overwhelmed by hate speech and misinformation. The CCDH's goal, X claimed, was to silence or de-platform certain X users and deprive X of revenue.

The lawsuit was triggered by a report that the CCDH published in June, which found that "Twitter fails to act on 99 percent of hate posted by Twitter Blue subscribers." As media outlets reported on the CCDH's findings, X CEO Linda Yaccarino tweeted a rebuttal, claiming that the report relied on "a collection of incorrect, misleading, and outdated metrics."

X's complaint goes further, attacking that report and many others, saying that "CCDH prepares its 'research' reports and articles using flawed methodologies to advance incorrect, misleading narratives. CCDH’s methodologies use, for example, inappropriately small and cherry-picked, non-randomized data samples that focus on only the social media accounts of organizations and people expressing viewpoints contrary to CCDH’s own views."

In a blog post, X described "several ways in which the CCDH is actively working to prevent free expression," including allegedly targeting users who "speak about issues the CCDH doesn’t agree with" and attempting to de-platform those users. X also claimed that the CCDH's reporting harms free-speech organizations more broadly by hurting X's profits and ultimately endangering those organizations' access to X's free services.

The tension here ultimately seems to spring from the CCDH's mission, which is to advocate for less hate speech and misinformation on platforms. To support its claim that the CCDH is advocating for broad censorship, for example, X's complaint cites CCDH reports calling for anti-vaxxers and climate deniers to be de-platformed.

X argues that the "CCDH seeks to prevent public dialogue and the public’s access to free expression in favor of an ideological echo chamber that conforms to CCDH’s favored viewpoints." And X claims that the CCDH "cherry-picks" data to do this, allegedly ignoring how many impressions the "hundreds of millions of posts made each day on X" receive and instead looking only at the total number of posts that include hate speech or misinformation, which, X says, let the group "falsely claim it had statistical support showing the platform is overwhelmed with harmful content."

Had the CCDH instead weighted posts by the impressions they receive, X's blog claims, the data would have shown that "today, more than 99.99 percent of post impressions are healthy."
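To see why both sides' numbers can be arithmetically defensible yet wildly different, here is a minimal sketch, using entirely hypothetical figures (neither party has published a dataset like this), of how counting hateful posts versus weighting them by impressions changes the headline percentage:

```python
# Hypothetical illustration only: the posts and impression counts below are
# invented for the sake of the arithmetic, not drawn from X or CCDH data.

posts = [
    # (is_hateful, impressions)
    (True, 50),       # hateful post that few people saw
    (True, 120),
    (False, 90_000),  # benign posts with wide reach
    (False, 45_000),
    (False, 300),
]

# Metric 1: share of posts that are hateful (closer to the CCDH's framing)
hateful_post_share = sum(1 for hateful, _ in posts if hateful) / len(posts)

# Metric 2: share of impressions landing on hateful posts (closer to X's framing)
total_impressions = sum(views for _, views in posts)
hateful_impression_share = (
    sum(views for hateful, views in posts if hateful) / total_impressions
)

print(f"hateful share of posts:       {hateful_post_share:.2%}")        # 40.00%
print(f"hateful share of impressions: {hateful_impression_share:.2%}")  # 0.13%
```

In this toy sample, 40 percent of posts are hateful but only about 0.13 percent of impressions are, which is the gap between "overwhelmed with harmful content" and "more than 99.99 percent of post impressions are healthy." Which denominator is the honest one is precisely what X and the CCDH are fighting over.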

X is not the only one disputing the CCDH's data. In several reports, the CCDH cites Brandwatch as a data source. Brandwatch has since tweeted that CCDH's report on Twitter Blue subscribers "relied on incomplete and outdated data" and "contained metrics used out of context to make unsubstantiated assertions about Twitter."

Unauthorized access to data

X's blog accused the CCDH of gaining unauthorized access to data to implement a "scare campaign" and put "ongoing pressure on brands to prevent the public’s access to free expression."

According to X's complaint, the CCDH allegedly illegally obtained data in two ways. First, it scraped the X platform, violating X's terms of service. Second, the CCDH allegedly "induced one of Brandwatch's customers" to share login information so that the CCDH could access X data—which X has since described as "limited, selective, and incomplete"—to support the CCDH's research.

Brandwatch provides a service that enables its customers to "analyze posts and X/Twitter users," X's complaint said. Customers who enter into licensing agreements with Brandwatch can monitor this data to do things like identify influencers or analyze certain topics or sentiments. Back in April, X updated its contract with Brandwatch to prohibit its customers from sharing data with third parties, putting Brandwatch on the hook as jointly liable for any violations.

According to X's complaint, the CCDH was never a Brandwatch customer and never had authorization to access the Brandwatch data that was directly cited in several CCDH reports. X claims that the CCDH's access to this data caused X additional revenue losses "in excess of tens of thousands of dollars," which allegedly "will continue to increase."

X has demanded a jury trial and is asking the US District Court for the Northern District of California to order the CCDH to pay damages and to permanently bar the group from accessing or using Brandwatch data.

Ars could not immediately reach Brandwatch for comment.

The CCDH has denied X's allegations and claimed that Musk is misleading the public about hate spreading on the X platform. Ahmed said that the CCDH's "research shows that hate and disinformation is spreading like wildfire on the platform under Musk’s ownership and this lawsuit is a direct attempt to silence" efforts to raise awareness of the problem.

"People don’t want to see or be associated with hate, antisemitism, and the dangerous content that we all see proliferating on X," Ahmed said. "Musk is trying to ‘shoot the messenger’ who highlights the toxic content on his platform rather than deal with the toxic environment he’s created. The CCDH’s independent research won’t stop—Musk will not bully us into silence.”

X restructures its trust and safety team

The same day that X sued the CCDH, Musk and Yaccarino announced that they will both now oversee X's trust and safety team, Reuters reported.

The trust and safety team is responsible for content moderation, and both executives have embraced X's current policy of limiting the reach of offensive content rather than removing it outright. Under that strategy, what matters to Musk and Yaccarino is not how many posts include hate speech but how many X users end up viewing them.

Groups like the CCDH, which would prefer to see no hate speech on the platform at all, will likely keep criticizing that strategy, and the volume of hate speech on the platform will likely remain in dispute as each side relies on different metrics. It will be up to advertisers to decide whose metrics to trust, and X has an obvious stake in winning that fight.

It could be easier for X to win that battle if advocacy groups like the CCDH simply didn't have access to X data. And at least one other organization that X has sued alleges that X is aggressively working to cut off public access to its data by suing over the same kind of data scraping that X accuses the CCDH of engaging in.

According to Or Lenchner, the CEO of Bright Data, which bills itself as the world's leading open data platform, X's lawsuit against Bright Data "is an effort to build a wall around publicly available data on Twitter" and "has no basis." Lenchner told Ars that Bright Data collects "public web data for more than 20,000 customers worldwide, including Fortune 500 companies, academic institutions, non-profits, NGOs and large social media networks" and that its practices are "fully compliant with the law."

"We are committed to making public data broadly available to everyone to benefit society and will vigorously defend our position in court to ensure the Internet remains accessible to all," Lenchner told Ars.

In its Bright Data complaint, X accused Bright Data of building "an illicit data-scraping business on the backs of innovative technology companies like X Corp."

Researchers have warned that the harder X makes it to access X user data, the less the world will know about how the platform is working to shield users from harmful content.

Improving X's reputation as a safe platform for advertisers is critical to X's success, and that's likely why Yaccarino told X employees in an email that X is currently seeking to hire "a new leader for brand safety and suitability," Reuters reported.

In June, Ella Irwin, the former head of then-Twitter's trust and safety team, resigned. Now, Yaccarino has told employees that three X executives will take her place, likely Musk, Yaccarino, and the new brand safety and suitability leader. Those leaders will be tasked with proving that limiting the reach of harmful posts can make a platform just as safe as reducing the overall volume of harmful posts. X's blog made the case that Musk's philosophy on brand safety is crucial to promoting as much free expression on the platform as possible, but embracing that philosophy requires users to trust Musk's metrics.

"Free expression and platform safety are not at odds," X's blog said. "We are proving this every day through innovative enforcement policies that have helped reduce hateful content viewed on the platform."