X Is a White-Supremacist Site - The Atlantic
https://www.theatlantic.com/technology/archive/2024/11/x-white-supremacist-site/680538
Elon Musk has made one of Twitter’s most glaring problems into a core feature on X.
By Charlie Warzel
November 5, 2024
X has always had a Nazi problem. I’ve covered the site, formerly known as Twitter, for more than a decade and reported extensively on its harassment problems, its verification (and then de-verification) of a white nationalist, and the glut of anti-Semitic hatred that roiled the platform in 2016.
But something is different today. Heaps of unfiltered posts that plainly celebrate racism, anti-Semitism, and outright Nazism are easily accessible and possibly even promoted by the site’s algorithms. All the while, Elon Musk—a far-right activist and the site’s owner, who is campaigning for and giving away millions to help elect Donald Trump—amplifies horrendous conspiracy theories about voter fraud, migrants run amok, and the idea that Jewish people hate white people. Twitter was always bad if you knew where to look, but because of Musk, X is far worse. (X and Musk did not respond to requests for comment for this article.)
It takes little effort to find neo-Nazi accounts that have built up substantial audiences on X. “Thank you all for 7K,” one white-nationalist meme account posted on October 17, complete with a heil-Hitler emoji reference. One week later, the account, which mostly posts old clips of Hitler speeches and content about how “Hitler was right,” celebrated 14,000 followers. One post, a black-and-white video of Nazis goose-stepping, has more than 187,000 views. Another racist and anti-Semitic video about Jewish women and Black men—clearly AI-generated—has more than 306,000 views. It was also posted in late October.
Many who remain on the platform have noticed X decaying even more than usual in recent months. “I’ve seen SO many seemingly unironic posts like this on Twitter recently this is getting insane,” one X user posted in response to a meme that the far-right influencer Stew Peters recently shared. It showed an image of Adolf Hitler holding a telephone with overlaid text reading, “Hello … 2024? Are you guys starting to get it yet?” Peters appended the commentary, “Yes. We’ve noticed.” The idea is simply that Hitler was right, and X users ate it up: As of this writing, the post has received about 67,000 likes, 10,000 reposts, and 11.4 million views. When Musk took over, in 2022, there were initial reports that hate speech (anti-Black and anti-Semitic slurs) was surging on the platform. By December of that year, one research group described the increase in hate speech as “unprecedented.” And it seems to only have gotten worse. There are far more blatant examples of racism now, even compared with a year ago. In September, the World Bank halted advertising on X after its promoted ads were showing up in the replies to pro-Nazi and white-nationalist content from accounts with hundreds of thousands of followers. Search queries such as Hitler was right return posts with tens of thousands of views—they’re indistinguishable from the poison once relegated to the worst sites on the internet, including 4chan, Gab, and Stormfront.
The hatred isn’t just coming from anonymous fringe posters either. Late last month, Clay Higgins, a Republican representative from Louisiana, published a racist, threatening post about the Haitians in Springfield, Ohio, saying they’re from the “nastiest country in the western hemisphere.” Then he issued an ultimatum: “All these thugs better get their mind right and their ass out of our country before January 20th,” he wrote in the post, referencing Inauguration Day. Higgins eventually deleted the post at the request of his House colleagues on both sides of the aisle but refused to apologize. “I can put up another controversial post tomorrow if you want me to. I mean, we do have freedom of speech. I’ll say what I want,” he told CNN later that day.
And although Higgins did eventually try to walk his initial post back, clarifying that he was really referring to Haitian gangs, the sentiment he shared with CNN is right. The lawmaker can put up another vile post maligning an entire country whenever he desires. Not because of his right to free speech—which exists to protect against government interference—but because of how Musk chooses to operate his platform. Despite the social network’s policy that prohibits “incitement of harassment,” X seemingly took no issue with Higgins’s racist post or its potential to cause real-world harm for Springfield residents. (The town has already closed and evacuated its schools twice because of bomb threats.) And why would X care? The platform, which reinstated thousands of banned accounts following Musk’s takeover, in 2022—accounts that belong to QAnon supporters, political hucksters, conspiracy theorists, and at least one bona fide neo-Nazi—is so inundated with bigoted memes, racist AI slop, and unspeakable slurs that Higgins’s post seemed almost measured by comparison. In the past, when Twitter seemed more interested in enforcing content-moderation standards, the lawmaker’s comments might have resulted in a ban or some other disciplinary response: On X, he found an eager, sympathetic audience willing to amplify his hateful message.
His deleted post is instructive, though, as a way to measure the degradation of X under Musk. The site is a political project run by a politically radicalized centibillionaire. The worthwhile parts of Twitter (real-time news, sports, culture, silly memes, spontaneous encounters with celebrity accounts) have been drowned out by hateful garbage. X is no longer a social-media site with a white-supremacy problem, but a white-supremacist site with a social-media problem.
Musk has certainly bent the social network to support his politics, which has recently involved joking on Tucker Carlson’s show (which streams on X) that “nobody is even bothering to try to kill Kamala” and repurposing the @america handle from an inactive user to turn it into a megaphone for his pro-Trump super PAC. Musk has also quite clearly reengineered the site so that users see him, and his tweets, whether or not they follow him.
When Musk announced his intent to purchase Twitter, in April 2022, the New York Times columnist Ezra Klein aptly noted that “Musk reveals what he wants Twitter to be by how he acts on it.” By this logic, it would seem that X is vying to be the official propaganda outlet not just for Trump generally but also for the “Great Replacement” theory, which states that there is a global plot to eradicate the white race and its culture through immigration. In just the past year, Musk has endorsed multiple posts about the conspiracy theory. In November 2023, in response to a user named @breakingbaht who accused Jews of supporting bringing “hordes of minorities” into the United States, Musk replied, “You have said the actual truth.” Musk’s post was viewed more than 8 million times.
Though Musk has publicly claimed that he doesn’t “subscribe” to the “Great Replacement” theory, he appears obsessed with the idea that Republican voters in America are under attack from immigrants. Last December, he posted a misleading graph suggesting that the number of immigrants arriving illegally was overtaking domestic birth rates. He has repeatedly referenced a supposed Democratic plot to “legalize vast numbers of illegals” and put an end to fair elections. He has falsely suggested that the Biden administration was “flying ‘asylum seekers’, who are fast-tracked to citizenship, directly into swing states like Pennsylvania, Ohio, Wisconsin and Arizona” and argued that, soon, “everywhere in America will be like the nightmare that is downtown San Francisco.” According to a recent Bloomberg analysis of 53,000 of Musk’s posts, the billionaire has posted more about immigration and voter fraud than any other topic (more than 1,300 posts in total), garnering roughly 10 billion views.
But Musk’s interests extend beyond the United States. This summer, during a period of unrest and rioting in the United Kingdom over a mass stabbing that killed three children, the centibillionaire used his account to suggest that a civil war there was “inevitable.” He also shared (and subsequently deleted) a conspiracy theory that the U.K. government was building detainment camps for people rioting against Muslims. Additionally, X was instrumental in spreading misinformation and fueling outrage among far-right, anti-immigration protesters.
In Springfield, Ohio, X played a similar role as a conduit for white supremacists and far-right extremists to fuel real-world harm. One of the groups taking credit for singling out Springfield’s Haitian community was Blood Tribe, a neo-Nazi group known for marching through city streets waving swastikas. Blood Tribe had been focused on the town for months, but not until prominent X accounts (including Musk’s, J. D. Vance’s, and Trump’s) seized on a Facebook post from the region did Springfield become a national target. “It is no coincidence that there was an online rumor mill ready to amplify any social media posts about Springfield because Blood Tribe has been targeting the town in an effort to stoke racial resentment against ‘subhuman’ Haitians,” the journalist Robert Tracinski wrote recently. Tracinski argues that social-media channels (like X) have been instrumental in transferring neo-Nazi propaganda into the public consciousness—all the way to the presidential-debate stage. He is right. Musk’s platform has become a political tool for stoking racial hatred online and translating it into harassment in the physical world.
The ability to drag fringe ideas and theories into mainstream political discourse has long been a hallmark of X, even back when it was known as Twitter. That power has always cut both ways: the same mechanics that spread hate can also narrow the distance between activists and people in positions of power. Social-justice movements such as the Arab Spring and Black Lives Matter owe some of the success of their early organizing efforts to the platform.
Yet the website has also been one of the most reliable mainstream destinations on the internet to see Photoshopped images of public figures (or their family members) in gas chambers, or crude, racist cartoons of Jewish men. Now, under Musk’s stewardship, X seems to run in only one direction. The platform eschews healthy conversation. It abhors nuance, instead favoring constant escalation and engagement-baiting behavior. And it empowers movements that seek to enrage and divide. In April, an NBC News investigation found that “at least 150 paid ‘Premium’ subscriber X accounts and thousands of unpaid accounts have posted or amplified pro-Nazi content on X in recent months.” According to research from the extremism expert Colin Henry, since Musk’s purchase, there’s been a decline in anti-Semitic posts on 4chan’s infamous “anything goes” forum, and a simultaneous rise in posts targeting Jewish people on X.
X’s own transparency reports show that the social network has allowed hateful content to flourish on its site. In its last report before Musk’s acquisition, covering the second half of 2021, Twitter suspended about 105,000 of the more than 5 million accounts reported for hateful conduct. In the first half of 2024, according to X, the social network received more than 66 million hateful-conduct reports, but suspended just 2,361 accounts. It’s not a perfect comparison, as the way X reports and analyzes data has changed under Musk, but the company is clearly taking action far less frequently.
Because X has made it more difficult for researchers to access data by switching to a paid plan that prices out many academics, it is now difficult to get a quantitative understanding of the platform’s degradation. The statistics that do exist are alarming. Research from the Center for Countering Digital Hate found that in just the first month of Musk’s ownership, anti–Black American slurs used on the platform increased by 202 percent. The Anti-Defamation League found that anti-Semitic tweets on the platform increased by 61 percent in just two weeks after Musk’s takeover. But much of the evidence is anecdotal. The Washington Post summed up a recent report from the Institute for Strategic Dialogue, noting that pro-Hitler content “reached the largest audiences on X [relative to other social-media platforms], where it was also most likely to be recommended via the site’s algorithm.” Since Musk took over, X has done the following:
Seemingly failed to block a misleading advertisement post purchased by Jason Köhne, a white nationalist with the handle @NoWhiteGuiltNWG.
Seemingly failed to block an advertisement calling to reinstate the death penalty for gay people.
Reportedly run ads on 20 racist and anti-Semitic hashtags, including #whitepower, despite Musk pledging that he would demonetize posts that included hate speech. (After NBC asked about these, X removed the ability for users to search for some of these hashtags.)
Granted blue-check verification to an account with the N-word in its handle. (The account has since been suspended.)
Allowed an account that praised Hitler to purchase a gold-check badge, which denotes an “official organization” and is typically used by brands such as Doritos and BlackRock. (This account has since been suspended.)
Seemingly failed to take immediate action on 63 of 66 accounts flagged for disseminating AI-generated Nazi memes from 4chan. More than half of the posts were made by paid accounts with verified badges, according to research by the nonprofit Center for Countering Digital Hate.
None of this is accidental. The output of a platform tells you what it is designed to do: In X’s case, all of this is proof of a system engineered to give voice to hateful ideas and reward those who espouse them. If one is to judge X by its main exports, then X, as it exists now under Musk, is a white-supremacist website.
You might scoff at this notion, especially if you, like me, have spent nearly two decades willingly logged on to the site, or if you, like me, have had your professional life influenced in surprising, occasionally delightful ways by the platform. Even now, I can scroll through the site’s algorithmic pond scum and find things worth saving—interesting commentary, breaking news, posts and observations that make me laugh. But these exceptional morsels are what make the platform so insidious, in part because they give cover to the true political project that X now represents and empowers.
As I was preparing to write this story, I visited some of the most vile corners of the internet. I’ve monitored these spaces for years, and yet this time, I was struck by how little distance there was between them and what X has become. It is impossible to ignore: The difference between X and a known hateful site such as Gab is people like me. The majority of users are no doubt creators, businesses, journalists, celebrities, political junkies, sports fans, and other perfectly normal people who hold their nose and cling to the site. We are the human shield of respectability that keeps Musk’s disastrous $44 billion investment from being little more than an algorithmically powered Stormfront.
The justifications—the lure of the community, the (now-limited) ability to bear witness to news in real time, and the reach of one’s audience of followers—feel particularly weak today. X’s cultural impact is still real, but its promotional value is negligible. (A recent post linking to a story of mine generated 289,000 impressions and 12,900 interactions, but only 948 link clicks—a click rate of roughly 0.33 percent.) NPR, which left the platform in April 2023, reported almost negligible declines in traffic referrals after abandoning the site.
Continuing to post on X has been indefensible for some time. But now, more than ever, there is no good justification for adding one’s name to X’s list of active users. To leave the platform, some have argued, is to cede an important ideological battleground to the right. I’ve been sympathetic to this line of thinking, but the battle, on this particular platform, is lost. As long as Musk owns the site, its architecture will favor his political allies. If you see posting to X as a fight, then know it is not a fair one. For example: In October, Musk shared a fake screenshot of an Atlantic article, manipulated to show a fake headline—his post, which he never deleted, garnered more than 18 million views. The Atlantic’s X post debunking Musk’s claim received just 28,000 views. Musk is unfathomably rich. He’s used that money to purchase a platform, take it private, and effectively turn it into a megaphone for the world’s loudest racists. Now he’s attempting to use it to elect a corrupt, election-denying felon to the presidency.
To stay on X is not an explicit endorsement of this behavior, but it does help enable it. I’m not at all suggesting—as Musk has previously alleged—that the site be shut down or that Musk should be silenced. But there’s no need to stick around and listen. Why allow Musk to appear even slightly more credible by lending our names, our brands, and our movements to a platform that makes the world more dangerous for real people? To my dismay, I’ve hidden from these questions for too long. Now that I’ve confronted them, I have no good answers.
About the Author
Charlie Warzel is a staff writer at The Atlantic and the author of its newsletter Galaxy Brain, about technology, media, and big ideas. He can be reached via email.