This article originally appeared in the Guardian.
The presidential election is just three months away. What if the billionaire owner of X disputes the outcome? What if he decides that democracy no longer matters?

Just over three years ago, an insurrectionist mob connected online, gathered in Washington, stormed the Capitol, and threatened the vice-president with a noose. But those were the “good old days.” We’re now in a different reality—one where billionaires are no longer restrained.
Back in 2020, tech platforms, still reeling from public backlash, at least pretended to care. Twitter had more than 4,000 employees in “trust and safety,” focused on removing dangerous content and monitoring foreign influence. Facebook, though it resisted the pressure for a time, eventually banned political ads that aimed to undermine voting, while researchers worked to identify and flag harmful disinformation.
Even after vast numbers of Americans came to believe the 2020 election was stolen, and a violent mob nearly staged a coup, things have only worsened in the four years since.
While Kamala Harris is enjoying her “hot girl summer” and liberal America breathes a sigh of relief, the U.S. should shift its gaze to Britain. There, rioters fill the streets, cars burn, and rampant racism spreads unchecked across multiple platforms. Lies, fueled by algorithms, circulate long before the truth emerges, only to be sanitized by politicians and media opportunists.
Just as Brexit foreshadowed Trump’s rise in 2016, Britain is once again a warning sign. The same patterns, tactics, and figures are appearing on both sides of the Atlantic—but now with even more dangerous technological weaknesses ready to be exploited.
For now, Britain’s streets are calm, and the violence suppressed. But in the U.S., where militias roam and open-carry laws are commonplace, the threat is much greater. No matter how well Harris performs in the polls, the U.S. is on the brink of an extraordinarily dangerous moment—no matter who wins the election.
As Trump and Bolsonaro have shown, it’s no longer just about winning elections or a single day. The period between election results and inauguration has become a volatile, anything-can-happen moment—not just for the U.S., but for the world.
In Britain, we’ve already seen the warning signs. This summer, we witnessed something unprecedented: a billionaire tech owner publicly challenging an elected leader, using his platform to undermine authority and incite violence. The 2024 summer riots in Britain were Elon Musk’s test run.
If Musk decides to “predict” a civil war in the U.S., what would that look like? He has already gotten away with it once. The sheer supranational power he wields, and its potential consequences, should be terrifying. What happens if Musk contests an election result or deems democracy irrelevant? This isn’t science fiction; it’s a scenario just months away.
None of this is occurring in isolation. After 2016, there was a brief effort to understand how tech platforms had been exploited to spread lies and disinformation. But that moment has passed. A concerted, years-long effort by Republican operatives to politicize “misinformation” has succeeded. The term barely registers in U.S. tech circles today. Those who continue to raise the issue—academics, researchers, trust and safety teams—are labeled part of the “censorship industrial complex.”
A U.S. congressional committee led by Republican Jim Jordan, convinced that big tech silenced conservative voices, aggressively pursued emails from dozens of academics, chilling the entire field of research. Entire university departments, including the Stanford Internet Observatory’s election integrity unit, which played a key role in 2020, have collapsed.
Even the FBI was blocked from communicating with tech companies about an anticipated surge of foreign disinformation after a lawsuit from two attorneys general made its way to the Supreme Court. The New York Times reported that only recently has the FBI quietly resumed such efforts.
All of this has created the ideal conditions for tech platforms to quietly step back. Twitter—now X—has let go of at least half of its trust and safety team, as have nearly all major tech companies. Thousands of employees once tasked with rooting out misinformation have been laid off by Meta, TikTok, Snap, and Discord.
Just last week, Facebook shut down one of its last transparency tools, CrowdTangle—a critical resource for understanding online activity during the tumultuous days surrounding the 2021 inauguration. Despite the protests of researchers and academics, Facebook axed it simply because it could.
Back in 2020, these efforts felt meager and inadequate against the growing threat. Now, they’ve disappeared just as the tools that spread misinformation are growing even more dangerous. OpenAI recently boasted about identifying an Iranian group that used ChatGPT for a U.S. election influence campaign, which might sound impressive if its trust and safety team hadn’t been disbanded in May after its co-founders resigned.
Musk, now the self-styled “Lord of Misrule,” has torn away the pretense entirely. He’s shown that there’s no need to even act like you care. In his world, trust means mistrust, and safety means censorship. His goal is chaos—and it’s on the way.
(This article was amended on 22 August 2024 to correct a reference to the storming of the U.S. Capitol, which occurred just over three years ago, not four. The inauguration referenced should have been the 2021 inauguration, not 2020.)
— Carole Cadwalladr, reporter and feature writer for the Observer

