
Chaos on social media platforms after Trump shooting is a mess of their own making

The social media industry’s tepid response this week to the conspiracy theories swirling wildly on their platforms about the assassination attempt on former President Donald Trump was part of a purposeful shift away from actively policing online speech – even in the face of potentially...

Republican candidate Donald Trump is seen with blood on his face surrounded by secret service agents.


The reaction could not be more different from 2021, in the wake of the country’s last brush with political violence. That does not bode well for the coming election – or its aftermath.

Three years ago, major online platforms including Meta, Twitter and YouTube took swift action to keep the attack on America’s democracy on January 6 from spiraling online – suspending thousands of accounts that had promoted election lies, extending a pause on political advertising and removing posts that praised the US Capitol attack, among other moves.

In stark contrast, social media platforms over the past few days have been overrun by false claims about the attempt on Trump’s life, ranging from baseless left-wing speculation that the incident had been “staged” for Trump’s own political benefit to right-wing conspiracy theories falsely suggesting that “deep state” government agents or perhaps President Joe Biden himself had somehow orchestrated the attack.

The incident was not staged. The US Secret Service has described it as an assassination attempt and the Department of Homeland Security has acknowledged it as a security failure. Investigators are still searching for the shooter’s motive. And Biden condemned political violence following the attack, pledging an independent investigation into the security lapse as well.

Big Tech CEOs have universally echoed Biden’s remarks. But amid the torrent of conspiracy theorizing, the social media platforms have been largely silent about their own role in how the event has played out online, reflecting a sharp departure from their previous hands-on approach to containing the spread of falsehoods that, left unchecked, could risk fueling further conflict.

None of the country’s largest social media platforms responded to repeated questions from CNN over multiple days this week about what actions they have taken in response to misinformation and conspiracy theories circulating about the Trump rally shooting. Meta, Google, TikTok and X did not respond. Only Snapchat issued a statement saying it is designed “differently from traditional social media” in that it doesn’t offer a curated news feed “where users can broadcast false information.”

The steadfast silence from tech platforms underscores how significantly industry giants have shifted in the past three years to embrace a more hands-off approach — in their latest attempt to juggle, in real time, competing values of free expression, online safety and political neutrality. And what the public experienced on social media in the moments after the attack on Trump is a sign of what’s to come, said Imran Ahmed, CEO of the Center for Countering Digital Hate (CCDH), a social media watchdog group that advocates for tighter regulation of the platforms.

“This should be a hint,” Ahmed said, “that the months ahead are going to be just as unedifying, as stultifyingly stupid and as confusing as the last few days have been on social media.”

How did we get here?

A dizzying array of factors has combined to create today’s more toxic information environment.

Most visibly, Elon Musk’s purchase of Twitter, now known as X, has transformed what was once the premier social media destination for breaking news into a messier, less trustworthy platform.

Musk has made multiple interlocking decisions around account verification, payments to creators, and whom to allow on the platform that have degraded users’ ability to trust each other and what they see there, misinformation researchers have long said. And he has erected barriers to independent accountability by forcing researchers to pay astronomical fees for access to the platform’s data.

Users on X are now being financially rewarded for posting the most inflammatory, engaging content regardless of its accuracy. Although Musk has touted X’s crowd-based fact-checking feature, Community Notes, as a bulwark against misinformation, it has been widely criticized as slow and ineffective. On Tuesday, CCDH published new research finding that of 100 top-performing conspiracy theory posts on X about Trump’s shooting, only five contained a Community Note refuting a false claim. The posts collectively garnered more than 215 million views on X, according to CCDH.

Soon after Musk bought the platform, he laid off roughly 80% of Twitter’s pre-acquisition headcount. The deep cuts affected the company’s trust and safety team responsible for safeguarding the platform and, in a separate move, Musk eliminated Twitter’s trust and safety council of outside experts.

But X wasn’t alone in reducing investments in trust and safety. Industrywide, companies cited tough macroeconomic conditions to justify sweeping layoffs that in some cases have hit trust and safety teams. In 2019, Snapchat had 763 employees working on trust and safety, a number that grew to more than 3,000 by 2021, the company told Congress this year. But by 2023, that figure had fallen 27% to 2,226. Meta and TikTok told lawmakers they each employ roughly 40,000 safety employees but did not disclose how that number has changed over time.

Social media companies retreated in other ways, too, from YouTube deciding to once again allow lies about the 2020 election on its platform to Meta deciding to stop amplifying news, politics and social issues in its curated feeds altogether.

“Meta decided that it can’t profitably deliver civic content,” said Laura Edelson, an assistant professor of computer science at Northeastern University and the co-director of Cybersecurity for Democracy, a research group focused on digital misinformation. “It can’t make a safe social media product that does politics and civic stuff, and so it just got out of that business.”

Baybars Orsek, managing director of the fact-checking organization Logically Facts, said these and other changes by social media platforms have made working with them in the last few years more challenging.

“It is concerning to see that some platforms have chosen to distance themselves from political discourse rather than enforce transparent and scalable policies to protect free speech while maintaining a safe information environment,” Orsek told CNN.

It isn’t just the companies’ own business decisions driving the shift. Since the 2020 election, researchers who study digital platforms have increasingly reported being harassed and intimidated, in some cases by US lawmakers who have baselessly alleged they are part of a government-led pressure campaign to silence right-wing speech on social media.

For years, some conservatives have claimed that the US government, in regular meetings and emails with social media companies, has pressured platforms to remove Covid-19 and election misinformation in violation of Americans’ First Amendment rights. Those claims reached a climax this year at the Supreme Court, where, in a closely watched ruling, a 6-3 majority declined to say that government efforts to persuade platforms to remove posts are unconstitutional.

The decision effectively means the US government can continue to flag misinformation threats to social media companies in the runup to the 2024 election. But it is still up to the companies to decide what to do with that information. And the Trump rally shooting now raises fresh questions about their willingness to act on it, especially against the backdrop of a sustained effort by right-wing social media critics to discredit trust and safety work and misinformation research.

How this all played out on Saturday

Together, these factors created ripe conditions for a misinformation maelstrom surrounding the Trump assassination attempt.

The incident touched off an intense demand for information. Mainstream media outlets, taking care to report only credible answers, were initially slower to report what was happening than the breakneck pace of social media speculation. That led to what researchers call an information void: a gap between what is known and what audiences want to know.

Meta’s decision not to amplify news and politics in curated feeds – part of a broader industry retrenchment on trust and safety driven by internal business pressures and external cultural ones – may have helped prevent some people from going down algorithmically recommended rabbit holes, at least on company-owned platforms.

But in the minutes after the shooting, some Threads users complained that mentions of the incident could not immediately be found on the site, in direct contrast to X, where – thanks in part to Musk’s prior decisions – conspiratorial thinking was already flowing fast and free.

Meta’s pullback from promoting news content likely contributed to the information void, Edelson said, as it failed to sufficiently elevate authoritative reporting and likely drove users into the arms of conspiracy theorists on other platforms.

The decision highlights the difficult tradeoffs of managing a fast-moving information ecosystem, said Orsek.

“The lack of stringent content moderation on X has led to a proliferation of misinformation, whereas Meta’s more conservative strategy has resulted in less immediate information [on its platforms], both verified and unverified,” Orsek said.

(The Verge reported Saturday that Meta did appear to be surfacing news outlets’ reporting about the shooting in Facebook search results, but that it inconsistently showed conspiracy-related content at the top of its trending topic on Threads for the incident.)

Meanwhile, Edelson added, TikTok also emerged as a key spreader of misinformation due to the way its recommendation algorithm surfaced videos it believed would resonate with users, rather than primarily authoritative information.

For other social media users, the episode highlighted how X remains the platform of choice for following big breaking news events online, even if the quality of the information is degraded. And the near-instant calcification of pro-Trump and anti-Trump false narratives surrounding the shooting shows how Americans are primed to look at the same event through completely different lenses, irrespective of the underlying facts.

The eagerness with which people on the right and, increasingly, on the left have embraced misleading claims linked to the shooting is a worrying sign for civil discourse and democracy, some researchers say.

That includes Alicia Wanless, director of the Information Environment Project at the Carnegie Endowment for International Peace, who has been researching how people create their own realities from the information they consume and the technology they use.

“I’ve been finding a pattern that after new technologies change how humans can produce and share information,” Wanless said, clashes follow between different realities, with supporters of various sides and ideologies doing ever more to convince audiences that their narrative is the truth.

Historically, if the tension between realities can’t be resolved, Wanless warned, it often leads to violence.

“All my case studies led up to conflict,” she said.

