Republicans urged social media platforms to cease their efforts in combatting election misinformation.
The efforts were never flawless, to be clear: entities spreading unsubstantiated claims about election manipulation continued to operate openly even after some platforms announced crackdowns.
Beginning in 2021, however, the social media industry underwent a significant transformation, backing away from many of the pledges, rules, and tools it had previously put in place to help ensure a peaceful transfer of democratic power.
The public got a glimpse of this new era over the summer, when social media platforms were flooded with unfounded claims after the assassination attempt on former President Donald Trump and the platforms offered no response.
Despite maintaining pages detailing the election safeguards they support, such as specific bans on content that discourages voting or incites violence near polling stations, the companies have grown less engaged with the issue, according to people who have worked with them in the past to combat misinformation.
According to Baybars Orsek, managing director of fact-checking organization Logically Facts, "The last few years have been challenging for the knowledge community working with platforms. The impact of layoffs, budget cuts in journalism programs, and the crackdown on trust and safety teams at X (formerly Twitter) and other major platforms have set troubling precedents as we approach the upcoming elections."
This shift occurred under the influence of a prolonged campaign of intimidation led by Republican attorneys general and legislators, which aimed to pressure social media companies to host falsehoods and hate speech and impede those working to limit the dissemination of such destabilizing content.
This initiative coincided with the rise of an influential group of conservative Silicon Valley leaders, who have grown increasingly resistant to corporate social responsibility notions. These powerful individuals have the potential to impact the products and services used by billions, and they are becoming more politically active — threatening government officials with significant campaign donations to their opponents and formulating political manifestos that serve as benchmarks for startups seeking funding.
In Elon Musk's initial appearance with Trump on the campaign trail this month, the tech tycoon urged Republicans to vote, warning of severe consequences if they failed. Musk's revamping of Twitter into X — transforming it from a real-time news powerhouse into a hub of conspiracy theories and misinformation, partly by dismantling its trust and safety teams and loosening its content policies — has been extensively documented.
The implications of this transformation weren't limited to Twitter alone. Musk has played a significant role in lowering the social and political costs associated with tech platforms abandoning their earlier commitments, said David Karpf, a professor at George Washington University's School of Media and Public Affairs.
Just as Twitter's decision to be the first to suspend Trump's account in 2021 prompted YouTube and Meta to take similar action, its decision to be the first to reinstate the account gave other platforms cover to follow suit.
The industry's reevaluation continued as YouTube and Meta relaxed their guidelines and permitted the resurfacing of false claims that the 2020 election had been stolen.
“The platforms only ever took this seriously as much as they felt like they needed to," Karpf said.
“If you want serious trust and safety from these companies," he added, "then either it needs to be demanded the way it is in the European Union, which means actual regulation, or they need to conduct a cost-benefit analysis that says, ‘This is crucial, not just for democracy but for our own financial success in the short term,’ because that’s the only thing that has worked.”
Over the past few years, this cost-benefit analysis has tended to favor dismantling the infrastructure that social media companies built to counteract Russia's meddling efforts surrounding the 2016 US election.
The withdrawal is most noticeable in waves of layoffs, starting at X but hitting ethics and trust and safety teams across Silicon Valley. Usually presented as efficiency measures, the job cuts tacitly signaled that such programs were seen as a drag on revenue rather than a necessary service.
Curtailing oversight
Tech companies have made it challenging for outsiders to observe the platforms, creating gaps where false, viral claims can flourish.
Last year, X announced it would start charging steep fees for access to its posts and other data. This change immediately raised concerns that transit agencies and the National Weather Service would have to discontinue posting real-time updates that millions depend on. Musk quickly granted exemptions to these organizations, but the paywall has affected civil society groups and researchers who rely on large volumes of posts to analyze the propagation of false claims.
Misinformation researchers criticized the "outrageously expensive" fees for accessing Twitter's firehose, but their concerns went unheeded. Prior to Musk, Twitter data was provided to researchers for free or at reduced rates. Following the change, they were requested to pay as much as $2.5 million a year for less data than was previously available — a substantial new barrier to transparency and accountability.
X has touted its crowdsourced fact-checking feature, Community Notes, as its answer to misinformation, but independent analysts have widely criticized the tool as slow and inconsistently applied.
In a similar vein, Meta shut down CrowdTangle, a monitoring platform for Facebook and Instagram that it had previously promoted to election officials in all 50 states to assist them in quickly identifying misinformation, voter manipulation, and suppression.
CrowdTangle's data showed that right-wing content thrives on Meta's platforms, contradicting conservative claims of bias against them. Although Meta said a replacement tool would be an improvement, research by the Columbia Journalism Review found that the new version lacked key features and was harder to use.
This didn't happen in isolation. It was propelled by two broader shifts. The first was a political and legal crusade by conservative politicians determined to limit tech companies' control over the speech and information flowing across their platforms.
The second was a resurgence of '90s-era techno-libertarian thinking that amplifies the industry's "move fast and break things" ethos and casts skeptics as enemies of progress who must be vanquished.
For years, Republicans have maintained that tech giants like Meta and Google, based in liberal strongholds such as California, are biased against conservative viewpoints. The companies have consistently insisted that their technology is neutral, a claim their conservative critics have turned against them to lasting effect.
Since before the 2016 election, conservatives have accused the platforms of violating that self-proclaimed neutrality. The criticism has prompted conciliatory gestures, with some companies going to great lengths to accommodate right-wing figures.
This trend has gained momentum over the years. As platforms strengthened enforcement against conspiracy theories, hate speech, and election misinformation, conservative politicians increasingly complained that right-wing views were being censored on social media. (In fact, misinformation from far-right sources draws more engagement than other types of news content, a 2021 study by New York University researchers found.)
This escalation has culminated in various Republican-led actions at the state and federal levels aiming to inhibit content moderation by private platforms.
In Texas and Florida, Republican legislators enacted laws in 2021 limiting social media companies' power to moderate their platforms. Officials from both states explicitly affirmed that these laws were intended to protect conservative voices from unfair silencing. Despite legal challenges from the tech sector, more than a dozen Republican attorneys general backed the Texas and Florida laws.
Simultaneously, Republican officials in Missouri and Louisiana, along with several private plaintiffs, sued the Biden administration over its efforts to get platforms to remove mis- and disinformation. That case, Murthy v. Missouri, also reached the Supreme Court last term.
Both initiatives sought to prevent platforms from enforcing their own terms of service, on the theory that the companies were infringing on Americans' free speech rights. But they eventually ran into a significant hurdle in the courts: The First Amendment constrains the government, not private companies.
The Supreme Court largely sidestepped both cases this year on procedural grounds, while expressing doubts about the breadth of the state laws and skepticism toward the idea that the First Amendment bars the government from warning companies about perceived threats to public health or election integrity. For now, the Texas and Florida laws remain on hold, and the Biden administration can effectively continue communicating with social media companies.
Jenin Younes, an attorney with the conservative-aligned New Civil Liberties Alliance, argued that the Supreme Court had recognized social media platforms' First Amendment rights as speakers, making it likely that the Texas and Florida laws would be struck down. Even so, she maintained that content removals by the platforms do more harm than good, given how much disagreement there is over what counts as election misinformation.
Younes advocated for more speech rather than censorship, citing X's Community Notes tool as a better approach. Still, she acknowledged that tech companies have the right to make these decisions, even if she philosophically disagreed with censorship.
Republican officials also employed congressional subpoenas and hearings to intensify the political costs associated with anti-misinformation initiatives (Democrats also applied similar pressure, but for opposing reasons: They urged platforms to apply more, not less, moderation).
Katie Harbath, a former Facebook policy director, noted that the platforms faced constant criticism in 2021 and wondered whether, for Mark Zuckerberg and other CEOs, the escalating costs of these initiatives were still worth it.
One of the social media industry's most ardent critics was House Judiciary Committee Chairman Jim Jordan, who led an effort to expose social media platforms' perceived liberal bias. Jordan summoned Big Tech companies for testimony and documents related to content moderation decisions.
Jordan defended an ideological ally in Elon Musk by attacking the Federal Trade Commission's investigation into Twitter, which had stemmed from a bombshell whistleblower disclosure by a senior security executive alleging user privacy violations. At one point, Jordan threatened to hold Google and Meta in contempt of Congress for withholding documents.
In August, Zuckerberg extended an olive branch to Jordan with a letter acknowledging that the Biden administration had pressured Meta to remove Covid-19 content and regretting not resisting more strongly. House Republicans declared victory, as they believed the letter confirmed that the White House sought to suppress Americans' speech.
Other Republican members of Congress have grilled tech executives in heated hearings, reinforcing the message that good-faith attempts to safeguard America's digital spaces would be cast as underhanded censorship.
In February 2023, House Oversight Committee Chair James Comer, a Republican from Kentucky, called in former Twitter officials to discuss their part in hindering a New York Post article regarding Hunter Biden during the heated 2020 election period.
Drawing on internal Twitter communications that Elon Musk had released to a sympathetic journalist, Comer suggested that social media companies and the US government had coordinated to suppress the Hunter Biden story.
But testimony from the former Twitter officials, along with evidence from Musk's own disclosures, court records, and even their own legal counsel, suggested that the supposed effort to suppress the New York Post story amounted to little more than internal confusion at Twitter.
Stanford Internet Observatory suspends election program
Republican figures didn't just cast doubt on the tech firms or the US government's motives. They simultaneously questioned the intentions of the misinformation research community. As with the tech CEOs, they targeted academic organizations with subpoenas and demands for information about their activities, which involved identifying foreign influence operations and investigating election rumors.
Amid the scrutiny, some research centers in the field have shut down or shifted their focus. Among them was the Stanford Internet Observatory, whose election-related work Republicans had attacked as an attempt at censorship. The organization ultimately suspended its election research program and parted ways with some of its staff. Stanford said the change in mission had nothing to do with outside pressure, but Republicans claimed credit anyway.
“After the House transitioned to Republican control in 2022, the investigations commenced,” Renée DiResta, the observatory’s research director who was among those let go, wrote in a June New York Times op-ed. “The investigations have led to threats and persistent harassment for researchers who find themselves in the media spotlight.”
Following Stanford's announcement of changes, House Judiciary Committee Republicans declared that their “robust inquiry” into the center had led to a “victory for free speech.” DiResta countered, “This is an alarming statement for government officials to make about a private research institution with First Amendment rights.”
Researchers fighting misinformation claim they are adapting to a changing social media landscape and are continuing to expose conspiracy theories and false claims.
One of the prominent conspiracy theories circulating this election cycle is the claim that noncitizens are voting, said Danielle Lee Tomson, research manager for the Center for an Informed Public at the University of Washington.
“Our ability to monitor Facebook has been limited due to CrowdTangle being shut down, so we don’t use it as much for discovery work,” Tomson added, but “we study ads, we study TikTok, we study Telegram, we study the alt-platforms. ... Changes inspire creativity, and changes also bring about new research questions.”
Despite the researchers modifying their strategies, Karpf said, the Republican pressure campaign still accomplished its primary objective, which was to create an environment in which tech firms would abandon something they perceived as a hassle from the outset.
“If Jim Jordan makes a lot of noise, then the platforms think, ‘Why should we spend all of this money just to get in trouble? Let’s stop spending money.’ And that's essentially what happened.”
In Silicon Valley
In October 2023, a few months after Jordan sent subpoenas to Meta, Google, and other tech companies, the longtime venture capitalist and Netscape co-founder Marc Andreessen published his “Techno-Optimist Manifesto.”
Written in short, forceful sentences, the piece exuded defiance and confidence. It argued that society had strayed off course and that the tech industry would steer the world toward a bright future if the obstacles and regulations in its way were removed.
“We believe that there is no insurmountable problem — whether inherent in nature or technology-driven — that cannot be overcome with more technology,” Andreessen wrote.
The impediments to that progress include a list of "enemies," as he called them: fears of existential risk (a possible reference to uncontrolled AI). Sustainability. Social responsibility. Trust and safety. Tech ethics. Risk management. Credentialed experts. Central planning.
On its own, the manifesto amounted to little more than a restatement of the libertarian ethos long influential in certain Silicon Valley circles. Its real significance lay in its timing: it arrived as public skepticism of the tech industry's promises ran higher than ever and a growing number of tech billionaires had tired of being cast as scapegoats for the nation's troubles.
Andreessen is just one of several wealthy Silicon Valley financiers who, to varying degrees, have distanced themselves from the left, if not aligned themselves with the right. The group includes known conservatives such as PayPal co-founder Peter Thiel, who helped finance the political rise of Trump's running mate, Ohio Sen. JD Vance. It includes the venture capitalist David Sacks, who helped launch Ron DeSantis's presidential campaign and later endorsed Trump. And it encompasses others The Atlantic has labeled "surrender VCs." A few have bared their souls on X, sharing narratives of political transformation to explain why they were supporting Trump or could no longer back Democrats.
Around two months after Andreessen's declaration, Ben Horowitz, the other co-founder of the renowned venture capital firm Andreessen Horowitz, published an ominous-sounding follow-up. He announced that the firm would, for the first time, contribute to political candidates, based on a single straightforward criterion.
"We are non-aligned, single-issue voters," Horowitz penned down. "If a candidate supports a hopeful technology-empowered future, we support them. If they aim to restrict crucial technologies, we oppose them." He further clarified, "Every cent we contribute will back champions of a similar vision and counteract opponents who strive to halt America's advanced technological advancements."
At first glance, the pledge might seem innocuous. But coming on the heels of Andreessen's manifesto, it could hardly be read as anything other than a veiled warning aimed at regulators.
As Alicia Wanless, Director of the Information Environment Project at the Carnegie Endowment for International Peace, observed, "Everything is interconnected. We are intertwined by the people we associate with, the beliefs we share, the groups we belong to, the venues we frequent, the technology we utilize, the content we consume. We're part of communities, and these communities intertwine. They influence each other and respond to each other."