US intelligence analysts have concluded that AI is enhancing, but not revolutionizing, foreign attempts to manipulate the 2024 elections.
The US intelligence community views AI as a malign accelerant that intensifies existing threats rather than a revolutionary tool for influence, according to a top official from the Office of the Director of National Intelligence (ODNI). That assessment stands in contrast to some media and industry exaggeration of AI-related threats. Even so, AI remains a significant concern for US intelligence agencies tracking potential dangers to the presidential election.
The threat to US elections from foreign AI-generated content depends on foreign actors overcoming built-in restrictions in AI tools, building their own advanced AI models, or strategically disseminating AI-generated content, the official explained. Foreign actors lag behind in all three areas, the official said.
Foreign actors are using AI to break down language barriers and spread disinformation among US voters, according to US officials. For instance, Iran has used AI to create Spanish-language content about immigration, a highly contentious US political issue, the ODNI official disclosed. Tehran-backed operatives have also deployed AI to target voters across the political spectrum on divisive issues such as the Israel-Gaza conflict, the official said. US officials suspect Tehran's intent is to undermine former President Donald Trump's campaign.
Russia has generated more AI content related to the US election than any other foreign power, according to the ODNI official. The AI-generated content – videos, photos, text, and audio – has been consistent with Russia's broader efforts to boost Trump's candidacy and denigrate Vice President Kamala Harris' campaign, the official said.
China, by contrast, is using AI to amplify divisive US political issues but not to influence specific US election outcomes, according to the new US intelligence assessment.
Foreign entities have also stuck to traditional influence tactics this election cycle, such as staging videos instead of creating them with AI.
American intelligence agencies believe Russian operatives staged a fake video that spread on a platform earlier this month, claiming incorrectly that Harris was responsible for paralyzing a young girl in a 2011 hit-and-run accident. The Russians disseminated the story through a website posing as a local San Francisco media outlet, according to Microsoft researchers.
Another Russian-produced video, which amassed at least 1.5 million views on the same platform, claimed to show Harris supporters assaulting a supporter of Donald Trump at a rally, as per Microsoft.
US intelligence agencies warned in July that Russia planned to covertly use social media in swing states to sway public opinion and erode support for Ukraine.
"Russia is a far more sophisticated player in the influencing sphere in general, and they have a better grasp of how US elections operate and where to target and what states to target," the ODNI official remarked.
AI capabilities have been a potential tool for foreign powers in past general US elections as well. Operatives affiliated with the Chinese and Iranian governments created fake, AI-generated content to influence US voters toward the end of the 2020 election campaign but ultimately decided against disseminating the content, according to CNN. Some US officials who analyzed the intelligence at the time expressed skepticism, viewing it as evidence of China and Iran's inability to deploy deepfakes effectively enough to significantly impact the 2020 presidential election.
While AI has yet to transform foreign influence operations, US intelligence agencies are watching closely for how these capabilities might be deployed in future elections.