"Terrifying": Expert outlines "endless" ways AI could "further fracture" elections and our democracy

“AI-generated content is now part of every major election, and particularly damaging in close elections”

By Areeba Shah

Staff Writer

Published January 29, 2024 5:45AM (EST)

Donald Trump | Fake News (Photo illustration by Salon/Getty Images)

Misinformation and disinformation have always posed a threat to elections, fostering distrust in the voting process and endangering election workers. But this year's elections are poised to be far more challenging, with AI-generated content introducing a complex new dimension. 

Voters have already witnessed the impact AI can have on elections this year when an AI-generated audio message mimicking President Joe Biden was pushed out to New Hampshire residents, discouraging them from voting in the state's presidential primary last week.

“Republicans have been trying to push nonpartisan and Democratic voters to participate in their primary. What a bunch of malarkey,” a digitally altered voice said. “We know the value of voting Democratic when our votes count. It’s important that you save your vote for the November election. We’ll need your help in electing Democrats up and down the ticket. Voting this Tuesday only enables the Republicans in their quest to elect Donald Trump again.”

As public mistrust in election integrity grows, AI tools’ ability to target voters creates more opportunities to sow doubt in the election process. With the assistance of AI, producing convincing and tailored narratives amplifies the speed and scale at which false information can spread, raising concerns among misinformation experts about the unprecedented challenges election officials face. Fake videos, audio and images have become easy to create using generative AI, yet they can be very difficult to detect.

In other election campaigns, we have seen “faked candidate content” spreading disinformation, Oren Etzioni, professor of Computer Science at the University of Washington and Founding CEO of the Allen Institute for AI, told Salon. Etzioni pointed to Slovakia’s recent election in which AI-generated deepfakes circulated on social media days before the hotly contested parliamentary election. In one, the far-right Republika party falsely depicted Progressive Slovakia leader Michal Šimečka announcing a plan to raise beer prices if elected, Wired reported. Another more troubling deepfake audio had Šimečka supposedly discussing election rigging, including buying votes from the country's marginalized Roma minority.

“AI-generated content is now part of every major election, and particularly damaging in close elections,” Etzioni said. 

Often, the aim is to discourage people from going to the polls by introducing false information or sowing distrust, he added.

Misinformation and disinformation are not a new problem by any means, but something we saw “in spades” in 2016 and 2020, Ben Winters, Electronic Privacy Information Center’s senior counsel who leads EPIC’s AI and Human Rights Project, told Salon. But AI “supercharges and democratizes” the tools that can be used to cause this “devastating harm,” Winters said. He expects that widely available generative AI tools will play a “massive role” in terms of both foreign and domestic interference. This includes text, image and video creation. 

“I think it will be used by campaigns in both legitimate and misleading ways, but the bulk of the impact will be by outside users,” Winters said. “I think this is in addition to everything that was used before, and a lot of the technological tools that will have the most impact are data brokers and means of distribution like robotexts, robocalls or social media.”

To the untrained eye, AI-manipulated videos and audio can be hard to distinguish from real ones, presenting a unique challenge in the 2024 election. AI tools can create deepfakes, making it easier to disseminate disinformation.

Researchers from the University of Amsterdam conducted an experiment to assess the impact of AI-generated disinformation on people’s political preferences by creating a deepfake video of a politician offending his religious voter base. The study revealed that religious Christian voters who were exposed to the deepfake video had more unfavorable attitudes toward the politician compared to those in the control group.

In the U.S., some campaigns have already experimented with the technology. Soon after President Biden announced his re-election bid, the Republican National Committee responded by creating an AI-generated video illustrating a dystopian version of the future if he wins a second term. The video includes AI-generated images depicting Biden and Vice President Kamala Harris celebrating at an Election Day party followed by imagined reports of international and domestic crises.

The presidential campaign for Florida Gov. Ron DeSantis employed similar tactics, releasing a video on social media that used AI-generated images to show former President Donald Trump hugging Dr. Anthony Fauci. The Democratic Party also tested the use of AI, writing fundraising messages and quickly finding that these drafts were often better at encouraging engagement and donations than copy written entirely by humans, The New York Times reported.

The use of AI technology has “endless” potential implications for voter behavior and election outcomes, Winters pointed out, adding that “it’s really quite terrifying.”

AI-generated content provides an opportunity for any political actor to discredit opponents or fabricate political scandals to advance their agenda. This can ultimately lead to voters making misinformed decisions based on false information.

Depending on the content, people can “feel scared” to vote due to safety concerns, individuals could be misinformed about when and where to vote, and in some cases, people might not vote for a specific candidate “out of fear or blatantly false information,” Winters explained. There is also a “risk” of bad actors seeking donations in ways that financially harm voters. 

“I think AI generated election-related information will really further fracture our already shaky information ecosystem,” Winters said.

Etzioni suggests we need improved technology for detecting deepfakes and better regulation to safeguard elections from the influence of AI-generated misinformation, pointing to a new law in Minnesota that prohibits the misuse of manipulated video, images and audio intended to influence elections.

The regulations took effect last summer, making Minnesota the first state to address fears about the growing threats artificial intelligence poses to elections, Minnesota Secretary of State Steve Simon told CBS News.


The law prohibits the use of AI-generated content if it's created without the consent of the person, specifically when created with the intent of harming a candidate or influencing an election within 90 days of Election Day, the outlet reported. 

While the use of AI tools has introduced more opportunities to create and spread misinformation, propaganda and conspiracy theories during election periods, what’s more concerning is that there are only “few efforts” available to “track it or quantify it,” Winters pointed out. 

“One of the prominent ways people measure this is how many people believe a certain lie, but I think in actuality the effect is a lot more subtle and therefore harder to track despite being equally damaging,” Winters said. 

There needs to be a private right of action to enforce against “unfair and deceptive practices” by bad actors who create and disseminate mis- and disinformation, he added. 

But in terms of cultural policy decisions, there needs to be a normalization of small media literacy practices, Winters said. This includes the “interstitials” shown when sharing election-related information on social media, only clicking on links that are actually from the source and relying on a “trustworthy .gov” site for election-related information. 

At the same time, it is crucial for federal, state and local election officials to “purposefully” share accurate information from “clearly authoritative sources to crowd out the false ones,” Winters explained. It’s also on regulators like the Federal Election Commission, the Federal Trade Commission, the Consumer Financial Protection Bureau and the Department of Justice among others to affirmatively make people aware they can report this type of information.


Areeba Shah is a staff writer at Salon covering news and politics. Previously, she was a research associate at Citizens for Responsibility and Ethics in Washington and a reporting fellow for the Pulitzer Center, where she covered how COVID-19 impacted migrant farmworkers in the Midwest.

