From fabricated images of Donald Trump’s arrest to a video depicting a dystopian future under Joe Biden, the 2024 White House race faces a firehose of tech-enabled disinformation in what is widely billed as America’s first AI election.
Campaigners on both sides of the US political aisle are harnessing advanced tools powered by artificial intelligence, which many tech experts view as a double-edged sword.
AI programs can clone a political figure’s voice in an instant and create videos and text that seem so real that voters could struggle to separate fact from fiction, undermining trust in the electoral process.
At the same time, campaigns are likely to use the technology to boost operational efficiency in everything from voter database analysis to drafting fundraising emails.
A video released in June by Florida Governor Ron DeSantis’s presidential campaign purported to show former president Trump embracing Anthony Fauci, a favorite Republican punching bag throughout the coronavirus pandemic.
AFP’s fact-checkers found the video used AI-generated images.
After Biden formally announced his reelection bid, the Republican Party in April released a video it said was an “AI-generated look into the country’s possible future” if he wins.
It showed photo-realistic images of panic on Wall Street, China invading Taiwan, waves of immigrants overwhelming border agents, and a military takeover of San Francisco amid rampant crime.
Other campaign-related examples of AI imagery include fake photos of Trump being hauled away by New York police officers and a video of Biden declaring a national draft to support Ukraine’s war effort against Russia.
‘Wild West’
“Generative AI threatens to supercharge online disinformation campaigns,” the nonprofit Freedom House said in a recent report, warning that the technology was already being used to smear electoral opponents in the United States.
“Purveyors of disinformation are employing AI-generated images, audio, and text, making the truth easier to distort and harder to discern.”
More than 50 percent of Americans expect AI-enabled falsehoods will impact the outcome of the 2024 election, according to a poll published in September by the media group Axios and business intelligence firm Morning Consult.
About one-third of Americans said they will be less trusting of the results because of AI, according to the poll.
In a hyperpolarized political environment, observers warn such sentiments risk stoking public anger at the election process—akin to the January 6, 2021 assault on the US Capitol by Trump supporters over false allegations that the 2020 election was stolen from him.
“Through (AI) templates that are easy and inexpensive to use, we are going to face a Wild West of campaign claims and counter-claims, with limited ability to distinguish fake from real material and uncertainty regarding how these appeals will affect the election,” said Darrell West from the Brookings Institution.
‘Game changing’
At the same time, rapid AI advancements have also made it a “game changing” resource for understanding voters and campaign trends at a “very granular level”, said Vance Reavie, chief executive of Junction AI.
Campaign staff previously relied on expensive consultants to develop outreach plans and spent hours drafting speeches, talking points and social media posts, but AI can now do the same work in a fraction of that time, Reavie told AFP.
But underscoring the potential for abuse, when AFP directed the AI-powered chatbot ChatGPT to create a campaign newsletter in favor of Trump, feeding it false statements by the former president that US fact-checkers had debunked, it produced a slick campaign document repeating those falsehoods within seconds.
When AFP further prompted the chatbot to make the newsletter “angrier,” it regurgitated the same falsehoods in a more apocalyptic tone.
Authorities are scrambling to set up guardrails for AI, with several US states such as Minnesota passing legislation to criminalize deepfakes aimed at hurting political candidates or influencing elections.
On Monday, Biden signed an ambitious executive order to promote the “safe, secure and trustworthy” use of AI.
“Deep fakes use AI-generated audio and video to smear reputations… spread fake news, and commit fraud,” Biden said at the signing of the order.
He voiced concern that fraudsters could take a three-second recording of someone’s voice to generate an audio deepfake.
“I’ve watched one of me,” he said.
“I said, ‘When the hell did I say that?'”
© 2023 AFP