AI presents a political crisis for 2024, with the threat of misleading voters

Computer engineers and political scientists interested in the technology have warned for years that cheap, powerful artificial intelligence tools would soon allow anyone to create fake images, video and audio realistic enough to fool voters and perhaps sway an election.

The synthetic images that emerged were often crude, unconvincing and expensive to create, especially when other types of misinformation were so cheap and easy to spread on social media. The threat posed by AI and so-called deepfakes always seemed a year or two away.

Not anymore.

Sophisticated generative AI tools can now create cloned human voices and hyper-realistic images, video and audio in seconds at minimal cost. When tied to powerful social media algorithms, this fake and digitally created content can spread far and fast and target highly specific audiences, potentially taking campaign dirty tricks to a new low.

The implications for 2024 campaigns and elections are as vast as they are troubling: Generative AI can not only rapidly produce targeted campaign emails, texts or videos, it can also be used to mislead voters, impersonate candidates and undermine elections at a scale and speed not yet seen.

“We’re not prepared for this,” warned A.J. Nash, vice president of intelligence at cybersecurity firm ZeroFox. “To me, the big leap forward is the audio and video capabilities that have emerged. When you can do that on a large scale, and distribute it on social platforms, it’s going to have a major impact.”

AI experts can quickly cite several alarming scenarios in which generative AI is used to create synthetic media intended to mislead voters, discredit a candidate, or even incite violence.

Here are a few: automated robocall messages, in a candidate’s voice, instructing voters to cast ballots on the wrong date; audio recordings of a candidate supposedly confessing to a crime or expressing racist views; video footage showing someone giving a speech or interview they never gave; fake images designed to look like local news reports, falsely claiming a candidate has dropped out of the race.

“What if Elon Musk personally calls you and tells you to vote for a certain candidate?” said Oren Etzioni, the founding CEO of the Allen Institute for AI, who stepped down last year to start the nonprofit AI2. “A lot of people would listen. But it’s not him.”

Former President Donald Trump, who is running in 2024, has shared AI-generated content with his followers on social media. A doctored video of CNN host Anderson Cooper that Trump shared on his Truth Social platform on Friday, which distorted Cooper’s reaction to a CNN town hall with Trump last week, was created using an AI voice-cloning tool.

A dystopian campaign ad released last month by the Republican National Committee offers another glimpse of this digitally manipulated future. The online ad, which came after President Joe Biden announced his re-election campaign, begins with a strange, slightly distorted image of Biden and the text “What if the weakest president we’ve ever had was re-elected?”

What follows is a series of AI-generated images: Taiwan under attack; boarded-up storefronts in the United States as the economy crumbles; soldiers and armored military vehicles patrolling local streets as waves of tattooed criminals and immigrants spread fear.

The description of the RNC ad said, “An AI-generated look at the country’s possible future if Joe Biden is re-elected in 2024.”

The RNC acknowledged its use of AI, but others, including nefarious political campaigns and foreign adversaries, will not, said Petko Stoyanov, global chief technology officer at Forcepoint, a cybersecurity company based in Austin, Texas. Stoyanov predicted that groups seeking to meddle in American democracy will employ AI and synthetic media as a way to erode trust.

“What happens if an international entity – a cyber criminal or a nation state – impersonates someone? What are the implications? Do we have any recourse?” Stoyanov said. “We’re going to see a lot more misinformation from international sources.”

AI-generated political disinformation has already gone viral online ahead of the 2024 election, from a doctored video of Biden appearing to give a speech attacking transgender people to AI-generated images of children supposedly learning satanism in libraries.

AI images appearing to show Trump’s mug shot also fooled some social media users, even though the former president didn’t take one when he was booked in Manhattan criminal court for falsifying business records. Other AI-generated images showed Trump resisting arrest, though their creator was quick to acknowledge their origin.

Legislation that would require candidates to label campaign ads created with AI has been introduced in the House by Representative Yvette Clarke, D-N.Y., who has also sponsored legislation that would require anyone creating synthetic images to add a watermark indicating that fact.

Some states have offered their own proposals to address concerns regarding deepfakes.

Clarke said her biggest fear is that generative AI could be used ahead of the 2024 election to create a video or audio clip that incites violence and turns Americans against each other.

“It’s important that we keep up with the technology,” Clarke told The Associated Press. “We’ve got to set up some guardrails. People can be deceived, and it only takes a split second. People are busy with their lives and they don’t have the time to check every piece of information. AI weaponized in a political season could be extremely disruptive.”

Earlier this month, a trade association of political consultants in Washington condemned the use of deepfakes in political advertising, calling them “deceptions” with “no place in legitimate, ethical campaigns.”

Other forms of artificial intelligence to automate tasks such as targeting voters on social media or tracking donors have been a feature of political campaigns for years. Campaign strategists and tech entrepreneurs are hopeful that the most recent innovations will offer something positive in 2024 as well.

Mike Nellis, CEO of the progressive digital agency Authentic, said he uses ChatGPT “every single day” and encourages his employees to use it as well, as long as any content created with the tool is later reviewed by human eyes.

Nellis’ latest project, in partnership with Higher Ground Labs, is an AI tool called Quiller. It will compose, send and evaluate the effectiveness of fundraising emails—all typically difficult tasks on campaigns.

“The idea is every Democratic strategist, every Democratic candidate will have a co-pilot in their pocket,” he said.

(This story has not been edited by News18 staff and is published from a syndicated news agency feed)