One of the major storylines leading up to the 2024 election was the potential for artificial intelligence (AI)-generated deepfakes and misinformation to deceive the public and disrupt the election process. These concerns drove lawmakers in 20 states to pass new restrictions on the use of AI in deceptive election communications, and there were similar policy proposals introduced at the federal level. Now that the election has passed, it’s clear that AI was used to create and distribute deepfakes and other false information about candidates and the election process. However, these examples failed to generate any widespread disruption to the election for three main reasons.
First, the public was inundated for months with information about the risks of being deceived by AI-generated deepfakes. This included tens of thousands of news articles, public service announcements from law enforcement agencies and A-list celebrities, and efforts by state and local election officials to establish themselves as trusted sources of information about the election process. At the same time, technology companies agreed in February 2024 to take additional steps to protect the election from harmful AI impacts, including through increased public awareness. Combined, these actions largely eliminated the element of surprise that could allow a well-timed deepfake to cause meaningful disruption.
Additionally, AI deepfake technology is still a work in progress. Although the technology has accelerated rapidly, it remains detectable—particularly when attempting to depict false information about high-profile individuals and organizations. For example, it didn’t take long for deepfakes involving Vice President Kamala Harris, Taylor Swift or the Federal Bureau of Investigation to be detected and rebutted. These technologies will continue to improve over time, but they did not succeed in delivering any large-scale deception during this election cycle.
Finally, the conclusive outcome of the presidential race mitigated the significant risk that AI could be used to sow doubt in the post-election process of vote tabulation and certification. Donald J. Trump’s clear victory on election night, paired with Harris’ prompt concession the next day, quickly shut the door on further opportunities and incentives for continued public deception using AI deepfakes and misinformation.
Overall, AI’s impact on the 2024 election was minimal due to widespread public awareness, effective detection, and Trump’s relatively wide margin of victory. With the election in the rearview mirror, policymakers should now take time to re-evaluate the AI restrictions approved by states over the past year to determine their necessity and effectiveness. AI deepfakes are here to stay, so understanding the efficacy of existing restrictions will be important for lawmakers determining how and whether to regulate the use of AI in election communications moving forward.
https://www.rstreet.org/commentary/three-reasons-ai-deepfakes-failed-to-disrupt-the-2024-election/
Deepfakes didn’t disrupt the election, but they’re changing our relationship with reality
Political deepfakes are becoming increasingly prevalent as the presidential election approaches, and they are deceiving more people, experts warn.
"This year, deepfakes are especially concerning, not only because they’re on the rise, but also because people are increasingly unsure of what’s real and what’s fake online," Karnik told FOX Business.
Karnik said more than 8 in 10 Americans encounter content each month that they cannot immediately verify as real or AI-generated.
He noted that this trend is pronounced in battleground states like Michigan, where nearly 71% of people have been exposed to political deepfakes, according to the data. In Pennsylvania, 61% of respondents reported encountering political deepfakes.
Meanwhile, more than half of respondents reported encountering political deepfakes in Nevada, North Carolina and Wisconsin.
In other battleground states, including Arizona and Georgia, nearly half of respondents reported encountering political deepfakes, according to the data.
"In such a polarized election cycle, disinformation can easily exploit existing biases and make their impact even more significant," Karnik said.
Karnik contended that when people begin to doubt the truth of what they are seeing and hearing, "a constant sense of uncertainty takes hold, which makes it easier for false narratives to shape opinions—especially when the content aligns with pre-existing beliefs."
McAfee launched a 2024 Election AI Toolkit to help voters learn the basics of how to spot a deepfake.
https://www.fox32chicago.com/news/nearly-50-voters-say-deepfakes-had-some-influence-election-decision-survey
If you want to agonize over a 2024 election that will be upended by a viral falsehood or deepfake, stop worrying about Kamala Harris and Donald Trump and turn your eyes to the local candidates whose names you might not even know yet.
Fake images of Donald Trump posing with Black voters have caused fury online.
Images generated using artificial intelligence (AI) of Trump posing with Black people were created by supporters of the former president and shared online. The images are identifiable as artificially created because of discrepancies such as missing or incorrectly generated fingers and uncharacteristically smooth skin that is common in false images.
https://www.newsweek.com/donald-trump-deepfakes-black-voters-1875610
No political deepfake has alarmed the world’s disinformation experts more than the doctored audio message of U.S. President Joe Biden that began circulating over the weekend.
In the phone message, a voice edited to sound like Biden urged voters in New Hampshire not to cast their ballots in Tuesday’s Democratic primary. “Save your vote for the November election,” the phone message went. It even made use of one of Biden’s signature phrases: “What a bunch of malarkey.” In reality, the president isn’t on the ballot in the New Hampshire race — and voting in the primary doesn’t preclude people from participating in November's election.
Citizens in New Hampshire received an unusual political request over the weekend of January 20-21. Robocalls featuring what sounded to many like the voice of U.S. President Joe Biden told them not to vote in the January 23 primary.
How the FBI, NSA are preparing for deepfakes and misinformation issue ahead of 2024 elections
- More than half of the world’s population will cast votes this year, making election security a global risk.
- The role of A.I. in spreading misinformation that influences voters, including use of deepfakes, is a central concern.
- U.S. Cyber Command Commander, NSA Director and Central Security Service Chief General Paul Nakasone and FBI Director Christopher Wray spoke at Tuesday’s CNBC CEO Council Virtual Roundtable about the government’s approach to the issue.
2024 could be the ‘deepfake’ election. Few states are acting.
Analysis by Olivier Knox
with research by Caroline Anders
Dec 2023
The big idea
President Barack Obama warned us in 2018.
Or rather, a digitally manipulated video of Obama, voiced by actor-director Jordan Peele, warned us: “We’re entering an era in which our enemies can make it look like anyone is saying anything at any point in time, even if they would never say those things.”
For millions of Americans, it was their first exposure to deepfakes — a word blending “deep learning” and “fake.” These are synthetic video, photo, or audio content, abetted by artificial intelligence, that convincingly imitate the way people look and sound.
As the 2024 election campaign heats up, there’s virtually no doubt that political actors of all stripes — including some deliberately trying to mislead voters — will turn to AI and deepfakes as the latest weapon in the political communications arsenal.
They will do so in an environment in which just a handful of states have passed laws designed to limit the forgeries’ influence, while Congress has introduced a few pieces of legislation that have gone nowhere, at least to date.
Michigan makes a move
Last week, Michigan Gov. Gretchen Whitmer (D) signed a package of elections-related legislation, including measures regulating the use of AI and deepfakes in elections. (The Daily 202 had flagged this weeks ago.)
Deepfakes come with a higher cost. Within 90 days of an election, the legislation forbids knowingly distributing materially deceptive, AI-generated media intended to harm a candidate’s reputation or electoral prospects by fooling voters into thinking the candidate said or did what the deepfake purports to show.
A first offense carries a fine of up to $500 and up to 90 days in prison. Another violation within five years would be a felony carrying up to five years in prison and a fine of up to $1,000.
A small group of states, and where is Congress?
With Whitmer’s signature, Michigan joined a small group of states that have regulated the use of AI and deepfakes in political communications. Texas led the way in 2019 and has since been followed by California, Minnesota and Washington, per the nonprofit Public Citizen, which has been tracking such developments.
Legislation has been introduced in Illinois, New Jersey, New York and Wisconsin.
That’s not a big list — and some big battlegrounds, like Pennsylvania, aren’t on it. But that’s still more action than has come out of Congress, where a batch of bills related to AI in elections has been introduced but nothing has been passed.
Senate Intelligence Committee Chairman Mark R. Warner (D-Va.) “is currently drafting two pieces of legislation that deal directly with the issue of deepfakes by raising penalties for the use of deep fakes to engage in voter intimidation or manipulate markets,” spokeswoman Rachel Cohen told The Daily 202.
Back in 2018 and 2019, Warner and Sen. Marco Rubio (Fla.), who is currently the top Republican on his committee, had warned about the dangers of deepfakes.
Rep. Yvette D. Clarke (D-N.Y.) has a bill, the Deepfakes Accountability Act, which would require anyone who produces “an advanced technological false personation record” for dissemination over the internet to disclose that the content has been altered or AI-generated.
And a bipartisan group of senators led by Amy Klobuchar (D-Minn.) has backed the Protect Elections from Deceptive AI Act, which would ban the use of “materially deceptive” AI-made video, audio or images related to federal candidates in political ads.
Deepfakes get scary good
If the Obama video seems a little clunky, consider that it’s five years old, made before the latest AI tools, like ChatGPT, were household names and available to pretty much anyone with an internet connection.
Back in March, my colleagues Isaac Stanley-Becker and Naomi Nix chronicled how AI-generated images of former president Donald Trump being arrested rocketed around the internet, getting millions of eyeballs.
In April, Isaac and John Wagner reported on the Republican National Committee releasing a 30-second ad built entirely with AI imagery of a dystopian future in which President Biden has been reelected and America is falling apart. (The RNC included a disclosure about AI use.)
Whether artificial intelligence or natural gullibility is the bigger problem in elections remains to be seen.
But Biden’s team is taking no chances: It recently announced a special task force with the mission of responding to deepfakes.
President Joe Biden’s 2024 campaign has assembled a special task force to ready its responses to misleading AI-generated images and videos, drafting court filings and preparing novel legal theories it could deploy to counter potential disinformation efforts that technology experts have warned could disrupt the vote.
Meta, parent company of Instagram and Facebook, will require political advertisers around the world to disclose any use of artificial intelligence in their ads, starting next year, the company said Wednesday, as part of a broader move to limit so-called “deepfakes” and other digitally altered misleading content.
The rule is set to take effect next year, the company added, ahead of the 2024 US election and other future elections worldwide.
https://edition.cnn.com/2023/11/08/tech/meta-political-ads-ai-deepfakes/index.html
Microsoft Is Offering to Help US Politicians Crack Down on Deepfakes
- Tool offering comes as concerns grow over fake photos, videos
- Microsoft says Russia is ‘huge’ cyber, disinformation threat
- https://www.bloomberg.com/news/articles/2023-11-08/microsoft-to-help-crack-down-on-ai-deepfakes-in-us-presidential-elections?embedded-checkout=true
- https://www.theverge.com/2023/11/8/23951955/microsoft-elections-generative-ai-content-watermarks
2024 elections expected to lead to more AI-generated campaign ads, deepfakes
Artificial intelligence and deepfake technology are threatening to further erode public trust in the electoral process, The Epoch Times reported October 27.
The Times article defines deepfakes as “highly convincing and deceptive digital media—typically videos, audio recordings, or images—increasingly generated using artificial intelligence, and often for misleading or fraudulent purposes.”
The 2024 US presidential race is officially underway. Candidates on both sides are gearing up for battle, arming themselves with campaign ads targeted at their competitors and press tours designed to sway voters in their direction.
Google to require politicians to disclose use of AI in election ads
AI-generated images and audio are already popping up in election ads around the world
Deepfake Political Ads Are ‘Wild West’ for Campaign Lawyers
Campaigns fear AI-powered deepfakes will interfere with elections
Legal remedies are limited and campaigns lack funds to fight
AI will change American elections, but not in the obvious way
How polarisation inoculates Americans against misinformation
Artificial intelligence (AI) generated deepfakes are likely to have a "massive" impact on voters in future elections and there isn't much that can be done right now to stop it, according to an AI advisor for the United Nations (UN).
Speaking with Fox News Digital, Neil Sahota said his sources warned the growing use of deepfake advertisements may very well be "the greatest threat to democracy."
With the marathon US presidential election season getting underway just as AI-generated fakery reaches new heights of believability, experts fear the confluence could stress-test public trust in media and politics.
Senator Malcolm Byrne has said artificial intelligence (AI) will be a central issue in the next general election, and urged political parties to agree to a code of conduct to avoid it being misused.
The newly established independent Electoral Commission will oversee the next general election, and Mr Byrne said that AI should be one of its top priorities.
“We are democratizing disinformation by allowing ordinary people to communicate in ways that could be completely fake,” said Darrell West, senior fellow at the Brookings Institution’s Center for Technology Innovation. “I expect that will be the Wild West of the 2024 election. There’s just going to be crazy content everywhere.”
Congress is still struggling to grasp the rapidly evolving technology. According to Senate Majority Leader Chuck Schumer, lawmakers are at least months away from introducing comprehensive legislation to mitigate AI’s most serious threats.
https://about.bgov.com/news/what-to-know-in-washington-ai-deepfakes-spark-election-fears/
Dozens of Democratic lawmakers are calling on the Federal Election Commission to consider cracking down on the use of artificial intelligence technology in political advertisements, warning that deceptive ads could harm the integrity of next year’s elections.
The group of 50 lawmakers, led by California Rep. Adam Schiff, said in a letter due to be sent to the FEC on Thursday that the agency should clarify that existing law against “fraudulent misrepresentation” in political ads also applies to the use of so-called deepfakes – fake videos and images created using AI.
Artificial Intelligence Brings 'Nightmare' Scenario to 2024 Presidential Campaign: Analysts.
With the 2024 presidential election fast approaching and zero federal regulations in place to combat false AI-generated political stunts, voters likely will be left questioning not only what they know but what they see.
https://www.usnews.com/news/the-report/articles/2023-07-07/artificial-intelligence-brings-nightmare-scenario-to-2024-presidential-campaign-analysts
Jul 2023
With AI in the mix, the 2024 presidential race keeps getting weirder.
SOS America PAC, a Super PAC backing "Bitcoin mayor" Francis Suarez's 2024 presidential bid, just released a tool called "Ask AI Suarez," a realistic, AI-powered avatar of the candidate designed to answer questions about the current Miami mayor and his campaign.
The Federal Election Commission on Thursday was deadlocked on a request to develop regulations for AI-generated deepfake political ads.
Public Citizen, a nonpartisan advocacy group, submitted a petition last month asking the commission to establish rules, noting that advances in artificial intelligence have given political operatives the tools to produce campaign ads with computer-generated fake images that appear real. Such ads could misrepresent a candidate’s political views, a violation of existing federal law.
As Republican presidential candidate Ron DeSantis' campaign spreads fake images of 2024 GOP rival Donald Trump embracing former White House Coronavirus Task Force chief Anthony Fauci, a leading U.S. consumer advocacy group on Tuesday urged the Florida governor to take down and disavow the photos and to pledge to stop using artificial intelligence-generated deepfake imagery going forward.
Agence France-Presse reported last week that images showing Trump hugging and kissing Fauci in a DeSantis campaign ad were likely deepfakes, sparking condemnation from Republicans who support the twice-impeached, twice-indicted former president's 2024 candidacy.
The video was shared on Twitter by "DeSantis War Room," a "rapid response" account launched last August.
An onslaught of high-quality, AI-generated political “deepfakes” has already begun ahead of the 2024 presidential election – and Big Tech firms aren’t prepared for the chaos, experts told The Post.
The rise of generative AI platforms such as ChatGPT and the photo-focused Midjourney has made it easy to create false or misleading posts, pictures or even videos – from doctored footage of politicians making controversial speeches to bogus images and videos of events that never actually occurred.
Artificial Intelligence and Deepfakes in Strategic Deception Campaigns: The U.S. and Russian Experiences
The Malicious Use of Deepfakes Against Psychological Security and Political Stability
AI deepfakes are being weaponised in the race for US president - and Trump is the latest target
DeSantis Campaign Uses Apparently Fake Images to Attack Trump on Twitter
Deepfaking it: America's 2024 election collides with AI boom
Twice-impeached Republican presidential candidate Donald Trump shared an AI-generated “deepfake” video of gay CNN anchor Anderson Cooper reacting to Trump’s appearance at a recent Town Hall event. While the nine-second video delivers Trump’s usual brand of banal, crude self-aggrandizement, it also signals his willingness to share faked video content as part of his 2024 election campaign.
The video, which shows Anderson speaking immediately after Trump’s May 11 CNN Town Hall, has the anchor saying, “That was President Donald J. Trump ripping us a new ass**le here on CNN’s live presidential town hall. Thank you for watching. Have a good night.”
https://www.lgbtqnation.com/2023/05/trump-shared-a-disturbing-ai-video-of-gay-cnn-anchor-anderson-cooper/
A video that appears to show Hillary Clinton endorsing Ron DeSantis for president is spreading online as the Florida governor prepares to launch a 2024 US election bid. But the clip is manipulated; it includes fabricated audio dubbed over a December 2021 interview featuring the former secretary of state, who has criticized DeSantis's presidential ambitions.
Next year’s elections in Britain and the US could be marked by a wave of AI-powered disinformation, experts have warned, as generated images, text and deepfake videos go viral at the behest of swarms of AI-powered propaganda bots.
Sam Altman, CEO of the ChatGPT creator, OpenAI, told a congressional hearing in Washington this week that the models behind the latest generation of AI technology could manipulate users.
“The general ability of these models to manipulate and persuade, to provide one-on-one interactive disinformation is a significant area of concern,” he said.
“Regulation would be quite wise: people need to know if they’re talking to an AI, or if content that they’re looking at is generated or not. The ability to really model … to predict humans, I think is going to require a combination of companies doing the right thing, regulation and public education.”
The prime minister, Rishi Sunak, said on Thursday the UK would lead on limiting the dangers of AI. Concerns over the technology have soared after breakthroughs in generative AI, where tools like ChatGPT and Midjourney produce convincing text, images and even voice on command.
https://www.theguardian.com/technology/2023/may/20/elections-in-uk-and-us-at-risk-from-ai-driven-disinformation-say-experts
May 2023
The head of the consumer advocacy group Public Citizen on Tuesday called on the two major U.S. political parties and their presidential candidates to pledge not to use generative artificial intelligence or deepfake technology "to mislead or defraud" voters during the 2024 electoral cycle.
Noting that "political operatives now have the means to produce ads with highly realistic computer-generated images, audio, and video of opponents that appear genuine, but are completely fabricated," Public Citizen warned of the prospect of an "October Surprise" deepfake video that could go viral "with no ability for voters to determine that it's fake, no time for a candidate to deny it, and no way to demonstrate convincingly that it's fake."
The watchdog offered recent examples of deepfake creations, including an audio clip of President Joe Biden discussing the 2011 film We Bought a Zoo.
https://www.commondreams.org/news/deepfake-2024
May 2023
EXCLUSIVE: China’s expansive artificial intelligence (AI) operations could play a concerning role in the 2024 election cycle, Sen. Pete Ricketts warned on Thursday.
"There’s absolutely a possibility that they could do that for the 2024 election, and that's what we have to be on guard [for]," Ricketts told Fox News Digital in an interview in his Senate office.
During a Senate Foreign Relations subcommittee hearing earlier this month, Ricketts referenced China and its use of AI technology to create "deepfakes," which are fabricated videos and images that can look and sound like real people and events. A report released earlier this year by a U.S.-based research firm claimed a "pro-Chinese spam operation" was using AI deepfakes technology to create videos of fake news anchors reciting Beijing’s propaganda. Those videos were disseminated across social media platforms like Facebook, Twitter and YouTube, the report said. Meanwhile, China has its own regulations limiting the reach of deepfakes within its borders.
https://www.foxnews.com/politics/china-could-use-ai-deepfake-technology-disrupt-2024-election-gop-senator-warns
Apr 2023
Deepfakes and political misinformation in U.S. elections
Sorell, Tom (2023). Deepfakes and political misinformation in U.S. elections. Techné: Research in Philosophy and Technology. ISSN 2691-5928. (In Press)
Abstract
Audio and video footage produced with the help of AI can show politicians doing discreditable things that they have not actually done. This is deepfaked material. Philosophers and lawyers have recently claimed that deepfakes along these lines have special powers to harm the people depicted and their audiences, powers that more traditional forms of faked imagery and sound footage lack. According to some philosophers, deepfakes are particularly “believable”, and the technology that can produce them is or will soon be widely available, so that deepfakes will proliferate. I first give reasons why deepfake technology is not particularly well suited to producing “believable” political misinformation in a sense to be defined. Next, I challenge the two most prominent philosophical claims, from Don Fallis and Regina Rini, about the consequences of the wide availability of deepfakes. My argument is not that deepfakes are harmless, but that their power to do major harm in liberal party political environments that contain sophisticated mass media is highly conditional.
https://wrap.warwick.ac.uk/171005/
Apr 2023
Joe Biden announced his reelection campaign this morning, and the Republican National Committee (RNC) has responded with an attack ad featuring AI-generated images.
The ad contains a series of stylized images imagining Biden’s reelection in 2024. It suggests this will lead to a series of crises, with images depicting explosions in Taiwan after a Chinese invasion and military deployments on what are presumably US streets.
https://www.theverge.com/2023/4/25/23697328/biden-reelection-rnc-ai-generated-attack-ad-deepfake
Apr 2023
A challenge in political media in 2024 will certainly include how we navigate artificial intelligence. With new advancements in AI technology, a big concern is its use in social media and contribution to misinformation — it is rapidly evolving and easy to make convincing audio, images or text. A recent example is the fake AI-generated images of former president Donald Trump being arrested that circulated on social media.
https://www.washingtonpost.com/politics/2023/04/17/ai-deep-fake-technology-election-2024/
Apr 2023
No one wants to be falsely accused of saying or doing something that will destroy their reputation. Even more nightmarish is a scenario where, despite being innocent, the fabricated "evidence" against a person is so convincing that they are unable to save themselves. Yet thanks to a rapidly advancing type of artificial intelligence (AI) known as "deepfake" technology, our near-future society will be one where everyone is at great risk of having exactly that nightmare come true.
Deepfakes — or videos that have been altered to make a person's face or body appear to do something they did not in fact do — are increasingly used to spread misinformation and smear their targets. Political, religious and business leaders are already alarmed by the viral spread of deepfakes that have maligned prominent figures like former US President Donald Trump, Pope Francis and Twitter CEO Elon Musk. Perhaps most ominously, a deepfake of Ukrainian President Volodymyr Zelenskyy attempted to dupe Ukrainians into believing their military had surrendered to Russia.
https://www.salon.com/2023/04/15/deepfake-videos-are-so-convincing--and-so-easy-to-make--that-they-pose-a-political/
Mar 2023
AI deepfakes could advance misinformation in the run up to the 2024 election
Mar 2023
If you’re wondering whether you somehow missed the news event of the decade, you can be assured none of this actually happened, something Higgins was entirely clear about. Instead, the pictures were created using Midjourney, one of a growing number of artificial intelligence-based image creators, as an intellectual experiment. They were deepfakes, synthetic media created with the aid of artificial intelligence.
https://www.newsweek.com/deepfakes-could-destroy-2024-election-1790037
The danger of those Trump ‘deepfake’ images
The response to AI-generated viral images of the former president’s arrest signals the potential for disinformation campaigns that could sow media chaos, according to experts.
https://www.independent.co.uk/news/world/americas/us-politics/trump-arrest-deepfake-ai-images-b2307243.html