A once-feared army general, who ruled Indonesia with an iron fist for more than three decades, has a message for voters ahead of upcoming elections – from beyond the grave.
“I am Suharto, the second president of Indonesia,” the former general says in a three-minute video that has racked up more than 4.7 million views on X and spread to TikTok, Facebook and YouTube.
While mildly convincing at first, it’s clear that the stern-looking man in the video isn’t the former Indonesian president. The real Suharto, dubbed the “Smiling General” because he was always seen smiling despite his ruthless leadership style, died in 2008 at age 86.
The video was an AI-generated deepfake, created using tools that cloned Suharto's face and voice. “The video was made to remind us how important our votes are in the upcoming election,” said Erwin Aksa, deputy chairman of Golkar, one of Indonesia's largest and oldest political parties. He first shared the video on X ahead of the February 14 elections.
https://edition.cnn.com/2024/02/12/asia/suharto-deepfake-ai-scam-indonesia-election-hnk-intl/index.html
The Presidency of Moldova on Friday denied statements attributed to President Maia Sandu in a new deepfake video made with the help of artificial intelligence (AI).
“In the context of the hybrid war against Moldova and its democratic leadership, the image of the head of state was used by falsifying the image and sound. The purpose of these fake images is to create mistrust and division in society, consequently to weaken the democratic institutions of Moldova,” a press release said.
Meta, parent company of Instagram and Facebook, will require political advertisers around the world to disclose any use of artificial intelligence in their ads, starting next year, the company said Wednesday, as part of a broader move to limit so-called “deepfakes” and other digitally altered misleading content.
The rule is set to take effect next year, the company added, ahead of the 2024 US election and other future elections worldwide.
The signatories include the Social Democratic Party, the Centre Party, the Green Party, the Liberal Green Party and the Protestant Party.
In a press release issued on Monday, they asked their cantonal sections to adhere to the code of conduct, which states that any use of AI during political campaigns (recorded adverts, posters, advertisements) must be explicitly declared. In addition, the parties forbid the use of AI in so-called “negative” campaigns, in which they attack their political opponents.
A survey published on Tuesday (19 September) revealed that 71% of German citizens and 57% of French citizens are concerned about the threats AI and deepfake technology pose to elections.
The survey was conducted by Luminate, a philanthropic organisation founded by Pam and Pierre Omidyar, the latter a former chairman of the board of the internet sales platform eBay. It finds that more than half of German and French citizens are concerned about AI and deepfakes threatening election results.
With a sample size of 1,008 French, 2,067 German and 2,156 British citizens, the survey took into account various political backgrounds, regions, genders and age groups.
On the topic of personal data protection, the results reveal that 81% of French citizens and 75% of German citizens also consider it important to have the right to object to social media platforms processing their personal data for advertising purposes.
https://www.euractiv.com/section/disinformation/news/eu-citizens-see-ai-and-deepfakes-as-a-threat-for-next-elections-survey-finds/
Ever heard of “deepfake”? A politician in Serbia fell victim, and “many believed it”
The Turkish deepfake porn video could change the future of elections
The purported video of Muharrem Ince has shifted the balance of the landmark election. Image manipulation like this is just the start...
https://www.telegraph.co.uk/news/2023/05/14/turkey-deepfake-elections-erdogan-muharrem-ince/
+ https://fortune.com/2023/05/15/turkeys-deepfake-influenced-election-spells-trouble/
Turkish President Recep Tayyip Erdogan’s main political opponent accused Russia of using deepfakes and other artificial intelligence (AI)-generated material to meddle in the country’s upcoming presidential election.
“The Russians have a vested interest in backing an Erdogan presidency to ensure that he basically stays in power, mainly because the Russians benefit [from] driving a wedge between Turkey and NATO, and they’ve been very successful about that in the last decade or so,” Sinan Ciddi, non-resident senior fellow on Turkey at the Foundation for Defense of Democracies, told Fox News Digital.
“So, in the last several days, weeks, it has been credibly reported by Turkish sources that Russian bot accounts, Twitter accounts, all sorts of disinformation campaigns have started pressing the thumb down on backing the Erdogan presidency, and that comes as no surprise.”
https://wfin.com/fox-world-news/deepfakes-porn-tapes-bots-how-ai-has-shaped-a-vital-nato-allys-presidential-election/
May 2023
Turkish presidential candidate quits race after release of alleged sex tape
Muharrem İnce pulls out just days from close election race saying alleged sex tape is deepfake
https://www.theguardian.com/world/2023/may/11/muharrem-ince-turkish-presidential-candidate-withdraws-alleged-sex-tape
May 2023
President Recep Tayyip Erdoğan shocked many during an election rally in İstanbul on Sunday when he played a deepfake video portraying the opposition as connected to a terrorist group.
Seeking to extend his two-decade-long rule in the presidential election on May 14, Erdoğan addressed his supporters at a big rally in İstanbul’s now-closed Atatürk Airport in Yeşilköy.
Erdoğan, who has been accusing the opposition and their joint presidential candidate Kemal Kılıçdaroğlu of working hand-in-hand with the outlawed Kurdistan Workers’ Party (PKK), stopped at one point to show a video without telling his supporters it had been altered.
https://www.turkishminute.com/2023/05/08/erdogan-play-deepfake-video-at-election-rally-to-show-opposition-linked-to-terrorist-group/
Apr 2023
No one wants to be falsely accused of saying or doing something that will destroy their reputation. Even more nightmarish is a scenario where, despite being innocent, the fabricated "evidence" against a person is so convincing that they are unable to save themselves. Yet thanks to a rapidly advancing type of artificial intelligence (AI) known as "deepfake" technology, our near-future society will be one where everyone is at great risk of having exactly that nightmare come true.
Deepfakes, videos that have been altered to make a person's face or body appear to do something they did not in fact do, are increasingly used to spread misinformation and smear their targets. Political, religious and business leaders are already expressing alarm at the viral spread of deepfakes that maligned prominent figures like former US President Donald Trump, Pope Francis and Twitter CEO Elon Musk. Perhaps most ominously, a deepfake of Ukrainian President Volodymyr Zelenskyy attempted to dupe Ukrainians into believing their military had surrendered to Russia.
https://www.salon.com/2023/04/15/deepfake-videos-are-so-convincing--and-so-easy-to-make--that-they-pose-a-political/
Mar 2023
‘Noah’ and ‘Daren’ report good news about Venezuela. They’re deepfakes.
The avatars are the latest tool in Venezuela’s disinformation campaign, experts say
Mar 2023
On February 27, 2023, a video of Biden announcing a new national draft, in which 20-year-olds would be conscripted into military service, began making its rounds on the internet.
If you’ve been paying attention to how Biden has been handling the conflict in Ukraine, this doesn’t seem too far-fetched. Billions of our tax dollars have trickled overseas under the watchful (or senile) eye of the president and directly into Ukraine’s outstretched hands.
https://www.theblaze.com/shows/the-glenn-beck-program/deep-fake-ai
Mar 2023
U.S. Sen. Elizabeth Warren did not say Republicans should be restricted from voting in the 2024 presidential election, despite what a recent deepfake video making the rounds on social media appears to show.
The altered video, made with a clip from an MSNBC interview with the Democrat in December, purports to show Warren stating that allowing Republicans to vote could threaten election integrity, a statement she never made.
The video, which was confirmed to have been made using artificial intelligence technology, garnered about 189,000 views on Twitter in a week’s time, according to Newsweek, which reports the clip appears to have originated on TikTok.
https://www.boston.com/news/politics/2023/03/02/elizabeth-warren-deepfake-video-fact-check/
Feb 2023 (the first time we've seen a state-aligned operation use AI-generated video footage of a fictitious person to create deceptive political content)
WASHINGTON — The "news broadcasters" appear stunningly real, but they are AI-generated deepfakes in first-of-their-kind propaganda videos that a research report published Tuesday attributed to Chinese state-aligned actors.
The fake anchors — for a fictitious news outlet called Wolf News — were created by artificial intelligence software and appeared in footage on social media that seemed to promote the interests of the Chinese Communist Party, U.S.-based research firm Graphika said in its report.
"This is the first time we've seen a state-aligned operation use AI-generated video footage of a fictitious person to create deceptive political content," Jack Stubbs, vice president of intelligence at Graphika, told AFP.
In one video analyzed by Graphika, a fictitious male anchor who calls himself Alex critiques U.S. inaction over gun violence plaguing the country. In the second, a female anchor stresses the importance of "great power cooperation" between China and the United States.
Advancements in AI have stoked global alarm over the technology's potential for disinformation and misuse, with deepfake images created out of thin air and people shown mouthing things they never said.
Last year, Facebook owner Meta said it took down a deepfake video of Ukrainian President Volodymyr Zelenskyy urging citizens to lay down their weapons and surrender to Russia.
There was no immediate comment from China on Graphika's report, which comes just weeks after Beijing adopted expansive rules to regulate deepfakes.
China enforced new rules last month that will require businesses offering deepfake services to obtain the real identities of their users. They also require deepfake content to be appropriately tagged to avoid "any confusion."
The Chinese government has warned that deepfakes present a "danger to national security and social stability."
Graphika's report said the two Wolf News anchors were almost certainly created using technology provided by the London-based AI startup Synthesia.
The website of Synthesia, which did not immediately respond to AFP's request for comment, advertises software for creating deepfake avatars "based on video footage of real actors."
Graphika said it discovered the deepfakes on platforms including Twitter, Facebook and YouTube while tracking pro-China disinformation operations known as "spamouflage."
"Spamouflage is a pro-Chinese influence operation that predominantly amplifies low-quality political spam videos," said Stubbs.
"Despite using some sophisticated technology, these latest videos are much the same. This shows the limitations of using deepfakes in influence operations—they are just one tool in an increasingly advanced toolbox."
https://www.voanews.com/a/research-deepfake-news-anchors-in-pro-china-footage/6953588.html
A video viewed thousands of times on TikTok appears to show former US leader Donald Trump endorsing Nigerian presidential candidate Peter Obi ahead of the country's general elections in February. However, AFP Fact Check found the clip was digitally altered. It was posted on a TikTok account that uses artificial intelligence to make deepfakes of famous people. The original footage was shot during Trump's presidency while he was holding a news conference on COVID-19 at the White House.
The video, viewed more than 69,000 times, shows Trump giving a press conference in the official White House Press Briefing Room.
A left-wing advocacy group used a Mark Zuckerberg deepfake to urge lawmakers to pass a tech regulation bill before the end of the year.
Demand Progress Action, which describes its mission as protecting the "democratic character of the internet," used deepfake technology to have an actor appear and talk exactly like the Meta CEO. In the video, the fake Zuckerberg ironically "thanks" Democratic leaders Chuck Schumer and Nancy Pelosi for “holding up new laws that hold us accountable,” while displaying the two leaders' links to the tech companies targeted by the bill. The video makes clear from the outset that the Zuckerberg speaking is fake, and it shows the actor side by side with the impostor at the end.
Hours after Pakistan Tehreek-e-Insaf (PTI) Senator Azam Swati claimed that his wife was sent a private video featuring the two of them, the Federal Investigation Agency (FIA) on Saturday declared the video "fake" following a forensic analysis.
It said that the objectionable video circulating on the internet was analysed frame by frame, which revealed that ‘deepfake technology’ had been used to edit it.
https://tribune.com.pk/story/2384813/private-video-of-swati-wife-made-using-deepfake-technology-fia
How Baby Shark is jeopardizing US elections
BRAZIL
In September, a doctored video of Jornal Nacional, the main news program of the Globo television network, spread across social media. In it, anchors William Bonner and Renata Vasconcellos presented the results of a presidential voting-intention poll, but the figures for who was the leading candidate had been inverted, both in the on-screen graphics and in the anchors' narration. The next day, the news program issued a clarification, warning that the video was being used to misinform the public and explaining that it was a deepfake, a technique that uses artificial intelligence to make deep edits to content. With it, it is possible, for example, to digitally swap a person's face or simulate their voice, making them appear to do things they never did or say things they never said.
In August, another video of the news program with similar editing, which also inverted the results of a presidential poll, was posted on the video platform TikTok, where it reached 2.5 million views, according to Projeto Comprova, an initiative that brings together journalists from 43 of the country's news outlets to fact-check disinformation.
"Some deepfake technique may have been used in these videos, but a more detailed analysis is needed. For us, what matters is knowing that they are false," notes computer scientist Anderson Rocha, director of the Institute of Computing at the State University of Campinas (Unicamp), where he coordinates the Artificial Intelligence Laboratory (Recod.ai). The researcher has been studying ways to detect malicious alterations in photos and videos, including deepfakes, also referred to as synthetic media.
https://revistapesquisa.fapesp.br/deepfakes-o-novo-estagio-tecnologico-das-noticias-falsas/
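Neither the FIA item above nor the Recod.ai group describes its actual forensic pipeline, and production deepfake detectors typically rely on trained neural models. Purely as an illustration of the kind of frame-by-frame check these clippings mention, here is a minimal sketch in Python (assuming the opencv-python and numpy packages) that extracts each frame of a video and flags frames whose JPEG re-compression residual is unusually high, a crude classical-forensics signal that a region may have been edited. The file name and threshold are hypothetical placeholders, and this is not the method used by any organisation cited here.

# Minimal illustrative sketch only (not the FIA's or Recod.ai's actual method):
# re-encode each frame as JPEG and measure the residual; heavily edited
# regions often leave a different compression footprint.
import cv2
import numpy as np

def frame_error_levels(video_path, quality=90):
    """Yield (frame_index, mean re-compression residual) for each frame."""
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of video
        encoded_ok, buf = cv2.imencode(".jpg", frame, [cv2.IMWRITE_JPEG_QUALITY, quality])
        if encoded_ok:
            recompressed = cv2.imdecode(buf, cv2.IMREAD_COLOR)
            residual = cv2.absdiff(frame, recompressed)
            yield index, float(np.mean(residual))
        index += 1
    cap.release()

if __name__ == "__main__":
    # "suspect.mp4" is a placeholder path.
    scores = list(frame_error_levels("suspect.mp4"))
    if scores:
        average = np.mean([s for _, s in scores])
        for i, s in scores:
            if s > 2 * average:  # arbitrary cut-off; real detectors use trained models
                print(f"frame {i}: residual {s:.2f}, worth a closer manual look")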
Scholars and political observers have raised concerns over public opinion maneuvering on social media in Southeast Asia as three countries in the region (the Philippines, Malaysia and Indonesia) gear up for elections.
Propagandists' strategic maneuvering of public opinion on social media remains a dangerous threat to democracy in Southeast Asia. Over the years, the strategic use of cybertroopers in Southeast Asian countries has been prominent, especially during election periods.
The arrival of the election year is raising information-security professionals' concern about deepfake videos, which use artificial intelligence to alter people's faces and voices with believable results. Consulted by CNN, these experts add a caveat: the manipulation technique is extremely laborious and complex, which makes it hard to mass-produce as a disinformation tool in the elections. "[Widespread dissemination] is something more hypothetical than real," argues Mike Price, chief technology officer of ZeroFox, an information-security company that works on deepfake detection. https://www.cnnbrasil.com.br/tecnologia/deepfake-preocupa-especialistas-que-veem-tecnologia-incipiente-no-jogo-eleitoral-do-brasil/
Goa elections: Fake WhatsApp chat shared by AAP and Congress supporters
Armed with hours of specially-recorded footage of opposition People Power Party candidate Yoon Suk-yeol, the team has created a digital avatar of the frontrunner -- and set "AI Yoon" loose on the campaign trail ahead of a March 9 election.
But AI Yoon's creators believe he is the world's first official deepfake candidate -- a concept gaining traction in South Korea, which has the world's fastest average internet speeds.
https://www.france24.com/en/live-news/20220214-deepfake-democracy-south-korean-candidate-goes-virtual-for-votes
Jan 2022
I have to state categorically that many methods will be used to lie to and deceive people in the 2023 general elections. Among these methods will be the computational power that deepfakes afford anyone with the skill and design prowess. Deepfakes are the most prominent form of what is being called "synthetic media": images, sound and video that appear to have been created through traditional means but that have, in fact, been constructed by complex software. Deepfakes have been around for years and, even though their most common use to date has been transplanting the heads of celebrities onto the bodies of actors in pornographic videos, they have the potential to create convincing footage of any person doing anything, anywhere. What this means is that I can create a video of you telling a lie, put your face on it and share it on social media.
https://www.thisdaylive.com/index.php/2022/01/15/deepfakes-will-dominate-2023-elections/
Researchers from the Massachusetts Institute of Technology (MIT) have put out a new report investigating whether political video clips might be more persuasive than their textual counterparts, and found the answer is... not really. (...) To gauge how effective this tech would be at tricking anyone, the MIT team conducted two sets of studies, involving close to 7,600 participants total from around the U.S. Across both studies, these participants were split into three different groups. In some cases, the first was asked to watch a randomly selected “politically persuasive” political ad (you can see examples of what they used here), or a popular political clip on covid-19 that was sourced from YouTube. The second group was given a transcription of those randomly selected ads and clips, and the third group was given, well, nothing at all since they were acting as the control group. The result? “Overall, we find that individuals are more likely to believe an event occurred when it is presented in video versus textual form,” the study reads. In other words, the results confirmed that, yes, seeing was believing, as far as the participants were concerned. But when the researchers dug into the numbers around persuasion, the difference between the two mediums was barely noticeable, if at all. LINK + https://www.pnas.org/content/118/47/e2114388118
Nov 2021
PHILIPPINES
The Commission on Elections (Comelec) over the weekend warned candidates for the 2022 elections that they will be held accountable if they use deep fakes in their respective political campaigns. Deep fakes are videos that have been doctored or spliced to make it look as though a certain personality is saying or doing something he or she did not say or do. Comelec spokesperson James Jimenez said that the integrity pledge candidates signed when they filed their Certificates of Candidacy in October includes a section on deep fakes. https://pia.gov.ph/news/2021/11/08/deep-fakes-not-allowed-for-2022-polls-comelec
Nov 2021
TAIWAN
The National Security Bureau (NSB) has established a task force dedicated to countering deepfakes used by Chinese perpetrators to influence elections or disrupt society in Taiwan. The move was prompted by the discovery in 2018 of falsified photos or videos of Taiwan's leadership released by the Chinese Communist Party (CCP), said Chen Chin-kuang (陳進廣), deputy director-general of the NSB. He made the remark at a legislative interpellation on Wednesday (Nov. 3) regarding Taiwan's response to 21st-century crime, reported Liberty Times. According to Chen, intelligence has pointed to such activity involving manipulated audio and visual content, which warranted setting up a special unit to tackle the potential threats that AI-generated content poses to national security and electoral integrity. https://www.taiwannews.com.tw/en/news/4333835
Jan 2022
Taiwan's Investigation Bureau said Wednesday it is developing software to identify so-called "deepfakes," as part of its efforts to prevent the dissemination of false information by hostile foreign forces ahead of the local government elections later in the year. In a statement, the bureau said it is also collaborating with prosecutors to crack down on vote buying and the infiltration of foreign funds in the Nov. 26 local government elections. The information promulgated in "deepfakes" usually involves realistically manipulated fake videos and other presentations of public figures, using sophisticated machine-learning techniques, the bureau said, adding that it is working with tech firms to develop software to counter any such attempts ahead of the November elections. https://focustaiwan.tw/sci-tech/202201190028
Apr 2021
Netherlands politicians just got a first-hand lesson about the dangers of deepfake videos. According to NL Times and De Volkskrant, the Dutch parliament's foreign affairs committee was fooled into holding a video call with someone using deepfake tech to impersonate Leonid Volkov, Russian opposition leader Alexei Navalny's chief of staff. The perpetrator hasn't been named, but this wouldn't be the first incident. The same impostor had conversations with Latvian and Ukrainian politicians, and approached political figures in Estonia, Lithuania and the UK. https://finance.yahoo.com/news/netherlands-deepfake-video-chat-navalny-212606049.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAE4xP9kO2tigeGtjBkv14JWX_w8I-KMzGpdUd4vlBy_9tTLrRrWzqYOUUsMWYjYGVfqSByoBV9J4aNOP2ZLxwQglNVW1kVlIgni-TQMHUgMi6DUKm0XSaZg4VM3xUnwFHCA2MrW2fB1wrWBrnfIn7kv0HfB47uL1CdUv5AU35JUg + https://news.err.ee/1608190012/scammers-imitating-russian-opposition-trick-estonian-mps-with-deepfake
Several European parliamentarians have been deceived with this technology, believing they were on a video call with the Russian opposition. Among them are Rihards Kols, head of Latvia's foreign affairs committee, and Tom Tugendhat, chair of the UK's foreign affairs committee, as well as other politicians from Estonia and Lithuania, as described by The Guardian. https://www.xataka.com/robotica-e-ia/enganan-a-varios-politicos-europeos-a-traves-videollamada-deepfake-que-imitaba-a-opositor-ruso
Political representatives of European Union countries are accusing Russia of using deepfakes to imitate opposition figures on video with digital "masks" and hold official meetings with parliamentarians from other countries. According to The Guardian, there were several reports of video conferences with someone posing as Leonid Volkov, Alexei Navalny's chief of staff. Navalny is the main opponent of Russia's current president, Vladimir Putin. https://www.tecmundo.com.br/seguranca/216191-russia-acusada-usar-deepfakes-imitar-politicos-ue.htm
‘Deepfake’ that supposedly fooled European politicians was just a look-alike, say pranksters. Fear of deepfakes seems to have outpaced the technology itself. https://www.theverge.com/2021/4/30/22407264/deepfake-european-polticians-leonid-volkov-vovan-lexus
Apr 2021
SOUTH AFRICA:
Stop believing your lying eyes: Deepfakes are coming, and they might reshape SA’s politics. https://www.dailymaverick.co.za/article/2021-04-11-stop-believing-your-lying-eyes-deepfakes-are-coming-and-they-might-reshape-sas-politics/
Mar 2021
NIGERIA (weak): 2023 elections and danger of ‘deepfake’ technology to Nigerian democracy. https://guardian.ng/opinion/2023-elections-and-danger-of-deepfake-technology-to-nigerian-democracy/