It’s Nothing but a Deepfake! The Effects of Misinformation and Deepfake Labels Delegitimizing an Authentic Political Speech
Report on Deep Fakes and National Security
Deepfakes and fake news pose a growing threat to democracy, experts warn
The actual financial scam Grothaus describes involves fraudsters who used a voice recording of a CEO to call his accountant and get him to wire them $243,000. Embarrassing – but also only possible because of a pretty gullible interlocutor. The political case study he describes is an amateur edit of a video that made it look as if Hollywood star Dwayne Johnson was humiliating Hillary Clinton in the run-up to the 2016 election. The video went viral in Magaland, but not because its authenticity was particularly persuasive. It just fitted with people’s existing biases.
That’s the thing about “disinformation”: it’s not really geared towards changing people’s minds. It’s about feeding them what they want to consume anyway. The quality of the deception is not necessarily the crucial factor. Will deepfakes change this? Will their mere existence destroy any vestiges of trust in a shared reality? Potentially. But one thing we do know is that the discourse that has grown up around this issue, rather than being something radically new, is part of a much older dynamic. https://www.theguardian.com/books/2021/dec/16/trust-no-one-inside-the-world-of-deepfakes-by-michael-grothaus-review-disinformations-superweapon
Dutch politicians just got a first-hand lesson about the dangers of deepfake videos. According to NL Times and De Volkskrant, the Dutch parliament's foreign affairs committee was fooled into holding a video call with someone using deepfake tech to impersonate Leonid Volkov (above), Russian opposition leader Alexei Navalny's chief of staff. The perpetrator hasn't been named, but this wouldn't be the first incident. The same impostor had conversations with Latvian and Ukrainian politicians, and approached political figures in Estonia, Lithuania and the UK. https://www.engadget.com/netherlands-deepfake-video-chat-navalny-212606049.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZGFya3JlYWRpbmcuY29tLw&guce_referrer_sig=AQAAAGtJPuVXkT6dt5uj-AXKfuESgeUS32XYSnhlheH4c1fP5q6sRc9GVLhsD_201gg2xW1UXAIyvwO2kV3Niwdjv2Kzl6M3Eu_LEubjvlOE_ojwlzUbiis6ugKOajJa0v1GrV0LrAWarzdHnLnCcdxtj20dh36ICUUAcyuJ3YqQ3F5_
Jan21
New research finds that misinformation consumed in video format is no more effective than misinformation in textual headlines or audio recordings: deepfakes are as persuasive as these other formats, but no more so.
Seeing is not necessarily believing these days. Based on these findings, deepfakes do not facilitate the dissemination of misinformation more than false texts or audio content. However, like all misinformation, deepfakes are dangerous to democracy and media trust as a whole. The best way to combat misinformation and deepfakes is through education. Informed digital citizens are the best defense. https://digitalcontentnext.org/blog/2021/01/25/how-powerful-are-deepfakes-in-spreading-misinformation/
Dec20
One of the most concerning consequences of disinformation for democracy is that it has the potential to create a crisis of legitimacy. Disinformation can reduce the legitimacy of policy outputs, election outcomes, government, democratic processes, and democracy as a belief-system through:
Tainting the preference formation phase of decision-making, potentially generating a trust deficit, or boosting an existing one, not just in government and governance processes, but also in fellow members of the polity. This may jeopardise crucial ingredients of democracy.
Stimulating widespread distrust of the veracity of information, leading to a ‘post-truth’ order where either anything goes, or correct information is disbelieved, resulting in political apathy.
Undermining political culture more broadly by corroding collective belief in democracy as an ideology. https://blogs.lse.ac.uk/politicsandpolicy/digitisation-democracy/
?20
This growing phenomenon has been used in political scenarios to misinform the public on various debates [2]. One instance is a deepfake video aired by an Italian satirical TV show targeting the former Prime Minister of Italy, Matteo Renzi. The video, shared on social media, depicted him insulting fellow politicians. As it spread online, many individuals came to believe it was authentic, which led to public outrage. The Emerging Threats of Deepfake Attacks and Countermeasures; Shadrack Awah Buo 10.13140/RG.2.2.23089.81762
Dec20 [was it a cheapfake - the Pelosi video - that first raised the alarm about the dangers of deepfakes?]
Despite bipartisan calls for the video to be taken down, a Facebook spokesperson confirmed that it would not be removed because the platform does not have policies that dictate the removal of false information [20]. This has prompted world governments to look for ways to regulate the use of deepfake technology (DT) [7]. The Emerging Threats of Deepfake Attacks and Countermeasures; Shadrack Awah Buo 10.13140/RG.2.2.23089.81762
Nov20
“There are now businesses that sell fake people. On the website Generated.Photos, you can buy a “unique, worry-free” fake person for $2.99, or 1,000 people for $1,000. If you just need a couple of fake people — for characters in a video game, or to make your company website appear more diverse — you can get their photos for free on ThisPersonDoesNotExist.com. Adjust their likeness as needed; make them old or young or the ethnicity of your choosing. If you want your fake person animated, a company called Rosebud.AI can do that and can even make them talk.” LINK + The New York Times this week did something dangerous for its reputation as the nation’s paper of record. Its staff played with a deepfake algorithm, and posted online hundreds of photorealistic images of non-existent people. For those who fear democracy being subverted by the media, the article will only confirm their conspiracy theories. The original biometric — a face — can be created in as much uniqueness and diversity as nature can, and with much less effort. LINK
Nov20
The fake audio of González Laya and Bin Laden that is going viral on WhatsApp: this is how a voice 'fake' is made. https://www.elespanol.com/omicrono/tecnologia/20201112/falso-audio-gonzalez-laya-bin-laden-whatsapp/535447162_0.html
Nov20
GENERAL: As Jane Lytvynenko, a senior reporter at BuzzFeed News, says in Episode 2, “Videos are easily taken out of context and miscaptioned, which is a very big problem for misinformation. And part of the reason why they’re a much bigger problem than something like a written article is that people very much lean towards the seeing-is-believing gut instinct.”
https://immerse.news/prepare-dont-panic-for-deepfakes-c77f9f683f30
Nov20 (resurrecting the dead)
On Thursday, shortly before Halloween and in the wake of a much derided, viral tweet exposing her immense wealth and privilege, Kim Kardashian West reacted to Kanye West’s surprise gift to her: a hologram of her dead father. In an Instagram post, Kardashian West wrote, “For my birthday, Kanye got me the most thoughtful gift of a lifetime. A special surprise from heaven. A hologram of my dad. It is so lifelike and we watched it over and over, filled with lots of tears and emotion. I can’t even describe what this meant to me and my sisters, my brother, my mom and closest friends to experience together. Thank you so much Kanye for this memory that will last a lifetime.” The Robert Kardashian hologram, which also relied on deepfake technology, delivered a special 40th birthday message to his daughter and also repeatedly dubbed Kanye West a genius. https://slate.com/technology/2020/11/robert-kardashian-joaquin-oliver-deepfakes-death.html
Oct20 (a warning before it happens...)
The ruling Georgian Dream party says that the ‘radical opposition’ which has ‘zero chance of winning the parliamentary elections’ has plans to release deepfakes on election day, on Saturday. Kobakhidze says that the videos, which may depict members of the ruling party, including party founder Bidzina Ivanishvili, will aim to mislead voters. https://agenda.ge/en/news/2020/3319
Oct20/DANGERS:
Deepfakes have 'already started WW3' online in a dangerous 'corrosion of reality'. The nature of warfare is changing rapidly, and most of us aren't ready for the chaos and doubt created when artificial intelligence (AI) can manipulate video to make any politician say anything. https://www.dailystar.co.uk/news/latest-news/deepfakes-already-started-ww3-online-22858256
Deepfake a clear and present danger to democracy https://www.timeslive.co.za/sunday-times/opinion-and-analysis/2020-10-18-deepfake-a-clear-and-present-danger-to-democracy/
Oct20
Deepfakes can help authoritarian ideas flourish even within a democracy, enable authoritarian leaders to thrive, and be used to justify oppression and disenfranchisement of citizens, Ashish Jaiman, director of technology and operations at Microsoft, said on Thursday. “Authoritarian regimes can also use deepfakes to increase populism and consolidating power”, and they can also be very effective for nation states looking to sow the seeds of polarisation, amplify division in society, and suppress dissent, Jaiman added, while speaking at the ORF-organised cybersecurity conference CyFy. Jaiman also pointed out that deepfakes can be used to make pornographic videos, and that the targets of such efforts will “exclusively be women”. How deepfakes can affect democracies https://www.medianama.com/2020/10/223-deepfakes-impact-democracies/
Oct20
China is likely to rely on artificial intelligence-generated disinformation content, such as deep fake and deep voice videos, as part of its psychological and public opinion warfare across the world, a new study by United States-based think tank Atlantic Council says. The Atlantic Council's Digital Forensics Lab (DFRLab) has published a new study analysing Chinese disinformation campaigns and recent trends which suggest that despite high success among the domestic audience base, the Chinese Communist Party (CCP) struggles to drive its message home on the foreign front. https://www.indiatoday.in/world/story/ai-driven-deep-fakes-next-big-tool-chinese-disinformation-campaign-study-1730903-2020-10-12
Oct20 rewriting history
This new technology doesn’t just threaten our present discourse. Soon, AI-generated synthetic media may reach into the past and sow doubt about the authenticity of historical events, potentially destroying the credibility of records left behind in our present digital era.
https://www.fastcompany.com/90549441/how-to-prevent-deepfakes
“The existence of deepfake videos means that sometimes real videos are taken as fakes. In January 2019, in the African nation of Gabon, a video of President Ali Bongo, who had not made a public appearance for several months, triggered a coup. The military believed the video was a fake, although the president later confirmed it was real.” https://www.scmp.com/comment/opinion/article/3103331/us-elections-violence-india-threat-deepfakes-only-growing In other words, being true is no longer enough to be believed; ten years ago this would have been impossible, but today we doubt what we see and question even what is true. With what consequences for our life in society?
This election, simply reaching the polls to cast a vote is complicated. Foreign agents, bots, inaccurate tweets and White House attacks on the validity of elections can confuse voters. Cyberattacks can reach voters by email and phone, sending misleading information about polling places or mail-in deadlines, creating long lines at polling locations or shutting down polls in targeted communities. The risk of COVID-19 deters people from going to the polls, as the Spanish Flu did in elections a century ago. But what makes this year's election truly unique is the widespread use of mail-in ballots. "Today, forces are at work to make people not participate in the election by questioning the integrity of elections and saying the system is broken," said Christina Bellantoni, professor at the USC Annenberg School for Communication and Journalism and director of the Annenberg Media Center. "With so few people undecided about the upcoming presidential election, influencing just a handful of people on the margins can sway an election." https://phys.org/news/2020-09-deepfakes-fake-news-array-aim.html
A reduction of trust in news circulating online. In addition to changing our perceptions, tarnishing the reputation of celebrities, targeting victims for blackmail, and influencing voting behavior, deepfakes are now eroding trust in news circulating online. Since deepfake technology is improving faster than many believed, the “realness” of such fake content is becoming ever more convincing.
Many deepfake videos can not only manipulate facial expressions; using generative neural networks, they can now also reproduce a myriad of movements, including head rotation, eye gaze and blinking, and full 3D head positioning. Never one to miss an opportunity, Google has claimed that it is working on a system capable of detecting deepfake videos. For this, it is creating deepfakes itself, as stated in a blog post on 24 September. It has built a large dataset of 363 real videos of consenting actors plus an additional 3,068 falsified videos, which will be used by researchers at the Technical University of Munich. Regardless, deepfakes have already made their mark and eroded the trust factor of news circulating online. The sectors most targeted by deepfakes are entertainment (62.7 percent), fashion (21.7 percent), sport (4.3 percent), business (4.1 percent), and politics (4.0 percent). The most targeted countries currently include the USA and UK (together about 61 percent of targets), South Korea, India, and Japan. Of course, these numbers will keep increasing over time. https://www.itproportal.com/features/how-deepfakes-make-us-question-everything-in-2020/
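The sector percentages above come straight from the article; a couple of lines of arithmetic make the implied breakdown explicit (the "other" share is derived from the listed figures, not stated in the source):

```python
# Reported share of deepfake targets by sector (figures quoted above).
sectors = {
    "entertainment": 62.7,  # percent
    "fashion": 21.7,
    "sport": 4.3,
    "business": 4.1,
    "politics": 4.0,
}
covered = sum(sectors.values())
# The listed sectors account for 96.8%, leaving roughly 3.2% unattributed.
print(f"Listed sectors cover {covered:.1f}% of targets")
print(f"Implied 'other' share: {100 - covered:.1f}%")
```

Nothing here goes beyond the article's own numbers; it only makes the unreported remainder visible.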
Debating the ethics of deepfakes https://www.orfonline.org/expert-speak/debating-the-ethics-of-deepfakes/
Deepfake videos - videos that were manipulated to replace a person with someone else's likeness - are effective in influencing people to think more negatively about that person. And even relatively bad deepfakes can be very convincing, according to a study by the University of Amsterdam (UvA), NOS reports. With a deepfake video, you can for example record a video of yourself making sexist statements, and then impose a celebrity's likeness over you in the video so that it looks like the celebrity said those things. Or you can take a video of the celebrity and manipulate it to make them say things they never said. For their study, the Amsterdam researchers created a deepfake video of former CDA leader Sybrand Buma and showed it to 278 people. The group that saw the deepfake video thought more negatively about Buma afterwards than the group that watched the original video of him. Attitudes towards the CDA as a whole remained almost the same in both groups. Only 8 of the 140 people who saw the deepfake video raised doubts about its authenticity, UvA researcher Tom Dobber said to NOS. "And this one was not even perfect, you could see the lips moving crazily every now and then. It is remarkable that people fell for it completely." https://nltimes.nl/2020/08/24/deepfakes-convincing-effective-influencing-people-amsterdam-researchers-found +https://www.miragenews.com/would-you-fall-for-a-fake-video-uva-research-suggests-you-might/
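The study's headline numbers can be restated in a few lines of arithmetic (the 278, 140 and 8 figures come from the report above; the size of the group shown the original video is inferred from the totals, not stated explicitly):

```python
# Figures reported for the UvA deepfake experiment (quoted above).
total_participants = 278   # shown either the real or the fake Buma video
deepfake_viewers = 140     # shown the deepfake
doubters = 8               # deepfake viewers who questioned its authenticity

original_viewers = total_participants - deepfake_viewers  # inferred split
doubt_rate = doubters / deepfake_viewers
print(f"Inferred size of the control group: {original_viewers}")
print(f"Deepfake viewers who raised doubts: {doubt_rate:.1%}")
```

In other words, fewer than 6 percent of viewers questioned an admittedly imperfect fake.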
Pro-Israel news agencies published editorials credited to nonexistent columnists, using the technique known as “deepfake”, which replaces people’s faces in images, in an operation described as a “new frontier of disinformation”. Details of the “hyper-realistic forgery” were revealed by the Reuters agency this week. The international agency’s report finally unveiled the mystery surrounding the identity of the Zionist writer Oliver Taylor. https://www.monitordooriente.com/20200720-editoriais-de-agencias-sionistas-sao-assinados-por-deepfakes-uma-nova-fronteira-da-desinformacao/ Earlier this month, the news outlet The Daily Beast exposed 46 conservative news networks, including some affiliated with the Jewish community, that used nineteen nonexistent authors to spread “scoops” about the Middle East, as part of a massive propaganda campaign whose beginnings appear to date back to July 2019.
Jun20
Remote voting using video is best option, tech, cybersecurity experts tell PROC. 'It’s a lot easier to match a person’s face and voice, deepfakes notwithstanding, whereas when you are voting through an app, what the system is recording is not that you voted, but rather that somebody with your credentials voted,' says Aleksander Essex. https://www.hilltimes.com/2020/06/11/remote-voting-using-video-is-best-option-tech-cybersecurity-experts-tell-proc/252464
Last month, a political group in Belgium released a deepfake video of the Belgian prime minister, giving a speech that linked the Covid-19 outbreak to environmental damage and called for drastic action on climate change. At least some viewers believed the speech was real. The mere possibility that a video might be a deepfake can already sow confusion and facilitate political deception, regardless of whether the technology was actually used. The most dramatic example of this comes from Gabon, a small country in central Africa. In late 2018, Gabon’s president, Ali Bongo, had not been seen in public for months. There were rumours that he was no longer healthy enough for office, or even that he had died. In an attempt to calm these concerns and reaffirm Bongo’s leadership of the country, his government announced that he would give a nationally televised address on New Year’s Day. https://forbes.com.br/colunas/2020/06/por-que-o-mundo-nao-esta-preparado-para-os-estragos-que-as-deepfakes-podem-causar/
May20 Not as serious as once assumed?
Tim Hwang, director of the Harvard-MIT Ethics and Governance of AI Initiative, agreed that deepfakes haven't proven as dangerous as once feared, although for different reasons. Hwang argued that users of "active measures" (efforts to sow misinformation and influence public opinion) can be much more effective with cheaper, simpler and just as devious types of fakes - mis-captioning a photo or turning it into a meme, for example. https://www.npr.org/2020/05/07/851689645/why-fake-video-audio-may-not-be-as-powerful-in-spreading-disinformation-as-feare?t=1589108903368
Mar20 New Zealand SEVERAL CASES https://www.stuff.co.nz/technology/digital-living/120397261/deepfakes-new-zealand-experts-on-how-face-swap-could-turn-sinister
May20 Last month Sophie Wilmès, the prime minister of Belgium, appeared in an online video to tell her audience that the COVID-19 pandemic was linked to the “exploitation and destruction by humans of our natural environment.” Whether or not these two existential crises are connected, the fact is that Wilmès said no such thing. Produced by an organization of climate change activists, the video was actually a deepfake, or a form of fake media created using deep learning. Deepfakes are yet another way to spread misinformation – as if there wasn’t enough fake news about the pandemic already. https://journalism.design/les-deepfakes/extinction-rebellion-sempare-des-deepfakes/ https://viterbischool.usc.edu/news/2020/05/fooling-deepfake-detectors/
https://www.extinctionrebellion.be/en/tell-the-truth