Sunday, March 22, 2020

Consequences (disinformation in society and politics; other consequences [SEE ANOTHER BELOW, repeats]; journalism, manipulation)

out23

It’s Nothing but a Deepfake! The Effects of Misinformation and Deepfake Labels Delegitimizing an Authentic Political Speech

Michael Hameleers, Franziska Marquart

Abstract


Mis- and disinformation labels are increasingly weaponized and used as delegitimizing accusations targeted at mainstream media and political opponents. To better understand how such accusations can affect the credibility of real information and policy preferences, we conducted a two-wave panel experiment (Nwave2 = 788) to assess the longer-term effect of delegitimizing labels targeting an authentic video message. We find that exposure to an accusation of misinformation or disinformation lowered the perceived credibility of the video but did not affect policy preferences related to the content of the video. Furthermore, more extreme disinformation accusations were perceived as less credible than milder misinformation labels. The effects lasted over a period of three days and still occurred when there was a delay in the label attribution. These findings indicate that while mis- and disinformation labels might make authentic content less credible, they are themselves not always deemed credible and are less likely to change substantive policy preferences.

https://ijoc.org/index.php/ijoc/article/view/20777
~


jun23
Reasons To Doubt Political Deepfakes
Although deepfakes are conventionally regarded as dangerous, we know little about how deepfakes are perceived, and which potential motivations drive doubt in the believability of deepfakes versus authentic videos. To better understand the audience's perceptions of deepfakes, we ran an online experiment (N=829) in which participants were randomly exposed to a politician's textual or audio-visual authentic speech or a textual or audio-visual manipulation (a deepfake) where this politician's speech was forged to include a radical right-wing populist narrative. In response to both textual disinformation and deepfakes, we inductively assessed (1) the perceived motivations for expressed doubt and uncertainty in response to disinformation and (2) the accuracy of such judgments. Key findings show that participants have a hard time distinguishing a deepfake from a related authentic video, and that the deepfake's content distance from reality is a more likely cause for doubt than perceived technological glitches. Together, we offer new insights into news users' abilities to distinguish deepfakes from authentic news, which may inform (targeted) media literacy interventions promoting accurate verification skills among the audience.
DOI: 10.1177/02673231231184703


set22
BRAZIL
In recent weeks, content from Jornal Nacional was doctored in this way and shared intensively on social networks such as WhatsApp to misinform voters. Some of the most widely shared clips feature doctored audio and video claiming that the candidate for re-election, Jair Bolsonaro, was leading the Ipec voting-intention poll, which is false. The poll showed the opposite of what the doctored video claims.
https://g1.globo.com/jornal-nacional/noticia/2022/09/19/deepfake-conteudo-do-jornal-nacional-e-adulterado-para-desinformar-os-eleitores.ghtml




jun22

Report on Deep Fakes and National Security

https://news.usni.org/2022/06/08/report-on-deep-fakes-and-national-security



abr22

Deepfakes and fake news pose a growing threat to democracy, experts warn

https://phys.org/news/2022-04-deepfakes-fake-news-pose-threat.html 

fev22
While troops remain poised in Russia and Belarus, staged near the Ukrainian border, those hoping to avoid a war see hopeful signs in Russia's apparent continuing openness to diplomacy. But tensions remain high, and the US warns that Russia is preparing deep fake provocations to supply a casus belli.
https://thecyberwire.com/stories/7c7c8daee6eb4bbba9a512bdec8bd680/deep-fakes-as-a-bogus-casus-belli


dez21


The actual financial scam Grothaus describes involves fraudsters who used a voice recording of a CEO to call his accountant and get him to wire them $243,000. Embarrassing – but also only possible because of a pretty gullible interlocutor. The political case study he describes is of an amateur edit of a video that made it look as if Hollywood star Dwayne Johnson was humiliating Hillary Clinton in the run-up to the 2016 election. The video went viral in Magaland, but not because its authenticity was particularly persuasive. It just fitted with people’s existing biases.
That’s the thing about “disinformation”: it’s not really geared towards changing people’s minds. It’s about feeding them what they want to consume anyway. The quality of the deception is not necessarily the crucial factor. Will deepfakes change this? Will their mere existence destroy any vestiges of trust in a shared reality? Potentially. But one thing we do know is that the discourse that has grown up around this issue, rather than being something radically new, is part of a much older dynamic. https://www.theguardian.com/books/2021/dec/16/trust-no-one-inside-the-world-of-deepfakes-by-michael-grothaus-review-disinformations-superweapon

nov21
MALAYSIA
In June 2019, a grainy video proliferated throughout Malaysian social media channels that allegedly showed the country’s Economic Affairs Minister, Mohamed Azmin Ali, having sex with a younger staffer named Muhammad Haziq Abdul Aziz. Although Azmin insisted that the video was fake and part of a “nefarious plot” to derail his political career, Abdul Aziz proceeded to post a video on Facebook ‘confessing’ that he was the man in the video and calling for an investigation into Azmin. The ensuing controversy threw the country into uproar. Azmin kept his job, though, after the prime minister declared that the video was likely a deepfake—a claim several experts have since disputed. https://brownpoliticalreview.org/2021/11/hunters-laptop-deepfakes-and-the-arbitration-of-truth/


NOV21
ARE THEY NOT AS DANGEROUS AS ONCE THOUGHT?
Researchers from the Massachusetts Institute of Technology (MIT) have put out a new report investigating whether political video clips might be more persuasive than their textual counterparts, and found the answer is... not really. (...) To gauge how effective this tech would be at tricking anyone, the MIT team conducted two sets of studies, involving close to 7,600 participants total from around the U.S. Across both studies, these participants were split into three different groups. In some cases, the first was asked to watch a randomly selected “politically persuasive” political ad (you can see examples of what they used here), or a popular political clip on covid-19 that was sourced from YouTube. The second group was given a transcription of those randomly selected ads and clips, and the third group was given, well, nothing at all since they were acting as the control group. The result? “Overall, we find that individuals are more likely to believe an event occurred when it is presented in video versus textual form,” the study reads. In other words, the results confirmed that, yes, seeing was believing, as far as the participants were concerned. But when the researchers dug into the numbers around persuasion, the difference between the two mediums was barely noticeable, if at all. LINK + https://www.pnas.org/content/118/47/e2114388118 + https://thenextweb.com/news/mit-research-shows-sad-reason-why-deepfakes-pose-little-threat-us-politics 

nov21

Netherlands politicians just got a first-hand lesson about the dangers of deepfake videos. According to NL Times and De Volkskrant, the Dutch parliament's foreign affairs committee was fooled into holding a video call with someone using deepfake tech to impersonate Leonid Volkov, Russian opposition leader Alexei Navalny's chief of staff. The perpetrator hasn't been named, but this wouldn't be the first incident. The same impostor had conversations with Latvian and Ukrainian politicians, and approached political figures in Estonia, Lithuania and the UK. https://www.engadget.com/netherlands-deepfake-video-chat-navalny-212606049.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZGFya3JlYWRpbmcuY29tLw&guce_referrer_sig=AQAAAGtJPuVXkT6dt5uj-AXKfuESgeUS32XYSnhlheH4c1fP5q6sRc9GVLhsD_201gg2xW1UXAIyvwO2kV3Niwdjv2Kzl6M3Eu_LEubjvlOE_ojwlzUbiis6ugKOajJa0v1GrV0LrAWarzdHnLnCcdxtj20dh36ICUUAcyuJ3YqQ3F5_


mar21
In today’s virtually interconnected world, it is now cheaper, faster and less risky for malign foreign entities to conduct non-kinetic subversion of adversaries. This commentary aims to promote debate about whether digitisation has reshaped foreign interference or whether changes to the conduct of covert subversion operations simply mask what at its core is an unchanged and perennial fixture of geopolitics. It calls into question the concept of foreign interference in a world wherein the boundaries of foreign and domestic are beginning to dissolve in the digital theatre of battle.
https://www.tandfonline.com/doi/abs/10.1080/10357718.2021.1909534?journalCode=caji20


fev21
Estonian Intelligence: Russians will develop deepfake threats
https://www.euractiv.com/section/cybersecurity/news/estonian-intelligence-russians-will-develop-deepfake-threats/

jan21

New research finds that misinformation consumed in video format is no more effective than misinformation in textual headlines or audio recordings: the persuasiveness of deepfakes is comparable to that of these other media formats, text and audio.

Seeing is not necessarily believing these days. Based on these findings, deepfakes do not facilitate the dissemination of misinformation more than false texts or audio content. However, like all misinformation, deepfakes are dangerous to democracy and media trust as a whole. The best way to combat misinformation and deepfakes is through education. Informed digital citizens are the best defense. https://digitalcontentnext.org/blog/2021/01/25/how-powerful-are-deepfakes-in-spreading-misinformation/


dez20
One of the most concerning consequences of disinformation for democracy is that it has the potential to create a crisis of legitimacy. Disinformation can reduce the legitimacy of policy outputs, election outcomes, government, democratic processes, and democracy as a belief-system through:
Tainting the preference formation phase of decision-making, potentially generating a trust deficit, or boosting an existing one, not just in government and governance processes, but also in fellow members of the polity. This may jeopardise crucial ingredients of democracy.
Stimulating widespread distrust of the veracity of information, leading to a ‘post-truth’ order where either anything goes, or correct information is disbelieved, resulting in political apathy.
Undermining political culture more broadly by corroding collective belief in democracy as an ideology https://blogs.lse.ac.uk/politicsandpolicy/digitisation-democracy/ 

?20

This growing phenomenon has been used in political scenarios to misinform the public on various debates [2]. One example is the use of a deepfake video by an Italian satirical TV show against the former Prime Minister of Italy, Matteo Renzi. The video, shared on social media, depicted him insulting fellow politicians. As the video spread online, many individuals started to believe it was authentic, which led to public outrage. [The Emerging Threats of Deepfake Attacks and Countermeasures, Shadrack Awah Buo, 10.13140/RG.2.2.23089.81762]

dez20 [was it a cheapfake (the Pelosi video) that raised the alarm about the dangers of deepfakes?]

Despite bipartisan calls for the video to be taken down, a Facebook spokesperson confirmed that the video would not be removed because the platform does not have policies that dictate the removal of false information [20]. This has prompted governments around the world to look for ways to regulate the use of deepfake technology (DT) [7]. The Emerging Threats of Deepfake Attacks and Countermeasures; Shadrack Awah Buo 10.13140/RG.2.2.23089.81762

Nov20

There are now businesses that sell fake people. On the website Generated.Photos, you can buy a “unique, worry-free” fake person for $2.99, or 1,000 people for $1,000. If you just need a couple of fake people — for characters in a video game, or to make your company website appear more diverse — you can get their photos for free on ThisPersonDoesNotExist.com. Adjust their likeness as needed; make them old or young or the ethnicity of your choosing. If you want your fake person animated, a company called Rosebud.AI can do that and can even make them talk. LINK + The New York Times this week did something dangerous for its reputation as the nation’s paper of record. Its staff played with a deepfake algorithm, and posted online hundreds of photorealistic images of non-existent people. For those who fear democracy being subverted by the media, the article will only confirm their conspiracy theories. The original biometric — a face — can be created in as much uniqueness and diversity as nature can, and with much less effort. LINK
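Sites of this kind are generally understood to rely on generative adversarial networks (GANs); ThisPersonDoesNotExist.com, for instance, has been reported to use StyleGAN. As a purely illustrative sketch, and not the actual model behind any of these services, a minimal DCGAN-style generator in PyTorch shows the basic idea: a random latent vector is progressively upsampled through transposed convolutions into a synthetic image.

# Illustrative sketch only: a minimal DCGAN-style generator in PyTorch.
# This is NOT the model used by Generated.Photos or ThisPersonDoesNotExist.com;
# it only shows how a random latent code can be mapped to an image.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, latent_dim: int = 100, feature_maps: int = 64, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            # latent vector (latent_dim x 1 x 1) -> 4x4 feature map
            nn.ConvTranspose2d(latent_dim, feature_maps * 8, 4, 1, 0, bias=False),
            nn.BatchNorm2d(feature_maps * 8),
            nn.ReLU(True),
            # 4x4 -> 8x8
            nn.ConvTranspose2d(feature_maps * 8, feature_maps * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 4),
            nn.ReLU(True),
            # 8x8 -> 16x16
            nn.ConvTranspose2d(feature_maps * 4, feature_maps * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps * 2),
            nn.ReLU(True),
            # 16x16 -> 32x32
            nn.ConvTranspose2d(feature_maps * 2, feature_maps, 4, 2, 1, bias=False),
            nn.BatchNorm2d(feature_maps),
            nn.ReLU(True),
            # 32x32 -> 64x64 RGB image with values in [-1, 1]
            nn.ConvTranspose2d(feature_maps, channels, 4, 2, 1, bias=False),
            nn.Tanh(),
        )

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        return self.net(z)

if __name__ == "__main__":
    g = Generator()
    z = torch.randn(1, 100, 1, 1)   # random latent code
    fake_image = g(z)               # untrained output is noise, not a face
    print(fake_image.shape)         # torch.Size([1, 3, 64, 64])

An untrained generator like this produces noise; commercial face-generation services train such networks (adversarially, against a discriminator) on large face datasets so that arbitrary latent codes map to photorealistic, non-existent people.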


nov20

The fake audio of González Laya and Bin Laden going viral on WhatsApp: this is how a voice 'fake' is made. https://www.elespanol.com/omicrono/tecnologia/20201112/falso-audio-gonzalez-laya-bin-laden-whatsapp/535447162_0.html

nov20

GENERAL: As Jane Lytvynenko, a senior reporter at BuzzFeed News, says in Episode 2: “Videos are easily taken out of context and miscaptioned, which is a very big problem for misinformation. And part of the reason why they’re a much bigger problem than something like a written article is that people very much lean towards the seeing-is-believing gut instinct.”

https://immerse.news/prepare-dont-panic-for-deepfakes-c77f9f683f30


nov20 (resurrecting the dead)

On Thursday, shortly before Halloween and in the wake of a much derided, viral tweet exposing her immense wealth and privilege, Kim Kardashian West reacted to Kanye West’s surprise gift to her: a hologram of her dead father. In an Instagram post, Kardashian West wrote, “For my birthday, Kanye got me the most thoughtful gift of a lifetime. A special surprise from heaven. A hologram of my dad. It is so lifelike and we watched it over and over, filled with lots of tears and emotion. I can’t even describe what this meant to me and my sisters, my brother, my mom and closest friends to experience together. Thank you so much Kanye for this memory that will last a lifetime.” The Robert Kardashian hologram, which also relied on deepfake technology, delivered a special 40th birthday message to his daughter and also repeatedly dubbed Kanye West a genius. https://slate.com/technology/2020/11/robert-kardashian-joaquin-oliver-deepfakes-death.html


OUT20 (warning before it happens...)

The ruling Georgian Dream party says that the ‘radical opposition’ which has ‘zero chance of winning the parliamentary elections’ has plans to release deepfakes on election day, on Saturday. Kobakhidze says that the videos, which may depict members of the ruling party, including party founder Bidzina Ivanishvili, will aim to mislead voters. https://agenda.ge/en/news/2020/3319 


out20/DANGERS:


Deepfakes have 'already started WW3' online in dangerous 'corrosion of reality'. The nature of warfare is changing rapidly, and most of us aren't ready for the chaos and doubt created when artificial intelligence (AI) can manipulate video to make any politician say anything. https://www.dailystar.co.uk/news/latest-news/deepfakes-already-started-ww3-online-22858256

Deepfake a clear and present danger to democracy https://www.timeslive.co.za/sunday-times/opinion-and-analysis/2020-10-18-deepfake-a-clear-and-present-danger-to-democracy/

ou20

Deepfakes can help authoritarian ideas flourish even within a democracy, enable authoritarian leaders to thrive, and be used to justify oppression and disenfranchisement of citizens, Ashish Jaiman, director of technology and operations at Microsoft, said on Thursday. “Authoritarian regimes can also use deepfakes to increase populism and consolidating power”, and they can also be very effective for nation states in sowing the seeds of polarisation, amplifying division in society, and suppressing dissent, Jaiman added, while speaking at the ORF-organised cybersecurity conference CyFy. Jaiman also pointed out that deepfakes can be used to make pornographic videos, and that the targets of such efforts will “exclusively be women”. How deepfakes can affect democracies https://www.medianama.com/2020/10/223-deepfakes-impact-democracies/


out20

China is likely to rely on artificial intelligence-generated disinformation content, such as deep fake and deep voice videos, as part of its psychological and public opinion warfare across the world, a new study by United States-based think tank Atlantic Council says. The Atlantic Council's Digital Forensics Lab (DFRLab) has published a new study analysing Chinese disinformation campaigns and recent trends which suggest that despite high success among the domestic audience base, the Chinese Communist Party (CCP) struggles to drive its message home on the foreign front. https://www.indiatoday.in/world/story/ai-driven-deep-fakes-next-big-tool-chinese-disinformation-campaign-study-1730903-2020-10-12 


out20 rewriting history

This new technology doesn’t just threaten our present discourse. Soon, AI-generated synthetic media may reach into the past and sow doubt about the authenticity of historical events, potentially destroying the credibility of records left behind in our present digital era.

https://www.fastcompany.com/90549441/how-to-prevent-deepfakes


The existence of deepfake videos means that sometimes real videos are taken as fakes. In January 2019, in the African nation of Gabon, a video of President Ali Bongo, who had not made a public appearance for several months, triggered a coup. The military believed the video was a fake, although the president later confirmed it was real. https://www.scmp.com/comment/opinion/article/3103331/us-elections-violence-india-threat-deepfakes-only-growing In other words, being true is no longer enough to be believed; ten years ago this would have been unthinkable, yet today we doubt what we see and question even what is true; with what consequences for our life in society?


set20

This year, simply reaching the polls to cast a vote is complicated. Foreign agents, bots, inaccurate tweets and White House attacks on the validity of elections can confuse voters. Cyberattacks can reach voters by email and phone, sending misleading information about polling places or mail-in deadlines, creating long lines at polling locations or shutting down polls in targeted communities. The risk of COVID-19 deters people from going to the polls, as the Spanish Flu did in elections a century ago. But what makes this year's election truly unique is the widespread use of mail-in ballots. "Today, forces are at work to make people not participate in the election by questioning the integrity of elections and saying the system is broken," said Christina Bellantoni, professor at the USC Annenberg School for Communication and Journalism and director of the Annenberg Media Center. "With so few people undecided about the upcoming presidential election, influencing just a handful of people on the margins can sway an election." https://phys.org/news/2020-09-deepfakes-fake-news-array-aim.html

ago20

A reduction of trust in news circulating online. In addition to changing our perceptions, tarnishing the reputation of celebrities, targeting victims for blackmail, and influencing voting behavior, deepfakes are now eroding trust in news circulating online. Since deepfake technology is improving faster than many believed, the “realness” of such fake content is becoming more and more convincing.

Many deepfake videos can not only manipulate facial expressions; using generative neural networks, they can now also reproduce a myriad of movements, including head rotation, eye gaze and blinking, and full 3D head positions. Never one to miss an opportunity, Google has claimed that it is working on a system capable of detecting deepfake videos. For this, it is creating deepfakes itself, as stated in a blog post on 24th September. It has created a large dataset of 363 real videos of consenting actors and an additional 3,068 falsified videos that will be used by researchers at the Technical University of Munich. Regardless, deepfakes have already made their mark and undermined the trust factor of news circulating online. The most targeted sectors for deepfakes, by percentage, are entertainment (62.7 percent), fashion (21.7 percent), sport (4.3 percent), business (4.1 percent), and politics (4.0 percent). Currently, the most targeted countries include the USA and UK (together making up about 61 percent of those targeted), South Korea, India, and Japan. Of course, these numbers will keep increasing over time. https://www.itproportal.com/features/how-deepfakes-make-us-question-everything-in-2020/


ago20

Deepfake videos - videos that were manipulated to replace a person with someone else's likeness - are effective in influencing people to think more negatively about that person. And even relatively bad deepfakes can be very convincing, according to a study by the University of Amsterdam (UvA), NOS reports. With a deepfake video, you can for example record a video of you saying sexist statements, and then impose a celebrity's likeness over you in the video so that it looks like the celebrity said those things. Or you can take a video of the celebrity, and manipulate it to make them say things they never said. For their study, the Amsterdam researchers created a deepfake video of former CDA leader Sybrand Buma and showed it to 278 people. The group that saw the deepfake video thought more negatively about Buma afterwards than the group that watched the original video of him. The attitudes towards the CDA as a whole remained almost the same in both situations. Only 8 of the 140 people who saw the deepfake video raised doubts about its authenticity, UvA researcher Tom Dobber said to NOS. "And this one was not even perfect, you could see the lips moving crazily every now and then. It is remarkable that people fell for it completely." https://nltimes.nl/2020/08/24/deepfakes-convincing-effective-influencing-people-amsterdam-researchers-found +https://www.miragenews.com/would-you-fall-for-a-fake-video-uva-research-suggests-you-might/

jul20 ???

Pro-Israel news agencies published op-eds attributed to non-existent writers, using the technique known as “deepfake”, which replaces people's faces in images, in what has been described as a “new frontier of disinformation”. Details of the “hyper-realistic forgery” were revealed by Reuters this week. The international agency's report finally unveiled the mystery surrounding the identity of the Zionist writer Oliver Taylor. https://www.monitordooriente.com/20200720-editoriais-de-agencias-sionistas-sao-assinados-por-deepfakes-uma-nova-fronteira-da-desinformacao/ Earlier this month, the news site The Daily Beast exposed 46 conservative news networks, including some affiliated with the Jewish community, that used nineteen non-existent authors to spread “scoops” about the Middle East, as part of a massive propaganda campaign that appears to have begun in July 2019.


jul20
As history records, the first lunar landing was a total success and the crew returned to Earth safely, despite a new recording showing Nixon reading the contingency words prepared for him by speechwriter William Safire on July 18, 1969. The video, released by MIT's Center for Advanced Virtuality on Monday — the 51st anniversary of the Apollo 11 moon landing — is "fake news," purposely. "Media misinformation is a longstanding phenomenon, but, exacerbated by deepfake technologies and the ease of disseminating content online, it's become a crucial issue of our time," said D. Fox Harrell, professor of digital media and of artificial intelligence at MIT and director of the Center for Advanced Virtuality, part of MIT Open Learning, in a statement. http://www.collectspace.com/news/news-072020a-moon-disaster-speech-mit-deepfake.html


Jun20
Remote voting using video is best option, tech, cybersecurity experts tell PROC. 'It’s a lot easier to match a person’s face and voice, deepfakes notwithstanding, whereas when you are voting through an app, what the system is recording is not that you voted, but rather that somebody with your credentials voted,' says Aleksander Essex. https://www.hilltimes.com/2020/06/11/remote-voting-using-video-is-best-option-tech-cybersecurity-experts-tell-proc/252464

jun20

Last month, a political group in Belgium released a deepfake video of the Belgian prime minister giving a speech that linked the Covid-19 outbreak to environmental damage and called for drastic action on climate change. At least some viewers believed the speech was real. The mere possibility that a video might be a deepfake can already create confusion and facilitate political deception, regardless of whether the technology was actually used. The most dramatic example of this comes from Gabon, a small country in central Africa. In late 2018, Gabon's president, Ali Bongo, had not been seen in public for months. There were rumours that he was no longer healthy enough to hold office, or even that he had died. In an attempt to calm these concerns and reassert Bongo's leadership over the country, his government announced that he would give a nationally televised address on New Year's Day. https://forbes.com.br/colunas/2020/06/por-que-o-mundo-nao-esta-preparado-para-os-estragos-que-as-deepfakes-podem-causar/


Mai20 Not as serious as once assumed?
Tim Hwang, director of the Harvard-MIT Ethics and Governance of AI Initiative, agreed that deepfakes haven't proven as dangerous as once feared, although for different reasons. Hwang argued that users of "active measures" (efforts to sow misinformation and influence public opinion) can be much more effective with cheaper, simpler and just as devious types of fakes — mis-captioning a photo or turning it into a meme, for example. https://www.npr.org/2020/05/07/851689645/why-fake-video-audio-may-not-be-as-powerful-in-spreading-disinformation-as-feare?t=1589108903368


Mar20 New Zealand, SEVERAL CASES https://www.stuff.co.nz/technology/digital-living/120397261/deepfakes-new-zealand-experts-on-how-face-swap-could-turn-sinister

Mai20 Last month Sophie Wilmès, the prime minister of Belgium, appeared in an online video to tell her audience that the COVID-19 pandemic was linked to the “exploitation and destruction by humans of our natural environment.” Whether or not these two existential crises are connected, the fact is that Wilmès said no such thing. Produced by an organization of climate change activists, the video was actually a deepfake, or a form of fake media created using deep learning. Deepfakes are yet another way to spread misinformation – as if there wasn’t enough fake news about the pandemic already. https://journalism.design/les-deepfakes/extinction-rebellion-sempare-des-deepfakes/ https://viterbischool.usc.edu/news/2020/05/fooling-deepfake-detectors/
https://www.extinctionrebellion.be/en/tell-the-truth
