Sunday, March 22, 2020

Consequences (disinformation in society and politics; other consequences [SEE OTHER BELOW, repeats], journalism, manipulation)

out23

It’s Nothing but a Deepfake! The Effects of Misinformation and Deepfake Labels Delegitimizing an Authentic Political Speech

Michael Hameleers, Franziska Marquart

Abstract


Mis- and disinformation labels are increasingly weaponized and used as delegitimizing accusations targeted at mainstream media and political opponents. To better understand how such accusations can affect the credibility of real information and policy preferences, we conducted a two-wave panel experiment (Nwave2 = 788) to assess the longer-term effect of delegitimizing labels targeting an authentic video message. We find that exposure to an accusation of misinformation or disinformation lowered the perceived credibility of the video but did not affect policy preferences related to the content of the video. Furthermore, more extreme disinformation accusations were perceived as less credible than milder misinformation labels. The effects lasted over a period of three days and still occurred when there was a delay in the label attribution. These findings indicate that while mis- and disinformation labels might make authentic content less credible, they are themselves not always deemed credible and are less likely to change substantive policy preferences.

https://ijoc.org/index.php/ijoc/article/view/20777
~


jun23
Reasons To Doubt Political Deepfakes
Although deepfakes are conventionally regarded as dangerous, we know little about how deepfakes
are perceived, and which potential motivations drive doubt in the believability of deepfakes
versus authentic videos. To better understand the audience’s perceptions of deepfakes, we ran an
online experiment (N=829) in which participants were randomly exposed to a politician’s textual
or audio-visual authentic speech or a textual or audio-visual manipulation (a deepfake) where this
politician’s speech was forged to include a radical right-wing populist narrative. In response to
both textual disinformation and deepfakes, we inductively assessed (1) the perceived motivations
for expressed doubt and uncertainty in response to disinformation and (2) the accuracy of such
judgments. Key findings show that participants have a hard time distinguishing a deepfake from a
related authentic video, and that the deepfake’s content distance from reality is a more likely cause
for doubt than perceived technological glitches. Together, we offer new insights into news users’
abilities to distinguish deepfakes from authentic news, which may inform (targeted) media literacy
interventions promoting accurate verification skills among the audience.
DOI: 10.1177/02673231231184703


set22
BRAZIL
In recent weeks, content from Jornal Nacional was doctored in this way and shared intensely on social networks such as WhatsApp to misinform voters. Some of the most widely shared clips show audio and video doctored to claim that the candidate for re-election, Jair Bolsonaro, was leading the Ipec voting-intention poll, which is false. The poll showed the opposite of the doctored video.
https://g1.globo.com/jornal-nacional/noticia/2022/09/19/deepfake-conteudo-do-jornal-nacional-e-adulterado-para-desinformar-os-eleitores.ghtml




jun22

Report on Deep Fakes and National Security

https://news.usni.org/2022/06/08/report-on-deep-fakes-and-national-security



abr22

Deepfakes and fake news pose a growing threat to democracy, experts warn

https://phys.org/news/2022-04-deepfakes-fake-news-pose-threat.html 

fev22
While troops remain poised in Russia and Belarus, staged near the Ukrainian border, those hoping to avoid a war see hopeful signs in Russia's apparent continuing openness to diplomacy. But tensions remain high, and the US warns that Russia is preparing deep fake provocations to supply a casus belli.
https://thecyberwire.com/stories/7c7c8daee6eb4bbba9a512bdec8bd680/deep-fakes-as-a-bogus-casus-belli


dez21


The actual financial scam Grothaus describes involves fraudsters who used a voice recording of a CEO to call his accountant and get him to wire them $243,000. Embarrassing – but also only possible because of a pretty gullible interlocutor. The political case study he describes is of an amateur edit of a video that made it look as if Hollywood star Dwayne Johnson was humiliating Hillary Clinton in the run-up to the 2016 election. The video went viral in Magaland, but not because its authenticity was particularly persuasive. It just fitted with people’s existing biases.
That’s the thing about “disinformation”: it’s not really geared towards changing people’s minds. It’s about feeding them what they want to consume anyway. The quality of the deception is not necessarily the crucial factor. Will deepfakes change this? Will their mere existence destroy any vestiges of trust in a shared reality? Potentially. But one thing we do know is that the discourse that has grown up around this issue, rather than being something radically new, is part of a much older dynamic.
https://www.theguardian.com/books/2021/dec/16/trust-no-one-inside-the-world-of-deepfakes-by-michael-grothaus-review-disinformations-superweapon

nov21
MALAYSIA
In June 2019, a grainy video proliferated throughout Malaysian social media channels that allegedly showed the country’s Economic Affairs Minister, Mohamed Azmin Ali, having sex with a younger staffer named Muhammad Haziq Abdul Aziz. Although Azmin insisted that the video was fake and part of a “nefarious plot” to derail his political career, Abdul Aziz proceeded to post a video on Facebook ‘confessing’ that he was the man in the video and calling for an investigation into Azmin. The ensuing controversy threw the country into uproar. Azmin kept his job, though, after the prime minister declared that the video was likely a deepfake—a claim several experts have since disputed.
https://brownpoliticalreview.org/2021/11/hunters-laptop-deepfakes-and-the-arbitration-of-truth/


NOV21
NOT AS DANGEROUS AS ONCE THOUGHT?
Researchers from the Massachusetts Institute of Technology (MIT) have put out a new report investigating whether political video clips might be more persuasive than their textual counterparts, and found the answer is... not really. (...) To gauge how effective this tech would be at tricking anyone, the MIT team conducted two sets of studies, involving close to 7,600 participants total from around the U.S. Across both studies, these participants were split into three different groups. In some cases, the first was asked to watch a randomly selected “politically persuasive” political ad (you can see examples of what they used here), or a popular political clip on covid-19 that was sourced from YouTube. The second group was given a transcription of those randomly selected ads and clips, and the third group was given, well, nothing at all since they were acting as the control group. The result? “Overall, we find that individuals are more likely to believe an event occurred when it is presented in video versus textual form,” the study reads. In other words, the results confirmed that, yes, seeing was believing, as far as the participants were concerned. But when the researchers dug into the numbers around persuasion, the difference between the two mediums was barely noticeable, if at all. LINK + https://www.pnas.org/content/118/47/e2114388118 + https://thenextweb.com/news/mit-research-shows-sad-reason-why-deepfakes-pose-little-threat-us-politics 

nov21

Netherlands politicians just got a first-hand lesson about the dangers of deepfake videos. According to NL Times and De Volkskrant, the Dutch parliament's foreign affairs committee was fooled into holding a video call with someone using deepfake tech to impersonate Leonid Volkov (above), Russian opposition leader Alexei Navalny's chief of staff. The perpetrator hasn't been named, but this wouldn't be the first incident. The same impostor had conversations with Latvian and Ukrainian politicians, and approached political figures in Estonia, Lithuania and the UK. https://www.engadget.com/netherlands-deepfake-video-chat-navalny-212606049.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZGFya3JlYWRpbmcuY29tLw&guce_referrer_sig=AQAAAGtJPuVXkT6dt5uj-AXKfuESgeUS32XYSnhlheH4c1fP5q6sRc9GVLhsD_201gg2xW1UXAIyvwO2kV3Niwdjv2Kzl6M3Eu_LEubjvlOE_ojwlzUbiis6ugKOajJa0v1GrV0LrAWarzdHnLnCcdxtj20dh36ICUUAcyuJ3YqQ3F5_


mar21
In today’s virtually interconnected world, it is now cheaper, faster and less risky for malign foreign entities to conduct non-kinetic subversion of adversaries. This commentary aims to promote debate about whether digitisation has reshaped foreign interference or whether changes to the conduct of covert subversion operations simply mask what at its core is an unchanged and perennial fixture of geopolitics. It calls into question the concept of foreign interference in a world wherein the boundaries of foreign and domestic are beginning to dissolve in the digital theatre of battle.
https://www.tandfonline.com/doi/abs/10.1080/10357718.2021.1909534?journalCode=caji20


fev21
Estonian Intelligence: Russians will develop deepfake threats
https://www.euractiv.com/section/cybersecurity/news/estonian-intelligence-russians-will-develop-deepfake-threats/

jan21

New research finds that misinformation consumed in a video format is no more effective than misinformation in textual headlines or audio recordings. In other words, the persuasiveness of deepfakes is comparable to that of other media formats like text and audio.

Seeing is not necessarily believing these days. Based on these findings, deepfakes do not facilitate the dissemination of misinformation more than false texts or audio content. However, like all misinformation, deepfakes are dangerous to democracy and media trust as a whole. The best way to combat misinformation and deepfakes is through education. Informed digital citizens are the best defense. https://digitalcontentnext.org/blog/2021/01/25/how-powerful-are-deepfakes-in-spreading-misinformation/


dez20
One of the most concerning consequences of disinformation for democracy is that it has the potential to create a crisis of legitimacy. Disinformation can reduce the legitimacy of policy outputs, election outcomes, government, democratic processes, and democracy as a belief-system through:
Tainting the preference formation phase of decision-making, potentially generating a trust deficit, or boosting an existing one, not just in government and governance processes, but also in fellow members of the polity. This may jeopardise crucial ingredients of democracy.
Stimulating widespread distrust of the veracity of information, leading to a ‘post-truth’ order where either anything goes, or correct information is disbelieved, resulting in political apathy.
Undermining political culture more broadly by corroding collective belief in democracy as an ideology https://blogs.lse.ac.uk/politicsandpolicy/digitisation-democracy/ 

?20

This growing phenomenon has been used in political scenarios to misinform the public on various debates [2]. For instance, an Italian satirical TV show used a deepfake video of the former Prime Minister of Italy, Matteo Renzi. The video shared on social media depicted him insulting fellow politicians. As the video spread online, many individuals started to believe it was authentic, which led to public outrage. [The Emerging Threats of Deepfake Attacks and Countermeasures; Shadrack Awah Buo, DOI: 10.13140/RG.2.2.23089.81762]

dez20 [was it a cheapfake - the Pelosi video - that first raised the alarm about the dangers of deepfakes?]

Despite bipartisan calls for the video to be taken down, a Facebook spokesperson confirmed that the videos will not be removed because the platform does not have policies that dictate the removal of false information [20]. Therefore, this has prompted world governments to look for ways to regulate the use of DT [7]. The Emerging Threats of Deepfake Attacks and Countermeasures; Shadrack Awah Buo 10.13140/RG.2.2.23089.81762

Nov20

There are now businesses that sell fake people. On the website Generated.Photos, you can buy a “unique, worry-free” fake person for $2.99, or 1,000 people for $1,000. If you just need a couple of fake people — for characters in a video game, or to make your company website appear more diverse — you can get their photos for free on ThisPersonDoesNotExist.com. Adjust their likeness as needed; make them old or young or the ethnicity of your choosing. If you want your fake person animated, a company called Rosebud.AI can do that and can even make them talk. LINK + The New York Times this week did something dangerous for its reputation as the nation’s paper of record. Its staff played with a deepfake algorithm, and posted online hundreds of photorealistic images of non-existent people. For those who fear democracy being subverted by the media, the article will only confirm their conspiracy theories. The original biometric — a face — can be created in as much uniqueness and diversity as nature can, and with much less effort. LINK


nov20

The fake audio of González Laya and Bin Laden that went viral on WhatsApp: how a voice 'fake' is made. https://www.elespanol.com/omicrono/tecnologia/20201112/falso-audio-gonzalez-laya-bin-laden-whatsapp/535447162_0.html

nov20

GENERAL: As Jane Lytvynenko, a senior reporter at BuzzFeed News, says in Episode 2: “Videos are easily taken out of context and miscaptioned, which is a very big problem for misinformation. And part of the reason why they’re a much bigger problem than something like a written article is that people very much lean towards the seeing-is-believing gut instinct.”

https://immerse.news/prepare-dont-panic-for-deepfakes-c77f9f683f30


nov20 (resurrecting the dead)

On Thursday, shortly before Halloween and in the wake of a much derided, viral tweet exposing her immense wealth and privilege, Kim Kardashian West reacted to Kanye West’s surprise gift to her: a hologram of her dead father. In an Instagram post, Kardashian West wrote, “For my birthday, Kanye got me the most thoughtful gift of a lifetime. A special surprise from heaven. A hologram of my dad. It is so lifelike and we watched it over and over, filled with lots of tears and emotion. I can’t even describe what this meant to me and my sisters, my brother, my mom and closest friends to experience together. Thank you so much Kanye for this memory that will last a lifetime.” The Robert Kardashian hologram, which also relied on deepfake technology, delivered a special 40th birthday message to his daughter and also repeatedly dubbed Kanye West a genius. https://slate.com/technology/2020/11/robert-kardashian-joaquin-oliver-deepfakes-death.html


OUT20 (warning before it happens...)

The ruling Georgian Dream party says that the ‘radical opposition’ which has ‘zero chance of winning the parliamentary elections’ has plans to release deepfakes on election day, on Saturday. Kobakhidze says that the videos, which may depict members of the ruling party, including party founder Bidzina Ivanishvili, will aim to mislead voters. https://agenda.ge/en/news/2020/3319 


out20/DANGERS:


Deepfakes have 'already started WW3' online in a dangerous 'corrosion of reality'. The nature of warfare is changing rapidly, and most of us aren't ready for the chaos and doubt created when artificial intelligence (AI) can manipulate video to make any politician say anything. https://www.dailystar.co.uk/news/latest-news/deepfakes-already-started-ww3-online-22858256

Deepfake a clear and present danger to democracy https://www.timeslive.co.za/sunday-times/opinion-and-analysis/2020-10-18-deepfake-a-clear-and-present-danger-to-democracy/

out20

Deepfakes can help authoritarian ideas flourish even within a democracy, enable authoritarian leaders to thrive, and be used to justify oppression and disenfranchisement of citizens, Ashish Jaiman, director of technology and operations at Microsoft said on Thursday. “Authoritarian regimes can also use deepfakes to increase populism and consolidating power”, and they can also be very effective for nation states seeking to sow the seeds of polarisation, amplifying division in society, and suppressing dissent, Jaiman added, while speaking at the ORF-organised cybersecurity conference CyFy. Jaiman also pointed out that deepfakes can be used to make pornographic videos, and the target of such efforts will “exclusively be women”. How deepfakes can affect democracies https://www.medianama.com/2020/10/223-deepfakes-impact-democracies/


out20

China is likely to rely on artificial intelligence-generated disinformation content, such as deep fake and deep voice videos, as part of its psychological and public opinion warfare across the world, a new study by United States-based think tank Atlantic Council says. The Atlantic Council's Digital Forensics Lab (DFRLab) has published a new study analysing Chinese disinformation campaigns and recent trends which suggest that despite high success among the domestic audience base, the Chinese Communist Party (CCP) struggles to drive its message home on the foreign front. https://www.indiatoday.in/world/story/ai-driven-deep-fakes-next-big-tool-chinese-disinformation-campaign-study-1730903-2020-10-12 


out20 rewriting history

This new technology doesn’t just threaten our present discourse. Soon, AI-generated synthetic media may reach into the past and sow doubt about the authenticity of historical events, potentially destroying the credibility of records left behind in our present digital era.

https://www.fastcompany.com/90549441/how-to-prevent-deepfakes


“The existence of deepfake videos means that sometimes real videos are taken as fakes. In January 2019, in the African nation of Gabon, a video of President Ali Bongo, who had not made a public appearance for several months, triggered a coup. The military believed the video was a fake, although the president later confirmed it was real.” https://www.scmp.com/comment/opinion/article/3103331/us-elections-violence-india-threat-deepfakes-only-growing In other words, being true is no longer enough to be believed; ten years ago this would have been impossible, but today we doubt what we see and question even what is true; with what consequences for our life in society?


set20

This year, simply reaching the polls to cast a vote is complicated. Foreign agents, bots, inaccurate tweets and White House attacks on the validity of elections can confuse voters. Cyberattacks can reach voters by email and phone, sending misleading information about polling places or mail-in deadlines, creating long lines at polling locations or shutting down polls in targeted communities. The risk of COVID-19 deters people from going to the polls, as the Spanish Flu did in elections a century ago. But what makes this year's election truly unique is the widespread use of mail-in ballots. "Today, forces are at work to make people not participate in the election by questioning the integrity of elections and saying the system is broken," said Christina Bellantoni, professor at the USC Annenberg School for Communication and Journalism and director of the Annenberg Media Center. "With so few people undecided about the upcoming presidential election, influencing just a handful of people on the margins can sway an election." https://phys.org/news/2020-09-deepfakes-fake-news-array-aim.html

ago20

A reduction of trust in news circulating online. In addition to changing our perceptions, tarnishing the reputation of celebrities, targeting victims for blackmailing, and influencing voting behavior, deepfakes are now eliminating the trust in news circulating online. Since deepfake technology is improving faster than many believed, the “realness” of such fake content is becoming more and more convincing.

Many deepfake videos can not only manipulate facial expressions; using generative neural networks, they can now also perform a myriad of movements, including head rotation, eye gaze and blinking, and full 3D head positioning. Never one to miss an opportunity, Google has claimed that it is working on a system able to detect deepfake videos. For this, they are creating deepfakes themselves, as stated in a blog post on 24 September. They have created a large dataset of 363 real videos of consenting actors and an additional 3,068 falsified videos that will be used by researchers at the Technical University of Munich. Regardless, deepfakes have already made their mark and eroded trust in news circulating online. The most targeted sectors for deepfakes are entertainment (62.7 percent), fashion (21.7 percent), sport (4.3 percent), business (4.1 percent), and politics (4.0 percent). Currently the most targeted countries are the USA and UK (together accounting for about 61 percent of targets), South Korea, India, and Japan. Of course, these numbers will keep increasing over time. https://www.itproportal.com/features/how-deepfakes-make-us-question-everything-in-2020/


ago20

Deepfake videos - videos that were manipulated to replace a person with someone else's likeness - are effective in influencing people to think more negatively about that person. And even relatively bad deepfakes can be very convincing, according to a study by the University of Amsterdam (UvA), NOS reports. With a deepfake video, you can for example record a video of you saying sexist statements, and then impose a celebrity's likeness over you in the video so that it looks like the celebrity said those things. Or you can take a video of the celebrity, and manipulate it to make them say things they never said. For their study, the Amsterdam researchers created a deepfake video of former CDA leader Sybrand Buma and showed it to 278 people. The group that saw the deepfake video thought more negatively about Buma afterwards than the group that watched the original video of him. The attitudes towards the CDA as a whole remained almost the same in both situations. Only 8 of the 140 people who saw the deepfake video raised doubts about its authenticity, UvA researcher Tom Dobber said to NOS. "And this one was not even perfect, you could see the lips moving crazily every now and then. It is remarkable that people fell for it completely." https://nltimes.nl/2020/08/24/deepfakes-convincing-effective-influencing-people-amsterdam-researchers-found +https://www.miragenews.com/would-you-fall-for-a-fake-video-uva-research-suggests-you-might/

jul20 ???

Pro-Israel news agencies published editorials credited to nonexistent writers, using the technique known as “deepfake”, which replaces people's faces in images, in an operation described as a “new frontier of disinformation”. Details of the “hyper-realistic forgery” were revealed by the Reuters agency this week. The international agency's report finally unveiled the mystery surrounding the identity of the Zionist writer Oliver Taylor. https://www.monitordooriente.com/20200720-editoriais-de-agencias-sionistas-sao-assinados-por-deepfakes-uma-nova-fronteira-da-desinformacao/ Earlier this month, the news outlet The Daily Beast exposed 46 conservative news networks, including some affiliated with the Jewish community, that used nineteen nonexistent authors to spread “scoops” about the Middle East, as part of a massive propaganda campaign whose beginnings appear to date back to July 2019.


jul20
As history records, the first lunar landing was a total success and the crew returned to Earth safely, despite a new recording showing Nixon reading the contingency words prepared for him by speechwriter William Safire on July 18, 1969. The video, released by MIT's Center for Advanced Virtuality on Monday — the 51st anniversary of the Apollo 11 moon landing — is "fake news," purposely. "Media misinformation is a longstanding phenomenon, but, exacerbated by deepfake technologies and the ease of disseminating content online, it's become a crucial issue of our time," said D. Fox Harrell, professor of digital media and of artificial intelligence at MIT and director of the Center for Advanced Virtuality, part of MIT Open Learning, in a statement. http://www.collectspace.com/news/news-072020a-moon-disaster-speech-mit-deepfake.html


Jun20
Remote voting using video is best option, tech, cybersecurity experts tell PROC. 'It’s a lot easier to match a person’s face and voice, deepfakes notwithstanding, whereas when you are voting through an app, what the system is recording is not that you voted, but rather that somebody with your credentials voted,' says Aleksander Essex. https://www.hilltimes.com/2020/06/11/remote-voting-using-video-is-best-option-tech-cybersecurity-experts-tell-proc/252464

jun20

Last month, a political group in Belgium released a deepfake video of the Belgian prime minister giving a speech that linked the Covid-19 outbreak to environmental damage and called for drastic action on climate change. At least some viewers believed the speech was real. The mere possibility that a video might be a deepfake can already generate confusion and facilitate political deception, regardless of whether the technology was actually used. The most dramatic example of this comes from Gabon, a small country in central Africa. In late 2018, Gabon's president, Ali Bongo, had not been seen in public for months. There were rumors that he was no longer healthy enough for office, or even that he had died. In an attempt to calm these concerns and reassert Bongo's leadership of the country, his government announced that he would give a nationally televised speech on New Year's Day. https://forbes.com.br/colunas/2020/06/por-que-o-mundo-nao-esta-preparado-para-os-estragos-que-as-deepfakes-podem-causar/


Mai20 Not as serious as once assumed?
"Tim Hwang, director of the Harvard-MIT Ethics and Governance of AI Initiative, agreed that deepfakes haven't proven as dangerous as once feared, although for different reasons. Hwang argued that users of "active measures" (efforts to sow misinformation and influence public opinion) can be much more effective with cheaper, simpler and just as devious types of fakes — mis-captioning a photo or turning it into a meme, for example. https://www.npr.org/2020/05/07/851689645/why-fake-video-audio-may-not-be-as-powerful-in-spreading-disinformation-as-feare?t=1589108903368


Mar20 New Zealand SEVERAL CASES https://www.stuff.co.nz/technology/digital-living/120397261/deepfakes-new-zealand-experts-on-how-face-swap-could-turn-sinister

Mai20 Last month Sophie Wilmès, the prime minister of Belgium, appeared in an online video to tell her audience that the COVID-19 pandemic was linked to the “exploitation and destruction by humans of our natural environment.” Whether or not these two existential crises are connected, the fact is that Wilmès said no such thing. Produced by an organization of climate change activists, the video was actually a deepfake, or a form of fake media created using deep learning. Deepfakes are yet another way to spread misinformation – as if there wasn’t enough fake news about the pandemic already. https://journalism.design/les-deepfakes/extinction-rebellion-sempare-des-deepfakes/ https://viterbischool.usc.edu/news/2020/05/fooling-deepfake-detectors/
https://www.extinctionrebellion.be/en/tell-the-truth

Friday, March 20, 2020

Origin, definitions, concept, cheapfakes, evolution

nov23

“There’s a video of Gal Gadot having sex with her stepbrother on the internet.” With that sentence, written by the journalist Samantha Cole for the tech site Motherboard in December, 2017, a queasy new chapter in our cultural history opened. A programmer calling himself “deepfakes” told Cole that he’d used artificial intelligence to insert Gadot’s face into a pornographic video. And he’d made others: clips altered to feature Aubrey Plaza, Scarlett Johansson, Maisie Williams, and Taylor Swift.

This is the smirking milieu from which deepfakes emerged. The Gadot clip that the journalist Samantha Cole wrote about was posted to a Reddit forum, r/dopplebangher, dedicated to Photoshopping celebrities’ faces onto naked women’s bodies. This is still, Cole observes, what deepfake technology is overwhelmingly used for. Able to depict anything imaginable, people just want to see famous women having sex. A review of nearly fifteen thousand deepfake videos online revealed that ninety-six per cent were pornographic. These clips are made without the consent of the celebrities whose faces appear or the performers whose bodies do. Yet getting them removed is impossible, because, as Scarlett Johansson has explained, “the Internet is a vast wormhole of darkness that eats itself.”

https://www.newyorker.com/magazine/2023/11/20/a-history-of-fake-things-on-the-internet-walter-j-scheirer-book-review


jul23 (ABRIL210)

The Reddit post didn't come out of nowhere...

The origin story of deepfakes goes all the way back to the year 1997. It was the Video Rewrite program by Bregler et al. that first published a study about it as a result of a conference on computer graphics and interactive techniques. The experts explained how to modify existing video footage of a person speaking with a different audio track.

It wasn’t a new concept (photo manipulation already happened back in the 19th century), but the seemingly insignificant study on altering video content was the first of its kind that fully automated the process of facial re-animation. The help of machine learning algorithms was used to achieve this. It was a huge milestone and quite possibly the real starting point of deepfake video development around the globe.

However, it wasn’t until years later that the concept of deepfakes really started to catch on. As with many new technologies, mass adoption came quite a bit later than the invention. First came the Face2Face program in 2016, in which Thies et al. showcased how real-time face capture technology could be used to re-enact videos in a realistic way.

It wasn’t until July 2017 that more people started to be interested in the applications of deep learning to media. We were introduced to a highly realistic deepfake video featuring former US President Barack Obama. The study by Suwajanakorn et al. showed for the first time how audio could be lip-synced onto politicians in a frighteningly realistic way. A dangerous precedent that could potentially transform the course of the political future.

After that, things went very quickly. The adoption and introduction of the technology by the masses quite literally exploded, as more and more deepfake videos and similar applications of the machine learning technique started to be implemented around the globe. The Obama deepfake was a popular one, and a year later it was re-iterated in the now infamous BuzzFeed YouTube video titled “You Won’t Believe What Obama Says In This Video!”:

https://deepfakenow.com/history-deepfake-technology-how-deepfakes-started/


nov22

Synthetic video: don’t call them deepfakes [given the negative charge the word deepfake carries, it is not a neutral choice whether or not to call it that: if I call it a deepfake, can the signifier still be positive, or does distrust set in immediately?] This article addresses that confusion and the misunderstandings it creates.


Although there is no universally agreed-upon definition, a typical deepfake uses AI to replace a person in an existing video with another. The vast majority of deepfakes are used to switch pornographic actors with celebrity women, but they have attracted popular attention as tools of political disinformation. In March, a poor-quality deepfake of President Volodymyr Zelensky announcing Ukraine’s surrender to Russia surfaced on social media for a brief round of ridicule, before being removed. (...) Deepfakes – and perhaps, by extension, synthetic video – have a public image problem. Synthetic video companies are wary of association with abusive deepfakes, prominently laying out their ethical frameworks to make clear their distance. ... Riparbelli, meanwhile, seems more relaxed about the association. “We don’t call ourselves deepfakes, but it’s not like: ‘oh no we’re definitely not deepfakes.’ We’re definitely building technology that you could put in the deepfake family of technologies, I guess,” he says. “I wouldn’t say I spend a lot of time trying to escape the deepfake narrative. I welcome it; people find it interesting.” He believes that companies like Synthesia are already providing evidence that synthetic video is a general-purpose technology with many potential applications beyond abusive deepfakes: “The popular narrative is very focused on deepfakes, which makes a lot of sense. It certainly is a real threat. It’s causing real harm today. But I think it often becomes a red herring for a much more fundamental shift we’re going through, which is to switch from traditional software to AI ... and I think it misses the myriad of applications of what you could very well call deepfake technology that we’re all using every single day.”
https://eandt.theiet.org/content/articles/2022/11/synthetic-video-don-t-call-them-deepfakes/

jul22

Researchers Found A Way To Animate High Resolution Deepfakes From A Single Photo, And It's Unsettling

https://petapixel.com/2022/07/22/megaportraits-high-res-deepfakes-created-from-a-single-photo/

https://digg.com/video/high-resolution-deepfakes-from-a-single-photo-and-its-unsettling



Oct21
DOES THIS MAKE SENSE?
War of the Worlds – The Original 'Deep Fake'. Marking the anniversary of Orson Welles' realistic radio dramatization of a Martian invasion of Earth, the original 'deepfake' is revisited amid our present-day social media and political atmosphere with original audio and interviews with Orson Welles, his collaborator John Houseman, writer Howard Koch, and A. Brad Schwartz historian and author of BROADCAST HYSTERIA: Orson Welles's War of the Worlds and the Art of Fake News, among others.
https://finance.yahoo.com/news/interrupt-broadcast-host-bill-kurtis-165700861.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAE4xP9kO2tigeGtjBkv14JWX_w8I-KMzGpdUd4vlBy_9tTLrRrWzqYOUUsMWYjYGVfqSByoBV9J4aNOP2ZLxwQglNVW1kVlIgni-TQMHUgMi6DUKm0XSaZg4VM3xUnwFHCA2MrW2fB1wrWBrnfIn7kv0HfB47uL1CdUv5AU35JUg

Oct21
BROADENING THE CONCEPT

1)     A growing problem of 'deepfake geography': How AI falsifies satellite images LINK



Sep21
HOW IT WORKS

Deepfakes are often powered by an innovation in deep learning known as a “generative adversarial network,” or GAN. GANs deploy two competing neural networks in a kind of game that relentlessly drives the system to produce ever higher quality simulated media. For example, a GAN designed to produce fake photographs would include two integrated deep neural networks. The first network, called the “generator,” produces fabricated images. The second network, which is trained on a dataset consisting of real photographs, is called the “discriminator”; its job is to tell the real photographs from the generator’s fabrications, so each network improves by trying to outdo the other until the fakes become hard to distinguish from real images. This technique produces astonishingly impressive fabricated images. Search the internet for “GAN fake faces” and you’ll find numerous examples of high-resolution images that portray nonexistent individuals. https://www.marketwatch.com/story/deepfakes-a-dark-side-of-artificial-intelligence-will-make-fiction-nearly-indistinguishable-from-truth-11632834440
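As a purely illustrative sketch (my own toy example, not from the cited article), the adversarial game can be reduced to a few lines of numpy: a linear "generator" learns to mimic samples from a Gaussian centred at 4, while a logistic "discriminator" tries to tell real samples from fakes. Real GANs use deep networks and images, but the training loop has the same shape.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# "Real" data: samples from a Gaussian centred at 4
def sample_real(n):
    return rng.normal(4.0, 1.0, n)

a, b = 1.0, 0.0   # generator G(z) = a*z + b, starts out producing N(0, 1)
w, c = 0.0, 0.0   # discriminator D(x) = sigmoid(w*x + c)

lr, n = 0.02, 64
for step in range(5000):
    x_real = sample_real(n)
    z = rng.normal(0.0, 1.0, n)
    x_fake = a * z + b

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    grad_w = np.mean((d_real - 1.0) * x_real) + np.mean(d_fake * x_fake)
    grad_c = np.mean(d_real - 1.0) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # Generator step: try to fool the updated discriminator, D(fake) -> 1
    d_fake = sigmoid(w * (a * z + b) + c)
    g = (d_fake - 1.0) * w          # gradient of -log D(G(z)) w.r.t. G's output
    a -= lr * np.mean(g * z)
    b -= lr * np.mean(g)

# After training, the generator's samples should cluster near the real mean
fakes = a * rng.normal(0.0, 1.0, 5000) + b
print("mean of generated samples:", round(float(fakes.mean()), 2))
```

The only "game" here is the alternation of the two gradient steps; each player's improvement is the other's training signal.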


JU2
ORIGIN / HISTORY
The specific AI advances that made deepfakes possible occurred around 2012, in an AI technique called “deep learning.” Deep learning radically improved AI’s ability to perceive things such as images, audio or video (for accessible discussions see (Wright, 2019b)). Deep learning AI is now very good at perceiving images or sounds – and essentially running those programs back-to-front instead generates images or sounds. This is what makes convincing “deepfakes,” which emerged around 2018 in the form of fake pornography.
(on file) Cognitive defense of the Joint Force in a digitizing world

jul21
origin of the word
https://www.wsj.com/articles/deepfake-a-piece-of-thieves-slang-gets-a-digital-twist-11626983869

jun21
TEXT
Facebook has revealed a new project that, leveraging the company's AI technology, can imitate a person's handwriting from just a single word as a training sample. The system analyses the user's writing and can convert any other desired text into a similar script.

Dubbed TextStyleBrush, this new system uses AI to recognize each user's writing style from a single word. Once that is done, it can replicate any desired text in that script, producing genuinely surprising results.

Essentially, we can think of this technology as a "deepfake" for writing, which can be seen as either good or bad. The system is not limited to a single writing style; it can also reproduce fonts.

For example, if you have text in a particular font that you want to apply to other content, TextStyleBrush can recreate it, even if that font is digital rather than handwritten.

https://tugatech.com.pt/t39571-facebook-utiliza-ia-para-projeto-capaz-de-criar-deepfakes-de-qualquer-texto

May21
SEE
https://www.tvovermind.com/its-harry-potter-but-with-american-actors-deepfake-video/


May21

The Girlfriend Experience Season 3, Episode 3 recap: Deep Fake
https://showsnob.com/2021/05/09/the-girlfriend-experience-season-3-episode-3-recap-deep-fake/

Feb21

IS IT A DEEPFAKE OR NOT? Hour One’s promise to customers is that, after a relatively quick onboarding process, the company’s artificial intelligence can create a fully digital version of you, with the ability to say and do whatever you want it to. The company has partnered with YouTuber Taryn Southern to show off the tech’s capabilities. The biggest takeaway from this inside look is that Hour One’s clones are not deepfakes. A deepfake is created by manipulating an image to fabricate the likeness of a person, usually on top of existing video footage. Hour One’s clones, in comparison, require studio time to capture a person’s appearance and voice... and, therefore, consent. Southern says she stood in front of a green screen for about seven minutes, read a few scripts, and sang a song. This difference is noteworthy in that this capture process allows for a much fuller “cloning” process. Hour One can now feed just about any script into its program and create a video where it appears that Southern is actually reading it. There’s also an extra layer of consent involved — deepfakes are often made without the subject’s approval, but that’s not possible with Hour One’s technology. https://www.inputmag.com/tech/were-begging-you-to-not-turn-yourself-into-ai-powered-clone

VERY GOOD
https://news.yahoo.com/celebrity-deepfakes-shockingly-realistic-204210734.html

Feb21

PRE-DEEPFAKES New Hampshire faced a variation of this issue 16 years ago in a story I covered extensively. It involved a summer camp for girls, whose official photographer used Photoshop to put faces of teenage campers younger than 16 onto the bodies of adult women in pornography pictures. These “morphed” pictures, which he called “personal fantasies,” were discovered by accident when he didn’t remove them from CD-ROMs of official camp photos. He was arrested and convicted of possessing child pornography.

https://granitegeek.concordmonitor.com/2021/02/14/n-h-wrestled-with-deepfake-pornography-long-before-the-tech-existed/


Dec20

In 2018, Sam Cole, a reporter at Motherboard, discovered a new and disturbing corner of the internet. A Reddit user by the name of “deepfakes” was posting nonconsensual fake porn videos using an AI algorithm to swap celebrities’ faces into real porn. Cole sounded the alarm on the phenomenon, right as the technology was about to explode. A year later, deepfake porn had spread far beyond Reddit, with easily accessible apps that could “strip” clothes off any woman photographed.

https://www.technologyreview.com/2020/12/24/1015380/best-ai-deepfakes-of-2020/



on Cheapfakes in 2020

https://www.technologyreview.com/2020/12/22/1015442/cheapfakes-more-political-damage-2020-election-than-deepfakes/


Is it correct to use the designations deepfake video, deepfake audio, etc., given the various types of deepfakes that exist?

we can also use the term synthetic media

nov20

The New York Times this week did something dangerous for its reputation as the nation’s paper of record. Its staff played with a deepfake algorithm, and posted online hundreds of photorealistic images of non-existent people. For those who fear democracy being subverted by the media, the article will only confirm their conspiracy theories. The original biometric — a face — can be created in as much uniqueness and diversity as nature can, and with much less effort. https://www.biometricupdate.com/202011/deepfakes-the-times-wows-the-senate-punts-and-asia-worries + https://www.nytimes.com/interactive/2020/11/21/science/artificial-intelligence-fake-people-faces.html



nov20

A GOOD EXPLANATION

Essentially, artificial intelligence is a form of technology that makes computers behave in ways that could be considered distinctly human, such as being able to reason and adapt. One common kind of artificial intelligence is machine learning, which focuses on using algorithms that can improve their performance through exposure to more data. Deepfakes are created using a theory of machine learning called deep learning. A deep learning program uses many layers of algorithms to create structures called neural networks, inspired by the structure of the human brain. The neural networks aim to recognize underlying relationships in a set of data. https://www.mironline.ca/what-the-rise-of-deepfakes-means-for-the-future-of-internet-policies/
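The "layers of algorithms" described above can be made concrete with a minimal sketch (my own illustration, not from the cited piece): two stacked layers trained by backpropagation learn the XOR relationship, a simple "underlying relationship in a set of data" that no single linear layer can represent.

```python
import numpy as np

rng = np.random.default_rng(42)

# XOR: a relationship no single linear layer can capture
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Two stacked layers ("deep" in miniature): 2 inputs -> 8 hidden -> 1 output
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def forward(X):
    h = np.tanh(X @ W1 + b1)            # hidden layer
    return sigmoid(h @ W2 + b2), h      # output layer

lr = 0.3
losses = []
for _ in range(5000):
    out, h = forward(X)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backpropagation: apply the chain rule layer by layer, output to input
    d_out = (out - y) * out * (1.0 - out)       # through sigmoid + MSE
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)       # through tanh
    W2 -= lr * (h.T @ d_out); b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h);   b1 -= lr * d_h.sum(axis=0)

preds, _ = forward(X)
print("final loss:", round(losses[-1], 4))
```

Deepfake generators are built from the same ingredients, just with millions of parameters and image data instead of four toy examples.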


Oct20

A voice cannot be copyrighted. Midler v. Ford Motor Co. proclaimed that “A voice is as distinctive and personal as a face. The human voice is one of the most palpable ways identity is manifested.” The court held that not every instance of commercial usage of another’s voice is a violation of law; specifically, a person whose voice is recognizable and widely known receives protection under the law through their right of publicity as protection from invasion of privacy by appropriation. This protects public figures and celebrities from having their identities misappropriated and potentially misused for commercial gain. The right of publicity is the celebrity’s analog of the common person’s right of privacy; both are protected with different underlying motivations: one is to allow a celebrity alone to capitalize on their fame, and the other is to allow a private person to remain private. https://www.ipwatchdog.com/2020/10/14/voices-copyrighting-deepfakes/id=126232/

Oct20

After deepfakes, a new frontier of AI trickery: fake faces. Already, fake faces have been identified in bot campaigns in China and Russia, as well as in right-wing online media and supposedly legitimate businesses. Their proliferation has raised fears that the technology poses a more pervasive and pressing threat than deepfakes, as online platforms face a growing wave of disinformation ahead of the US election. Graphika and the Atlantic Council’s Digital Forensic Research Lab report on fake identities, showing telltale signs that Alfonzo Macias’ profile picture is a fake. “A year ago, it was a novelty,” tweeted Ben Nimmo, director of investigations of the social media intelligence group Graphika. “Now it seems like every operation we analyze tries this at least once.” https://www.usa-vision.com/after-deepfakes-a-new-frontier-of-ai-trickery-fake-faces/

A website called This Person Does Not Exist publishes a near-infinite number of the fake faces online, created in real time every time you refresh the page. “A year ago, this was a novelty,” says Ben Nimmo, director of investigations at social media intelligence group Graphika. “Now it feels like every operation we analyse tries this at least once.” With GAN technology available across the web, it’s impossible to be certain that the stranger you’re talking to on Facebook or Twitter has ever existed. https://www.dailystar.co.uk/news/latest-news/ai-deepfakes-creating-fake-humans-22840694

On Friday, the American network NBC published an investigation highlighting the many questions about the identity and sources behind the 64-page document. For example, the broadcaster found that the person presented as its author, a supposed Swiss analyst named Martin Aspen, did not exist: his identity was invented and his photo produced by software. https://br.noticias.yahoo.com/magnata-hong-kong-n%C3%A3o-tem-114645502.html


============================ CLASS TCM202/21 _________________________

ORIGIN

Farid points out, on the other hand, that digital image manipulation is nothing new; on the contrary, it has existed since the 1990s, emerging alongside the evolution of graphics software. What began as visual retouching of imperfections, or even direct changes to a speech, has nevertheless evolved into something far more dangerous, while people's relationship with images has remained unchanged. In the expert's view, individuals have always been aware that images can be manipulated and that, in passing from the world through a camera and then through software such as Photoshop, they are still, to a greater or lesser degree, a reproduction of reality. The problem, he says, began when artificial intelligence evolved to the point of creating complete forgeries, which have been put to far-from-positive uses. https://canaltech.com.br/inteligencia-artificial/sociedade-pode-estar-perdendo-a-guerra-contra-os-deep-fakes-alerta-professor-172775/


Pre-deepfakes 

(2014) 

https://www.youtube.com/watch?v=lc9t1jNmtWc

“How Dove Brought Audrey Hepburn Back to Life” (LINK)

HISTORY Sep20

New “Deepfake” Can Change “Entire Person” https://communalnews.com/new-deepfake-can-change-entire-person/

Sep20

Deepfakes Are Bad - Deepfakes Are Good. 
https://www.forbes.com/sites/glenngow/2020/09/11/deepfakes-are-baddeepfakes-are-good/#7cc365704ba0

Aug20

In the past, the audiovisual medium has also suffered the urge to manipulate reality. Hiding certain images, showing some and not others, or accompanying a scene with a voiceover of a particular speech has been common in both democracies and dictatorships. And before cinema or television became popular, the static image also played an important role; it too passed through the hands of editing and manipulation to hide and reshape reality to the liking of one side or another. Let’s look at three examples. https://www.explica.co/the-deepfakes-of-the-20th-century-how-stalin-franco-and-walt-disney-manipulated-reality-using-primitive-methods/

jul20

Another form of AI-generated media is making headlines, one that is harder to detect and yet much more likely to become a pervasive force on the internet: deepfake text. Last month brought the introduction of GPT-3, the next frontier of generative writing: an AI that can produce shockingly human-sounding (if at times surreal) sentences. As its output becomes ever more difficult to distinguish from text produced by humans, one can imagine a future in which the vast majority of the written content we see on the internet is produced by machines. If this were to happen, how would it change the way we react to the content that surrounds us? (...) Generated media, such as deepfaked video or GPT-3 output, is different. If used maliciously, there is no unaltered original, no raw material that could be produced as a basis for comparison or evidence for a fact-check ... But synthetic text—particularly of the kind that’s now being produced—presents a more challenging frontier. It will be easy to generate in high volume, and with fewer tells to enable detection. (...) Wall Street Journal analysis of some of these cases spotted hundreds of thousands of suspicious contributions, identified as such because they contained repeated, long sentences that were unlikely to have been composed spontaneously by different people. If these comments had been generated independently—by an AI, for instance—these manipulation campaigns would have been much harder to smoke out. In the future, deepfake videos and audiofakes may well be used to create distinct, sensational moments that commandeer a press cycle, or to distract from some other, more organic scandal. But undetectable textfakes—masked as regular chatter on Twitter, Facebook, Reddit, and the like—have the potential to be far more subtle, far more prevalent, and far more sinister. https://www.wired.com/story/ai-generated-text-is-the-scariest-deepfake-of-all/ + Forget deepfakes – we should be very worried about AI-generated text. 
GPT-3 has been trained using millions of pages of text drawn from the Internet and can produce highly credible human writing https://www.telegraph.co.uk/technology/2020/08/26/forget-deepfakes-ai-generated-text-should-worried/


jul20

REALLY??? As Deepfakes Get Better, The Onus Is On Us To Determine Trustworthiness https://www.mediapost.com/publications/article/354286/as-deepfakes-get-better-the-onus-is-on-us-to-dete.html



Jun20

It is one more episode in the open war between the US president and the social networks. This Friday, Facebook and Twitter were forced to remove from their platforms a video posted by Donald Trump that was considered misleading, according to CNN. The post was made on the eve of Juneteenth, the oldest known holiday commemorating the end of slavery in the US. https://www.dn.pt/mundo/abraco-ou-fuga-trump-manipulou-video-de-criancas-e-redes-sociais-apagaram-no-12334737.html


jun20
Deepfakes aren’t very good—nor are the tools to detect them https://arstechnica.com/information-technology/2020/06/deepfakes-arent-very-good-nor-are-the-tools-to-detect-them/?comments=1

May20
Eco-anxiety, deepfake and cancel culture: The new words added to the Macquarie Dictionary - and what they actually mean https://www.dailymail.co.uk/news/article-8367821/Strange-new-words-added-Macquarie-Dictionary-actually-mean.html


May20
???? in a short space of time, the 23-year-old Frenchman achieved two feats worthy of note: he sold himself through an initial coin offering (ICO) and is now up for sale as a deepfake model. https://visao.sapo.pt/exameinformatica/noticias-ei/insolitos/2020-05-20-alex-masmej-criptomoeda-deepfake/

May20
The Washington Post columnist and novelist David Ignatius, who broke the story on Lt. Gen. Michael Flynn’s phone conversation with former Russia Ambassador Sergey Kislyak, discussed his new spy thriller “The Paladin,” whose main character’s life is turned upside down by a ‘deep fake’ media campaign, which interestingly resembles circumstances surrounding what happened to Flynn. https://saraacarter.com/the-washington-post-writer-behind-flynn-leaks-warns-in-new-spy-thriller-about-deep-fakes/


Apr20 The first generation (Deepfakes 1.0) was largely used for entertainment purposes. Videos were modified or made from scratch in the pornography industry and to create spoofs of politicians and celebrities. The next generation (Deepfakes 2.0) is far more convincing and readily available. Deepfakes 2.0 are poised to have profound impacts. According to some technologists and lawyers who specialize in this area, deepfakes pose “an extraordinary threat to the sound functioning of government, foundations of commerce and social fabric.” https://www.justsecurity.org/69677/deepfakes-2-0-the-new-era-of-truth-decay/


Although the first deepfakes were created about five years ago, the term was not coined until 2017, in a Reddit community, and has gradually become popular since then. With advances especially in the field of artificial intelligence, it is only this year that, for the first time, videos are being disseminated at a significant scale whose images, at first glance, we may be unable to discern as authentic. LINK
JAN20: When a fake porn video purporting to depict Gal Gadot having sex with her stepbrother surfaced online in December 2017, the reaction was swift and immediate. Vice—the outlet that first reported on the video—was quick to highlight the way that face-swap technology could be used to manufacture a wholly new form of “revenge porn,” one in which victims could find themselves featured in explicit sexual media without ever taking off their clothes in front of a camera. LINK
Nov19: deepfake is added to the Collins dictionary LINK

Mar20 When we first saw Gal Gadot in a porn video in 2017, the popular reaction was astonishment followed by unease. It was not really her, but a montage of her face on the body of a porn actress, yet the result was highly believable. Deepfake technology had made its grand entrance in the most striking way possible. https://www.xataka.com/robotica-e-ia/alla-porno-espanoles-que-usan-deepfakes-para-satira-politica

What they are
A deepfake is altered video content that shows something that didn't actually happen. By definition, deepfakes are produced using deep learning, which is an AI-based technology. Of late, the term deepfake has been used to depict nearly any type of edited video online – from Nancy Pelosi’s slowed speech to a mash-up of Steve Buscemi and Jennifer Lawrence. Given the technical definition, however, the video of Nancy Pelosi does not actually classify as a deepfake but rather simply an altered video, sometimes referred to as “shallow fake.” Although technically different, shallow fakes can cause the same level of potential damage as deepfakes -- the number one risk: disinformation. (LINK)

A deepfake is a video, photo, or audio recording that seems real but has been manipulated with artificial intelligence technologies. https://www.gao.gov/products/gao-20-379sp

AUDIO only https://www.predictiveanalyticsworld.com/machinelearningtimes/with-questionable-copyright-claim-jay-z-orders-deepfake-audio-parodies-off-youtube/11421/ +
https://www.youtube.com/watch?v=zBUDyntqcUY&list=PURt-fquxnij9wDnFJnpPS2Q

jun20 
More than four decades after his death, Franco has spoken again. At least he has in the Spotify podcast XRey, which reviews the life of the monarch Juan Carlos I. Using artificial intelligence, the dictator's voice appears in it reciting passages that were never recorded during his lifetime. https://hipertextual.com/2020/06/spotify-franco-deepfake-voz-podcast

jun20
These amazing audio deepfakes showcase progress of A.I. speech synthesis. https://www.digitaltrends.com/news/best-audio-deepfakes-web/

Sep20
He says the deep fake audio clip was convincing enough to trick people closest to him, including his wife. https://www.wbur.org/hereandnow/2020/09/28/deep-fake-video-audio

Cheapfakes
Researchers say the Pelosi video is an example of a “cheapfake” video, one that has been altered but not with sophisticated AI like in a deepfake. Cheapfakes are much easier to create and are more prevalent than deepfakes, which have yet to really take off, said Samuel Woolley, director of propaganda research at the Center for Media Engagement at University of Texas. LINK
Feb20 Altered video, without AI, is not a deepfake. On Thursday, Bloomberg’s 2020 presidential campaign posted a video to Twitter that was edited to make it appear as though there was a long, embarrassing silence from Bloomberg’s Democratic opponents after he mentioned that he was the only candidate to have ever started a business during Wednesday night’s debate. Candidates like Sens. Bernie Sanders (I-VT), Elizabeth Warren (D-MA), and former South Bend, Indiana mayor Pete Buttigieg are shown searching for the words to respond to Bloomberg’s challenge. Twitter told The Verge that the video would likely be labeled as manipulated media under the platform’s new deepfakes policy that officially goes into place on March 5th. However, Facebook spokesperson Andy Stone confirmed on Twitter that the same video would not violate the platform’s deepfakes rules if it were posted to Facebook or Instagram. Facebook’s policy “does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words,” likely not affecting videos like Bloomberg’s. A video must also be created with an artificial intelligence or machine learning algorithm to trigger a removal. LINK
Mar20 Twitter uses deepfake alert for the first time, on a video shared by Trump. The video showed part of a speech by Democratic candidate Joe Biden but had been edited to mislead viewers about what he said https://www.telegraph.co.uk/technology/2020/03/09/twitter-uses-deepfake-alert-first-time-video-shared-us-president/ RISKS: https://theblast.com/119699/how-soon-until-manipulated-media-becomes-deepfake-video REFLECTIONS: https://news.umich.edu/cheap-fake-video-making-the-rounds-today-likely-wont-be-the-last/
More on CHEAPFAKES: A video shared by President Donald Trump that was edited to make it appear presidential candidate Joe Biden was endorsing his re-election during a campaign rally Saturday was deemed manipulated content by Twitter, a first for the social media company. But Facebook did nothing to flag the video as false content. Biden had stumbled over some words, and the video stopped short of including his correction. The video is the latest cheap fake to raise controversy in recent weeks. University of Michigan School of Information Professor Clifford Lampe explains cheap fakes and the difficulty of getting the platforms to police them. LINK
Mar20 even if deepfakes never proliferate in the public domain, the world has nevertheless been upended by 'cheapfakes' - a term that refers to more rudimentary image manipulation methods such as photoshopping, rebroadcasting, speeding and slowing video, and other relatively unsophisticated techniques. Cheapfakes have already been the main tool in the proliferation of disinformation and online fraud, which have had significant impacts on businesses and society. https://www.weforum.org/agenda/2020/03/how-to-make-better-decisions-in-the-deepfake-era/
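To underline how little technology a cheapfake needs, here is a hypothetical sketch of my own: treating a video as a numpy array of frames, "slowing" it (as in the Pelosi case) amounts to duplicating frames; no AI is involved at any point.

```python
import numpy as np

# A toy "video": 10 frames of 4x4 grayscale pixels
video = np.arange(10 * 4 * 4, dtype=float).reshape(10, 4, 4)

def slow_down(frames, factor=2):
    """Duplicate each frame `factor` times: same pixels, slower playback."""
    return np.repeat(frames, factor, axis=0)

slowed = slow_down(video, factor=2)
print(video.shape, "->", slowed.shape)
```

Speeding up is the symmetric trick (dropping frames), and selective cutting needs only array slicing; this is why cheapfakes are so much more prevalent than deepfakes.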
The concept of deepfakes: necessarily video, or not?
·         Jan20 Researchers at the International Institute of Information Technology in Hyderabad, India, have developed an artificial intelligence system capable of creating deepfake videos translated into different languages. This is not just a matter of "audio", i.e. the subject first speaking English and then Spanish: the software uses artificial intelligence to emulate the lip movements and so produce a more realistic result. LINK
25/11/2019 THE CONCEPT, and its evolution: What are deepfakes? Misinformation videos becoming more ‘powerful, precise’ https://globalnews.ca/news/5382150/deepfakes-shallow-fakes-misinformation/
One troubling issue with deepfakes is simply determining what is a deepfake and what is just edited video. In many cases deepfakes are built by utilizing the latest technology to edit or manipulate video. News outlets regularly edit interviews, press conferences and other events when crafting news stories, as a way to highlight certain elements and get juicy sound bites. LINK

Mar20 “deepfakes” — the nickname for computer-generated, photorealistic media created via cutting-edge artificial intelligence technology. LINK
Mar20 AUDIO audio deepfakes are on the rise as well. Although still somewhat detectable, the technology continues to improve. What’s scarier is that this technology looks more disruptive in the context of intellectual property and privacy law than you may think. If you think I am being alarmist, think again. According to Siwei Lyu, director of SUNY Albany’s machine learning lab, as quoted in Axios, “having a voice [that mimics] an individual and can speak any words we want it to speak” will be a reality in a couple of years. Realistic audio deepfakes are not something on the horizon — they are on the doorstep. In this political season it is easy to see how such deepfakes might be used. For example, it’s not hard to imagine deepfake audio of Bernie Sanders’ voice designed to erode his primary chances, or audio attributed to President Trump pieced together from his numerous interviews and appearances (like this) designed to disrupt and damage his 2020 presidential re-election campaign. LINK


Types:

Types of deepfake: faceswap, deepnude, and lipsync

The report identifies three main types of deepfake among the videos circulating on the web. The most common, predominant in pornographic videos, is the faceswap, which replaces someone's face with another, usually that of a famous person. The lipsync, on the other hand, appears more often in satirical videos and does not involve swapping the face; instead it interferes with the way the subject's mouth moves, to make it appear they are saying something different from the original video. https://www.techtudo.com.br/listas/2019/10/96percent-dos-videos-de-deepfake-tem-conteudo-pornografico-veja-sete-fatos.ghtml
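Real faceswap systems rely on learned encoder/decoder networks or GANs to synthesize the replacement face; as a purely illustrative sketch (the region, names, and blend weight here are all hypothetical), the final compositing step can be pictured as alpha-blending a source face region into a target frame:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two toy grayscale "photos" (8x8 pixel arrays, values 0-255)
source = rng.integers(0, 256, (8, 8)).astype(float)  # face to paste in
target = rng.integers(0, 256, (8, 8)).astype(float)  # frame to paste into

# Hypothetical face region: a mask over rows/cols 2-5
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1.0

def composite(target, source, mask, alpha=0.9):
    """Alpha-blend the source's masked region into the target frame;
    pixels outside the mask are left untouched."""
    return target * (1.0 - mask * alpha) + source * (mask * alpha)

swapped = composite(target, source, mask)
```

A lipsync deepfake differs only in what fills the masked region: instead of another person's face, a synthesized mouth area matching a new audio track.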






It no longer swaps only humans for humans, but humans for animals https://freenews.live/ai-turns-humans-into-animals-and-animals-into-humans/