Friday, March 20, 2020

Introduction (general questions)

Jul23
VERY GOOD TEXT
https://www.thomsonreuters.com/en-us/posts/technology/practice-innovations-deepfakes/


Oct22
SOUND

BBC Radio Hosts Use Deepfake AI to Swap Voices

https://voicebot.ai/2022/10/26/bbc-radio-hosts-use-deepfake-ai-to-swap-voices/


Aug22
SATIRE
https://www.theonion.com/this-elon-musk-deepfake-cannot-be-real-1849404887

Jul22
60 MINUTES

https://www.cbsnews.com/news/deepfake-artificial-intelligence-60-minutes-2022-07-31/

Apr22

Deepfakes and AI-generated faces are corroding trust in the web. The ease with which false digital identities, images and videos can be made is threatening commerce and society alike



Mar22

Deepfakes: The New Ticket to Immortality?
https://www.rollingstone.com/culture-council/articles/the-new-ticket-to-immortality-1324513/


Mar22

Deepfakes may use new technology, but they’re based on an old idea

https://www.popsci.com/technology/deepfakes-history-museum-exhibit/

Feb22
AUDIO

It’s not just your face that can be convincingly replicated by a deepfake. It’s also your voice — quite easily, as journalist Chloe Beltman found:

Given the complexities of speech synthesis, it’s quite a shock to find out just how easy it is to order one up. For a basic conversational build, all a customer has to do is record themselves saying a bunch of scripted lines for roughly an hour. And that’s about it.

“We extract 10 to 15 minutes of net recordings for a basic build,” says Speech Morphing founder and CEO Fathy Yassa.

The hundreds of phrases I record so that Speech Morphing can build my digital voice double seem very random: “Here the explosion of mirth drowned him out.” “That’s what Carnegie did.” “I’d like to be buried under Yankee Stadium with JFK.” And so on.

But they aren’t as random as they appear. Yassa says the company chooses utterances that will produce a wide enough variety of sounds across a range of emotions – such as apologetic, enthusiastic, angry and so on – to feed a neural network-based AI training system. It essentially teaches itself the specific patterns of a person’s speech.

https://mindmatters.ai/2022/02/deepfakes-can-replicate-human-voices-now-maybe-yours/  
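The utterance-selection step Yassa describes (choosing lines that cover a wide enough variety of sounds) is essentially a coverage problem. Below is a minimal greedy sketch of the idea, not Speech Morphing's actual method: letters serve as a crude stand-in for phonemes, and the candidate sentences are the ones quoted above.

```python
def pick_script(candidates, budget):
    """Greedily pick sentences that cover the widest variety of sounds.

    Letters stand in for phonemes here; a real TTS pipeline would score
    coverage over a phoneme inventory and a range of emotional styles.
    """
    punctuation = set(" .,'")
    covered, chosen = set(), []
    for _ in range(budget):
        # pick the sentence that adds the most not-yet-covered sounds
        best = max(candidates, key=lambda s: len(set(s.lower()) - covered - punctuation))
        gain = set(best.lower()) - covered - punctuation
        if not gain:
            break  # nothing new left to cover
        chosen.append(best)
        covered |= gain
    return chosen

script = pick_script([
    "Here the explosion of mirth drowned him out.",
    "That's what Carnegie did.",
    "I'd like to be buried under Yankee Stadium with JFK.",
], budget=2)
print(script[0])
```

Under this toy scoring, the seemingly random JFK line is selected first because it packs the most distinct sounds into one sentence, which is exactly why such phrases appear in recording scripts.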


 

Jan22
CHEAPFAKES
Much more harmful than deepfakes (which take time and, it must be said, talent to look convincing) are what we will call “toscofakes” (crude fakes). These consist of building a speech out of chopped-up fragments of someone's actual remarks. The proof is in the number of “toscofakes” we have had to debunk over our history. https://www.boatos.org/opiniao/em-tempos-de-polarizacao-videos-toscofakes-tem-muito-mais-alcance-do-que-os-deepfakes.html


Dec21
GENERAL
Besides the fun aspect, deepfakes can be beneficial in education, forensics, film production and artistic expression. However, like all other tech advancements, its cons have overshadowed the benefits today. Deepfakes are largely being used to make ‘viral’ fake videos, defaming celebrities and important people worldwide, eroding humankind’s belief in technology. With the amount of power vested in deepfake technology, hopefully, we will soon have guidelines concerning morals, rights and privacy when it comes to its usage. 
https://analyticsindiamag.com/most-shocking-deepfake-videos-of-2021/


Dec21
INTERVIEW WITH SARTORI
https://www.uol.com.br/splash/noticias/2021/12/02/fake-em-nois-tudo-o-que-voce-precisa-saber-sobre-deepfake.htm

Nov21
VERY GOOD
https://www.billboard.com/music/rock/foo-fighters-love-dies-young-video-jason-sudeikis-1234999776/

May21

‘NCIS: Los Angeles’: The Team Investigates a Deep Fake Video of a Dead Terrorist

https://www.cheatsheet.com/entertainment/ncis-los-angeles-team-investigates-deep-fake-video-of-dead-terrorist.html/

Apr21
QUOTES
https://www.nyoooz.com/features/technology/deepfakes-are-good-and-getting-better-which-is-bad-and-getting-worse.html/5724/ 

Apr21
Problem

Misinformation Makes Every Day April Fools’ Day

https://insidesources.com/misinformation-makes-every-day-april-fools-day/

Mar21
Problem

Some people have shared on social media that reanimating loved ones in MyHeritage videos made them cry with joy. I sympathize with that. D-ID says that, according to its analysis, only 5% of tweets about the service were negative. But when I tried to make a video from the photo of a friend who died a few years ago, it did not feel right. I knew my friend did not move like that, with the limited range of computer-generated mannerisms. “Do I really want these people and this technology messing with my memories?” says Johnson, an ethics expert. “If I want ghosts in my life, I want them to be real.” Deepfakes are also a way of lying to ourselves. https://www.publico.pt/2021/03/26/p3/noticia/qualquer-pessoa-iphone-deepfake-nao-prontos-vai-acontecer-1956134


Mar21
CONCEPT
Turek also highlighted a media manipulation technique called “deepfaking.” This is a process where the media manipulator trains two computer programs, called neural networks, using footage of a desired person, often a celebrity or politician, who they want to “fake” and footage of an actor whose facial expressions and movements will then be mapped onto the desired person’s face. In simple terms, deepfaking can make it look as if the desired person did or said something that they, in reality, did not. 
https://ndsmcobserver.com/2021/03/lecture-explores-deepfakes-media-manipulation/ 
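The two-network setup Turek describes is commonly implemented as a shared encoder with a separate decoder per identity: training teaches both faces a common feature space, and the swap decodes the actor's expression features with the target's decoder. The sketch below is purely illustrative (random weights, toy sizes, no training loop); real systems use deep convolutional networks on full video frames.

```python
import random

LATENT, PIXELS = 4, 16   # toy sizes; think "compressed face code" and "flattened frame"

def rand_matrix(rows, cols, rng):
    # stand-in for learned weights; training would fit these to footage
    return [[rng.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

rng = random.Random(0)
W_enc = rand_matrix(LATENT, PIXELS, rng)         # shared encoder, used for both faces
W_dec_target = rand_matrix(PIXELS, LATENT, rng)  # decoder trained only on the target's face

def face_swap(actor_frame):
    """Keep the actor's expression/pose, but render it as the target's face."""
    latent = matvec(W_enc, actor_frame)   # identity-agnostic expression features
    return matvec(W_dec_target, latent)   # reconstructed in the target's likeness

fake = face_swap([0.5] * PIXELS)
print(len(fake))  # 16: one output value per pixel of the swapped frame
```

The design point is that the encoder never learns *whose* face it is seeing, only how the face is posed; identity lives entirely in the per-person decoders, which is what makes the swap possible.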

Feb21
MEMES
Why it matters: For years, there's been growing concern that deepfakes (doctored pictures and videos) would become truth's greatest threat. Instead, memes have proven to be a more effective tool in spreading misinformation because they're easier to produce and harder to moderate using artificial intelligence.
"When we talk about deepfakes, there are already companies and technologies that can help you understand their origin," says Shane Creevy, head of editorial for Kinzen, a disinformation tracking firm. "But I'm not aware of any tech that really helps you understand the origin of memes." https://www.axios.com/memes-misinformation-coronavirus-56-2c3e88be-237e-49c1-ab9d-e5cf4d2283ff.html

Oct20

The biggest threat of deepfakes isn’t the deepfakes themselves. The mere idea of AI-synthesized media is already making people stop believing that real things are real. https://www.technologyreview.com/2019/10/10/132667/the-biggest-threat-of-deepfakes-isnt-the-deepfakes-themselves/


Dec20
Four Deepfake Scenarios That Mess With Our Minds
https://interactive.yr.media/double-take-four-deepfake-scenarios/

Nov20
There are now businesses that sell fake people. On the website Generated.Photos, you can buy a “unique, worry-free” fake person for $2.99, or 1,000 people for $1,000. If you just need a couple of fake people — for characters in a video game, or to make your company website appear more diverse — you can get their photos for free on ThisPersonDoesNotExist.com. Adjust their likeness as needed; make them old or young or the ethnicity of your choosing. If you want your fake person animated, a company called Rosebud.AI can do that and can even make them talk. LINK + The New York Times this week did something dangerous for its reputation as the nation’s paper of record. Its staff played with a deepfake algorithm, and posted online hundreds of photorealistic images of non-existent people. For those who fear democracy being subverted by the media, the article will only confirm their conspiracy theories. The original biometric — a face — can be created in as much uniqueness and diversity as nature can, and with much less effort. LINK


Nov20

Five risks of deepfake technology
https://www.techtudo.com.br/listas/2020/11/cinco-riscos-da-tecnologia-deepfake.ghtml


Nov20
As we frequently write, deepfake technology has been commoditized. LINK

Oct20
The owner of BabyZone, a YouTube gaming channel with over half a million subscribers that often deepfakes celebrities into video games, echoes a similar concern: “I think that deepfakes are a technology like all the other existing technologies. They can be used for good and for bad purposes.” https://finance.yahoo.com/news/deepfakes-dangerous-technology-creators-regulators-151522334.html  Are deepfakes a dangerous technology? Creators and regulators disagree


Oct20
Another problem of today's society compounds this one: the polarization generated by social networks makes people more inclined to believe a deepfake if it conveys a message that suits them. “People want to believe what they are seeing, whether because of their political principles or simply because of the way they see reality through images,” he says, noting that this also works in the opposite direction, with genuine images being dismissed as fake and scandals and accusations set aside while those responsible take advantage of the climate of uncertainty. https://canaltech.com.br/inteligencia-artificial/sociedade-pode-estar-perdendo-a-guerra-contra-os-deep-fakes-alerta-professor-172775/

Sep20
It is sometimes surprising how gullible well-intentioned folks are, and how we all can be manipulated by social media. That is the basic conclusion of researchers at the University of Amsterdam’s School of Communications Research and Institute of Information Law, who recently completed a study on “deepfakes.” The researchers point out that the use of deepfakes in microtargeting specific groups on social media and other platforms is concerning. They came to their conclusion by creating a deepfake video of a Dutch politician that was filled with false statements. The researchers had 287 people view the video and then they asked them if they thought the video was credible. According to the researchers, “[I]n a short period of time and with relatively limited technical resources, we were able to construct a deepfake video that was unquestioningly accepted as genuine by most of the participants in our experiment.” They pointed out that there are apps that will help users make deepfakes. https://www.natlawreview.com/article/privacy-tip-252-deepfakes-easy-to-make-and-can-be-used-to-microtarget-specific


Sep20
You note in the article that deepfakes are made with women for pornography and with men for politics. What do you make of this gender divide in their use?... LINK

 
Sep20
Beware the audio deepfake, which can be far more damaging politically than video. https://www.salon.com/2020/09/02/audio-deepfake-twenty-thousand-hertz-dallas-taylor/


GOOD OR BAD???
Aug20
positive?
A New Tool Aims to Protect Protesters From Facial Recognition With Deepfakes. The service was born out of GDPR compliance tech.


https://ninaschick.org/deepfakes

Aug20
We need e-guardian angels to fight deepfakes https://www.businesslive.co.za/ft/life/2020-08-13-we-need-e-guardian-angels-to-fight-deepfakes/

Aug20

Easy access to software for creating deepfakes worries researchers. The main use of this technology today is creating humorous videos, but it has already been used for fraud and fake news https://olhardigital.com.br/noticia/facil-acesso-a-softwares-para-criacao-de-deepfakes-preocupa-pesquisadores/104767


Aug20
Deepfakes Are Getting Better, Easier to Make, and Cheaper https://www.defenseone.com/technology/2020/08/deepfakes-are-getting-better-easier-make-and-cheaper/167536/

Jul20
What it is
https://today.rtl.lu/news/science-and-environment/a/1545337.html


Jun20
Deepfakes will tell true lies https://www.securitymagazine.com/articles/92674-deepfakes-will-tell-true-lies


Jun20
That viral video of George Floyd’s death shows why deepfakes are incredibly dangerous. There was never any serious question about the video’s authenticity, which added to its impact. Without getting into the complicated issues of the relationship between videos and total reality, I will simply point out that the Floyd video created a hypersensitized public ready to be outraged by anything similar in nature, whether real or otherwise. There are actors and institutions out there which would like nothing better than to sow discord in a city, a state, or a nation. The 2016 presidential elections provided abundant evidence that Russia engaged in a well-resourced attempt to influence the election in ways that were disguised to appear as simply concerned U.S. citizens expressing their opinions, or spreading rumors. https://mercatornet.com/that-viral-video-of-george-floyds-death-shows-why-deepfakes-are-incredibly-dangerous/63626/

Jun20
Two-thirds of participants believed that one day it would be impossible to discern a real video from a fake one
Forty-two percent of people believed it is very or extremely likely that deepfakes will be used to mislead voters in 2020 https://www.twingate.com/research/proof-threshold-exploring-how-americans-perceive-deepfakes


Jun20

“In January 2019, deepfakes were buggy and flickery,” said Hany Farid, a UC Berkeley professor and deepfake expert. “Nine months later, I’ve never seen anything like how fast they’re going. This is the tip of the iceberg.” Today we are at an inflection point. In the coming months and years, deepfakes threaten to grow from an internet oddity into a widely destructive political and social force. Society needs to act now to prepare. (...) “In the old days, if you wanted to threaten the United States, you needed 10 aircraft carriers, nuclear weapons and long-range missiles,” Senator Marco Rubio said recently. “Today all you need is the ability to produce a very realistic fake video that could undermine our elections, that could throw our country into tremendous internal crisis and weaken us deeply.”

 https://forbes.com.br/colunas/2020/06/por-que-o-mundo-nao-esta-preparado-para-os-estragos-que-as-deepfakes-podem-causar/


May20
“In January 2019, deep fakes were buggy and flickery,” said Hany Farid, a UC Berkeley professor and deepfake expert. “Nine months later, I’ve never seen anything like how fast they’re going. This is the tip of the iceberg.” Technologists agree. In the words of Hany Farid, one of the world's leading experts on deepfakes: “If we can't believe the videos, the audios, the image, the information that is gleaned from around the world, that is a serious national security risk.” https://www.forbes.com/sites/robtoews/2020/05/25/deepfakes-are-going-to-wreak-havoc-on-society-we-are-not-prepared/#1d9ec89f7494


May20
For deepfake, the commission proposes “vidéotox”, another portmanteau, this time of vidéo and intox (hoax). Deepfakes are videos in which one face is replaced by another through computer trickery, giving the illusion that a different person is on screen. Deepfakes are booming thanks to progress in machine learning, a branch of AI. https://www.numerama.com/tech/626271-ne-dites-plus-spoil-ou-deepfake-mais-divulgachis-et-videotox.html

May20

“Deepfakes” featuring Bolsonaro, Regina and pop divas yield success, money and threats https://www1.folha.uol.com.br/ilustrada/2020/05/deepfakes-com-bolsonaro-moro-e-divas-pop-rendem-sucesso-dinheiro-e-ameacas.shtml

GENERAL:
Walter Benjamin, the German cultural critic who lived in the early 1900s, wrote about how technological advancements of his day transformed art and culture. In his landmark 1935 essay 'The Work of Art in the Age of Mechanical Reproduction', Benjamin argued that art is the product of a conversation between technology and society. The influence of technology on art is a complication, he wrote, and society has an essential role to play when accepting a new art form, and should rely on scholars and curators to offer context. Without trust in certain institutions, this process falls on its face https://www.thenational.ae/arts-culture/art/can-deepfakes-be-used-for-good-1.987522


Mar20 Today, the world captures over 1.2 trillion digital images and videos annually - a figure that increases by about 10% each year. Around 85% of those images are captured using a smartphone, a device carried by over 2.7 billion people around the world. https://www.weforum.org/agenda/2020/03/how-to-make-better-decisions-in-the-deepfake-era/


Deepfakes pose an enormous risk to reputations in politics and entertainment, as well as viewers’ ability to discern what is real. Intellectual property law may provide recourse, but how would deepfakes be assessed in claims under existing invasion of privacy, defamation, or right of publicity law? Is new legislation warranted to tackle this complex issue head on? How can we promote awareness of this phenomenon and increase the public’s ability to discern between a credible source and a hoax? Join us for a dynamic discussion! https://www.msk.com/newsroom-events-1263
Mar20: Deepfakes are so new that the word “deepfake” is not yet officially listed in Merriam-Webster’s dictionary, so what is this fresh technology that's taking over the internet? https://www.forbes.com/sites/forbestechcouncil/2020/03/04/whos-responsible-for-combatting-deepfakes-in-the-2020-election/#6b72ba4e1c05

Feb20 “If fake news is scary, deepfakes are terrifying. Not only are we rapidly approaching the point where anyone can create convincing video of anyone doing anything, we are rapidly approaching the point where everyone has plausible deniability of everything they’ve ever done -- even if there’s video evidence. All you have to do is claim it’s a deepfake, and BAM! reasonable doubt.” https://www.mediapost.com/publications/article/347109/no-its-not-a-deepfake-yes-the-language-matters.html
Feb20 (Deepfakes are not the biggest threat) But the majority of visual misinformation that people are exposed to involves much simpler forms of deception. One common technique involves recycling legitimate old photographs and videos and presenting them as evidence of recent events. https://www.niemanlab.org/2020/02/who-needs-deepfakes-simple-out-of-context-photos-can-be-a-powerfully-low-tech-form-of-misinformation/
Feb20 Twitter: We'll kill deepfakes but only if they're harmful (sets a criterion) (LINK)
Feb20 IMPACTS: there is concern about potential growth in the use of deepfakes for other purposes, particularly disinformation. Deepfakes could be used to influence elections or incite civil unrest, or as a weapon of psychological warfare. They could also lead to disregard of legitimate evidence of wrongdoing and, more generally, undermine public trust in audiovisual content https://www.gao.gov/products/gao-20-379sp

Jan20 forecast: “Facebook’s problems moderating deepfakes will only get worse in 2020. New tools will make deepfakes more accessible and less obviously harmful” LINK
Deepfakes are videos manipulated with the aim of deceiving their recipients in order to obtain results (which ones?)
Deepfakes are also an exercise in creativity [how do you combat them when they are an exercise in creativity?]. How will Facebook counter this?
??? Crackdown on 'deepfakes' indicates danger to credible media dissemination            
Deepfakes: the easier they are to make, the more dangerous they become
Deepfakes: Can Technology Stop Something It Has Created?

How can organisations respond to the threat of deepfake?

ONLINE MANIPULATION: INFORMED DIGITAL CITIZENS ARE THE BEST DEFENSE AGAINST DEEPFAKES

TO DO: download the document: https://deeptracelabs.com/resources/
Impossible to combat? “Speaking to Yahoo Finance, Truepic CEO Jeff McGregor said the rapid advancement of A.I., and the scale at which visual deception is being disseminated, poses major challenges to researchers developing tools for detection. Instead of trying to detect what’s false, McGregor stated companies should focus on establishing what’s real.” LINK

Deepfakes: A threat to democracy or just a bit of fun?




"This technology is so realistic it's actually making people think twice about whether seeing is actually believing," said Sherin Mathews, senior data scientist for McAfee. https://www.govtech.com/products/Deepfakes-The-Next-Big-Threat-to-American-Democracy.html

"I don't think the deepfake technology is actually the catalyst of disinformation; it's social media," Li said, explaining that these online networks are where fictions are spread. https://www.govtech.com/products/Deepfakes-The-Next-Big-Threat-to-American-Democracy.html

Deepfakes and other artificial intelligence (AI)-based techniques for the low-cost creation of fake videos represent a new era of digital threats.

30/10/2019: Deepfakes use artificial intelligence to produce lifelike video and audio of real people doing unreal things. Today, three technological shifts have taken media manipulation to new levels: the development of large image databases, the computing power of graphics processing units (GPUs), and neural networks, an AI technique. The number of deepfake videos published online has doubled in the past nine months to almost 15,000, according to DeepTrace, a Netherlands-based cyber security group. (LINK)


Reuters + FB, Dec19: “Into this landscape comes a new threat: that of so-called ‘deepfakes’ – a form of synthetically-generated media. Interest around deepfakes has grown dramatically this year. Google Trends data shows that searches for the term peaked in June, around the time a deepfake video was released that featured the Facebook chief executive Mark Zuckerberg. As we close this year, the number of searches for ‘deepfake’ is five times higher than at the end of 2018.” https://www.reuters.com/article/rpb-hazeldeepfakesblog/introducing-the-reuters-guide-to-manipulated-media-in-association-with-the-facebook-journalism-project-idUSKBN1YY14C

Why they work:

After the U.K. Information Commissioner's Office launched an investigation into the use of facial-recognition technology at King’s Cross and the exposure of 1 million U.K. citizens’ biometric information, professionals have raised concerns over the potential misuse of the data, Bloomberg reports. ABI Research Director Michela Menting said biometric data could be used to create “deepfakes.” “The key ingredient in truly credible deepfakes is having a lot of data on the subject, and notably video of a person in any number of different facial expressions,” Menting said https://iapp.org/news/a/misused-biometric-data-could-be-used-to-create-deepfakes/


29/10/2019: A new BBC series, The Capture, brings to life many of the fears about so-called “deepfakes”: its protagonist is accused of kidnapping a woman based on a video doctored by the British secret services. It may seem fantastical, but such technology already exists today (LINK) (LINK)
