Friday, March 20, 2020

Origin, definitions, concept, cheapfakes, evolution

nov23

“There’s a video of Gal Gadot having sex with her stepbrother on the internet.” With that sentence, written by the journalist Samantha Cole for the tech site Motherboard in December, 2017, a queasy new chapter in our cultural history opened. A programmer calling himself “deepfakes” told Cole that he’d used artificial intelligence to insert Gadot’s face into a pornographic video. And he’d made others: clips altered to feature Aubrey Plaza, Scarlett Johansson, Maisie Williams, and Taylor Swift.

This is the smirking milieu from which deepfakes emerged. The Gadot clip that Cole wrote about was posted to a Reddit forum, r/doppelbangher, dedicated to Photoshopping celebrities’ faces onto naked women’s bodies. This is still, Cole observes, what deepfake technology is overwhelmingly used for: although it can depict anything imaginable, people just want to see famous women having sex. A review of nearly fifteen thousand deepfake videos online revealed that ninety-six per cent were pornographic. These clips are made without the consent of the celebrities whose faces appear or the performers whose bodies do. Yet getting them removed is impossible, because, as Scarlett Johansson has explained, “the Internet is a vast wormhole of darkness that eats itself.”

https://www.newyorker.com/magazine/2023/11/20/a-history-of-fake-things-on-the-internet-walter-j-scheirer-book-review


jul23 (ABRIL210)

The sentence from Reddit didn’t come out of nowhere...

The origin story of deepfakes goes all the way back to 1997, when Bregler et al. presented the Video Rewrite program at a conference on computer graphics and interactive techniques. The researchers explained how to modify existing video footage of a person speaking so that it matched a different audio track.

It wasn’t a new concept (photo manipulation was already happening back in the 19th century), but this seemingly insignificant study on altering video content was the first of its kind to fully automate facial re-animation, with the help of machine learning algorithms. It was a huge milestone and quite possibly the real starting point of deepfake video development around the globe.

However, it wasn’t until years later that the concept of deepfakes really started to catch on. As with many new technologies, mass adoption came quite a bit later than the invention. First came the Face2Face program in 2016, in which Thies et al. showcased how real-time face-capture technology could be used to re-enact videos in a realistic way.

It wasn’t until July 2017 that more people started to take an interest in the applications of deep learning to media. We were introduced to a highly realistic deepfake video featuring former US President Barack Obama. The study by Suwajanakorn et al. showed for the first time how audio could be lip-synced onto a politician in a frighteningly realistic way, a dangerous precedent that could potentially alter the course of political events.

After that, things moved very quickly. Adoption of the technology by the masses exploded, as more and more deepfake videos and similar applications of the machine-learning technique started to appear around the globe. The Obama deepfake was a popular one, and a year later it was reiterated in the now infamous BuzzFeed YouTube video titled “You Won’t Believe What Obama Says In This Video!”:

https://deepfakenow.com/history-deepfake-technology-how-deepfakes-started/


nov22

Synthetic video: don’t call them deepfakes [given the negative connotations the word deepfake carries, it is not a neutral choice whether or not to call it that. If I call it a deepfake, can I still achieve a positive signifier, or will distrust set in immediately?] This article addresses that confusion and the misunderstandings it has created.


Although there is no universally agreed-upon definition, a typical deepfake uses AI to replace a person in an existing video with another. The vast majority of deepfakes are used to switch pornographic actors with celebrity women, but they have attracted popular attention as tools of political disinformation. In March, a poor-quality deepfake of President Volodymyr Zelensky announcing Ukraine’s surrender to Russia surfaced on social media for a brief round of ridicule, before being removed. (...) Deepfakes – and perhaps, by extension, synthetic video – have a public image problem. Synthetic video companies are wary of association with abusive deepfakes, prominently laying out their ethical frameworks to make clear their distance. ... Riparbelli, meanwhile, seems more relaxed about the association. “We don’t call ourselves deepfakes, but it’s not like: ‘oh no we’re definitely not deepfakes.’ We’re definitely building technology that you could put in the deepfake family of technologies, I guess,” he says. “I wouldn’t say I spend a lot of time trying to escape the deepfake narrative. I welcome it; people find it interesting.” He believes that companies like Synthesia are already providing evidence that synthetic video is a general-purpose technology with many potential applications beyond abusive deepfakes: “The popular narrative is very focused on deepfakes, which makes a lot of sense. It certainly is a real threat. It’s causing real harm today. But I think it often becomes a red herring for a much more fundamental shift we’re going through, which is to switch from traditional software to AI ... and I think it misses the myriad of applications of what you could very well call deepfake technology that we’re all using every single day.”
https://eandt.theiet.org/content/articles/2022/11/synthetic-video-don-t-call-them-deepfakes/

jul22

Researchers Found A Way To Animate High Resolution Deepfakes From A Single Photo, And It's Unsettling

https://petapixel.com/2022/07/22/megaportraits-high-res-deepfakes-created-from-a-single-photo/

https://digg.com/video/high-resolution-deepfakes-from-a-single-photo-and-its-unsettling



out21
DOES THIS MAKE SENSE?
War of the Worlds – The Original 'Deep Fake'. Marking the anniversary of Orson Welles' realistic radio dramatization of a Martian invasion of Earth, the original 'deepfake' is revisited amid our present-day social media and political atmosphere, with original audio and interviews with Orson Welles, his collaborator John Houseman, writer Howard Koch, and A. Brad Schwartz, historian and author of BROADCAST HYSTERIA: Orson Welles's War of the Worlds and the Art of Fake News, among others.
https://finance.yahoo.com/news/interrupt-broadcast-host-bill-kurtis-165700861.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAE4xP9kO2tigeGtjBkv14JWX_w8I-KMzGpdUd4vlBy_9tTLrRrWzqYOUUsMWYjYGVfqSByoBV9J4aNOP2ZLxwQglNVW1kVlIgni-TQMHUgMi6DUKm0XSaZg4VM3xUnwFHCA2MrW2fB1wrWBrnfIn7kv0HfB47uL1CdUv5AU35JUg

out21
BROADEN THE CONCEPT

1)     A growing problem of 'deepfake geography': How AI falsifies satellite images LINK



set21
HOW IT WORKS

Deepfakes are often powered by an innovation in deep learning known as a “generative adversarial network,” or GAN. GANs deploy two competing neural networks in a kind of game that relentlessly drives the system to produce ever higher quality simulated media. For example, a GAN designed to produce fake photographs would include two integrated deep neural networks. The first network, called the “generator,” produces fabricated images. The second network, which is trained on a dataset consisting of real photographs, is called the “discriminator”: it tries to tell the generator’s fabrications apart from real photographs, and its feedback is used to push the generator toward ever more convincing output. This technique produces astonishingly impressive fabricated images. Search the internet for “GAN fake faces” and you’ll find numerous examples of high-resolution images that portray nonexistent individuals. https://www.marketwatch.com/story/deepfakes-a-dark-side-of-artificial-intelligence-will-make-fiction-nearly-indistinguishable-from-truth-11632834440
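The quote above stays at the level of intuition. As a rough, self-contained sketch of the two-network game it describes (not taken from the article; written in PyTorch on a toy one-dimensional distribution rather than photographs, and every name and size below is made up for illustration):

import torch
import torch.nn as nn

torch.manual_seed(0)

NOISE_DIM = 8                     # size of the random noise fed to the generator
REAL_MEAN, REAL_STD = 4.0, 1.25   # the "real data" distribution to be imitated

# Generator: random noise -> one fabricated sample
G = nn.Sequential(nn.Linear(NOISE_DIM, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: one sample -> logit for "this sample is real"
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

for step in range(5000):
    # Discriminator step: label real samples 1 and generated samples 0.
    real = torch.randn(64, 1) * REAL_STD + REAL_MEAN
    fake = G(torch.randn(64, NOISE_DIM)).detach()   # no generator update here
    d_loss = loss_fn(D(real), ones) + loss_fn(D(fake), zeros)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator step: try to make the discriminator call its output "real".
    fake = G(torch.randn(64, NOISE_DIM))
    g_loss = loss_fn(D(fake), ones)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()

# If the game worked, generated samples roughly match the real mean and spread.
with torch.no_grad():
    samples = G(torch.randn(1000, NOISE_DIM))
print("generated mean %.2f, std %.2f" % (samples.mean(), samples.std()))

In a face-generating GAN the same loop runs with large convolutional networks and millions of real photos instead of this toy Gaussian, which is what produces the “GAN fake faces” results the article mentions.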


JU2
ORIGIN / HISTORY
The specific AI advances that made deepfakes possible occurred around 2012, in an AI technique called “deep learning.” Deep learning radically improved AI’s ability to perceive things such as images, audio or video (for accessible discussions see (Wright, 2019b)). Deep learning AI is now very good at perceiving images or sounds – and essentially turning those computer programs back-to-front instead generates images or sounds. This makes convincing “deepfakes”, which emerged around 2018 to make fake pornography.
(in the archive) Cognitive defense of the Joint Force in a digitizing world

jul21
origin of the word
https://www.wsj.com/articles/deepfake-a-piece-of-thieves-slang-gets-a-digital-twist-11626983869

jun21
TEXT
Facebook has unveiled a new project that, using the company’s AI technology, can imitate a person’s handwriting with just one word as a training sample. The system analyses the user’s writing and can convert any other desired text into a similar script.

Called TextStyleBrush, the new system uses AI to recognise each user’s writing style from a single example word. Once that is done, it can replicate any desired text in that script, producing genuinely surprising results.

Essentially, we can think of this technology as a “deepfake” for writing – which can be seen as either a good or a bad thing. The system is not limited to a single writing style; it can also reproduce fonts.

For example, if you have text in a particular font that you want to apply to other content, TextStyleBrush can recreate it – even if the font is digital rather than handwritten.

https://tugatech.com.pt/t39571-facebook-utiliza-ia-para-projeto-capaz-de-criar-deepfakes-de-qualquer-texto

Maio21
SEE
https://www.tvovermind.com/its-harry-potter-but-with-american-actors-deepfake-video/


maio21

The Girlfriend Experience Season 3, Episode 3 recap: Deep Fake
https://showsnob.com/2021/05/09/the-girlfriend-experience-season-3-episode-3-recap-deep-fake/

Fev21

IS IT A DEEPFAKE OR NOT? Hour One’s promise to customers is that, after a relatively quick onboarding process, the company’s artificial intelligence can create a fully digital version of you, with the ability to say and do whatever you want it to. The company has partnered with YouTuber Taryn Southern to show off the tech’s capabilities. The biggest takeaway from this inside look is that Hour One’s clones are not deepfakes. A deepfake is created by manipulating an image to fabricate the likeness of a person, usually on top of existing video footage. Hour One’s clones, in comparison, require studio time to capture a person’s appearance and voice... and, therefore, consent. Southern says she stood in front of a green screen for about seven minutes, read a few scripts, and sang a song. This difference is noteworthy in that this capture process allows for a much fuller “cloning” process. Hour One can now feed just about any script into its program and create a video where it appears that Southern is actually reading it. There’s also an extra layer of consent involved — deepfakes are often made without the subject’s approval, but that’s not possible with Hour One’s technology. https://www.inputmag.com/tech/were-begging-you-to-not-turn-yourself-into-ai-powered-clone

VERY GOOD
https://news.yahoo.com/celebrity-deepfakes-shockingly-realistic-204210734.html

fev21

PRE-DEEPFAKES New Hampshire faced a variation of this issue 16 years ago in a story I covered extensively. It involved a summer camp for girls, whose official photographer used Photoshop to put the faces of teenage campers younger than 16 onto the bodies of adult women in pornographic pictures. These “morphed” pictures, which he called “personal fantasies,” were discovered by accident when he didn’t remove them from CD-ROMs of official camp photos. He was arrested and convicted of possessing child pornography.

https://granitegeek.concordmonitor.com/2021/02/14/n-h-wrestled-with-deepfake-pornography-long-before-the-tech-existed/


dez20

In 2018, Sam Cole, a reporter at Motherboard, discovered a new and disturbing corner of the internet. A Reddit user by the name of “deepfakes” was posting nonconsensual fake porn videos using an AI algorithm to swap celebrities’ faces into real porn. Cole sounded the alarm on the phenomenon, right as the technology was about to explode. A year later, deepfake porn had spread far beyond Reddit, with easily accessible apps that could “strip” clothes off any woman photographed.

https://www.technologyreview.com/2020/12/24/1015380/best-ai-deepfakes-of-2020/



on cheapfakes in 2020

https://www.technologyreview.com/2020/12/22/1015442/cheapfakes-more-political-damage-2020-election-than-deepfakes/


Is it correct to use the designations deepfake video, deepfake audio, etc., given the various types of deepfakes that exist?

we can also use the term synthetic media

nov20

The New York Times this week did something dangerous for its reputation as the nation’s paper of record. Its staff played with a deepfake algorithm, and posted online hundreds of photorealistic images of non-existent people. For those who fear democracy being subverted by the media, the article will only confirm their conspiracy theories. The original biometric — a face — can be created in as much uniqueness and diversity as nature can, and with much less effort. https://www.biometricupdate.com/202011/deepfakes-the-times-wows-the-senate-punts-and-asia-worries + https://www.nytimes.com/interactive/2020/11/21/science/artificial-intelligence-fake-people-faces.html



nov20

A GOOD EXPLANATION

Essentially, artificial intelligence is a form of technology that makes computers behave in ways that could be considered distinctly human, such as being able to reason and adapt. One common kind of artificial intelligence is machine learning, which focuses on using algorithms that can improve their performance through exposure to more data. Deepfakes are created using a theory of machine learning called deep learning. A deep learning program uses many layers of algorithms to create structures called neural networks, inspired by the structure of the human brain. The neural networks aim to recognize underlying relationships in a set of data. https://www.mironline.ca/what-the-rise-of-deepfakes-means-for-the-future-of-internet-policies/
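Since this is the clearest how-it-works passage in these notes, a minimal illustrative sketch of what “many layers of algorithms” improving “through exposure to more data” means in practice may help. It is not from the cited article; it just shows a tiny multi-layer network in PyTorch learning an underlying relationship (XOR) from four examples:

import torch
import torch.nn as nn

torch.manual_seed(0)

# Four example inputs and the underlying relationship to be discovered (XOR).
X = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = torch.tensor([[0.], [1.], [1.], [0.]])

# "Deep" learning: several stacked layers of simple units, loosely inspired
# by the layered structure of the brain.
model = nn.Sequential(
    nn.Linear(2, 8), nn.ReLU(),    # first layer of learned features
    nn.Linear(8, 8), nn.ReLU(),    # second layer builds on the first
    nn.Linear(8, 1), nn.Sigmoid()  # output: probability that the answer is 1
)

optimizer = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.BCELoss()

# "Improving through exposure to data": repeatedly compare predictions with
# the known answers and adjust the weights to shrink the error.
for step in range(2000):
    loss = loss_fn(model(X), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(model(X).detach().round().flatten())  # expected: tensor([0., 1., 1., 0.])

Deepfake systems apply the same principle, only with vastly larger networks and with images, audio or video in place of these four toy data points.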


out20

A voice cannot be copyrighted. Midler v. Ford Motor Co. proclaimed that “A voice is as distinctive and personal as a face. The human voice is one of the most palpable ways identity is manifested.” The court held that not every instance of commercial usage of another’s voice is a violation of law; specifically, a person whose voice is recognizable and widely known receives protection under the law through their right of publicity, as protection from invasion of privacy by appropriation. This protects public figures and celebrities from having their identities misappropriated and potentially misused for commercial gain. The right of publicity is the celebrity’s analog of the common person’s right of privacy; both are protected with different underlying motivations — one is to allow a celebrity alone to capitalize on their fame, and the other is to allow a private person to remain private. https://www.ipwatchdog.com/2020/10/14/voices-copyrighting-deepfakes/id=126232/

out20

After deepfakes, a new frontier of AI trickery: fake faces. Already, fake faces have been identified in bot campaigns in China and Russia, as well as in right-wing online media and supposedly legitimate businesses. Their proliferation has raised fears that the technology poses a more pervasive and pressing threat than deepfakes, as online platforms face a growing wave of disinformation ahead of the US election. Graphika and the Atlantic Council’s Digital Forensic Research Lab report on fake identities, showing telltale signs that Alfonzo Macias’ profile picture is a fake. “A year ago, it was a novelty,” tweeted Ben Nimmo, director of investigations at the social media intelligence group Graphika. “Now it seems like every operation we analyze tries this at least once.” https://www.usa-vision.com/after-deepfakes-a-new-frontier-of-ai-trickery-fake-faces/

A website called This Person Does Not Exist publishes a near-infinite number of the fake faces online, created in real time every time you refresh the page. “A year ago, this was a novelty,” says Ben Nimmo, director of investigations at social media intelligence group Graphika. “Now it feels like every operation we analyse tries this at least once.” With GAN technology available across the web, it’s impossible to be certain that the stranger you’re talking to on Facebook or Twitter has ever existed. https://www.dailystar.co.uk/news/latest-news/ai-deepfakes-creating-fake-humans-22840694

On Friday, the American network NBC published an investigation highlighting the many questions about the identity and sources behind the 64-page document. For example, the broadcaster found that the person presented as its author, a supposed Swiss analyst named Martin Aspen, did not exist: his identity was invented and his photo generated by software. https://br.noticias.yahoo.com/magnata-hong-kong-n%C3%A3o-tem-114645502.html


============================CLASS TCM202/21_________________________

ORIGIN

Farid points out, on the other hand, that digital image manipulation is nothing new; on the contrary, it has existed since the 1990s and grew up alongside the evolution of graphics software. What began as visual retouching of imperfections, or even direct changes to what was said, has nevertheless evolved into something far more dangerous, while people’s relationship with images has remained unchanged. In the expert’s view, individuals have always been aware that images can be manipulated and that, in passing from the world through a camera and then through software such as Photoshop, they are still, to a greater or lesser degree, a reproduction of reality. The problem, he says, began when artificial intelligence evolved to the point of creating complete fabrications, which have been put to far-from-positive uses. https://canaltech.com.br/inteligencia-artificial/sociedade-pode-estar-perdendo-a-guerra-contra-os-deep-fakes-alerta-professor-172775/


Pre-deepfakes 

(2014) 

https://www.youtube.com/watch?v=lc9t1jNmtWc

“How Dove Brought Audrey Hepburn Back to Life” (LINK)

HISTORY set20

New “Deepfake” Can Change “Entire Person” https://communalnews.com/new-deepfake-can-change-entire-person/

set20

Deepfakes Are Bad - Deepfakes Are Good. 
https://www.forbes.com/sites/glenngow/2020/09/11/deepfakes-are-baddeepfakes-are-good/#7cc365704ba0

ago20

In the past, audiovisual media have also suffered from the urge to manipulate reality. Hiding certain images, showing some and not others, or laying a voiceover from a particular speech over a scene has been common in democracies and dictatorships alike. And before cinema or television became popular, the static image also played an important role; there too, editing and manipulation were used to hide and reshape reality to suit one side or the other. Let’s look at three examples. https://www.explica.co/the-deepfakes-of-the-20th-century-how-stalin-franco-and-walt-disney-manipulated-reality-using-primitive-methods/

jul20

Another form of AI-generated media is making headlines, one that is harder to detect and yet much more likely to become a pervasive force on the internet: deepfake text. Last month brought the introduction of GPT-3, the next frontier of generative writing: an AI that can produce shockingly human-sounding (if at times surreal) sentences. As its output becomes ever more difficult to distinguish from text produced by humans, one can imagine a future in which the vast majority of the written content we see on the internet is produced by machines. If this were to happen, how would it change the way we react to the content that surrounds us? (...) Generated media, such as deepfaked video or GPT-3 output, is different. If used maliciously, there is no unaltered original, no raw material that could be produced as a basis for comparison or evidence for a fact-check ... But synthetic text—particularly of the kind that’s now being produced—presents a more challenging frontier. It will be easy to generate in high volume, and with fewer tells to enable detection. (...) Wall Street Journal analysis of some of these cases spotted hundreds of thousands of suspicious contributions, identified as such because they contained repeated, long sentences that were unlikely to have been composed spontaneously by different people. If these comments had been generated independently—by an AI, for instance—these manipulation campaigns would have been much harder to smoke out. In the future, deepfake videos and audiofakes may well be used to create distinct, sensational moments that commandeer a press cycle, or to distract from some other, more organic scandal. But undetectable textfakes—masked as regular chatter on Twitter, Facebook, Reddit, and the like—have the potential to be far more subtle, far more prevalent, and far more sinister. https://www.wired.com/story/ai-generated-text-is-the-scariest-deepfake-of-all/ + Forget deepfakes – we should be very worried about AI-generated text. GPT-3 has been trained using millions of pages of text drawn from the Internet and can produce highly credible human writing https://www.telegraph.co.uk/technology/2020/08/26/forget-deepfakes-ai-generated-text-should-worried/


jul20

REALLY??? As Deepfakes Get Better, The Onus Is On Us To Determine Trustworthiness https://www.mediapost.com/publications/article/354286/as-deepfakes-get-better-the-onus-is-on-us-to-dete.html



Jun20

It is yet another episode in the open war between the US president and the social networks. This Friday, Facebook and Twitter found themselves forced to remove from their platforms a video posted by Donald Trump that was deemed misleading, according to CNN. The post was made on the eve of Juneteenth, the oldest known holiday commemorating the end of slavery in the US. https://www.dn.pt/mundo/abraco-ou-fuga-trump-manipulou-video-de-criancas-e-redes-sociais-apagaram-no-12334737.html


jun20
Deepfakes aren’t very good—nor are the tools to detect them https://arstechnica.com/information-technology/2020/06/deepfakes-arent-very-good-nor-are-the-tools-to-detect-them/?comments=1

maio20
Eco-anxiety, deepfake and cancel culture: The new words added to the Macquarie Dictionary - and what they actually mean https://www.dailymail.co.uk/news/article-8367821/Strange-new-words-added-Macquarie-Dictionary-actually-mean.html


maio20
???? In a short space of time, the 23-year-old Frenchman pulled off two feats worthy of note: he sold himself through an initial coin offering (ICO) and is now up for sale as a deepfake model. https://visao.sapo.pt/exameinformatica/noticias-ei/insolitos/2020-05-20-alex-masmej-criptomoeda-deepfake/

mai20
The Washington Post columnist and novelist David Ignatius, who broke the story on Lt. Gen. Michael Flynn’s phone conversation with former Russia Ambassador Sergey Kislyak, discussed his new spy thriller “The Paladin,” whose main character’s life is turned upside down by a ‘deep fake’ media campaign, which interestingly resembles circumstances surrounding what happened to Flynn. https://saraacarter.com/the-washington-post-writer-behind-flynn-leaks-warns-in-new-spy-thriller-about-deep-fakes/


Abr20 The first generation (Deepfakes 1.0) was largely used for entertainment purposes. Videos were modified or made from scratch in the pornography industry and to create spoofs of politicians and celebrities. The next generation (Deepfakes 2.0) is far more convincing and readily available. Deepfakes 2.0 are poised to have profound impacts. According to some technologists and lawyers who specialize in this area, deepfakes pose “an extraordinary threat to the sound functioning of government, foundations of commerce and social fabric.” https://www.justsecurity.org/69677/deepfakes-2-0-the-new-era-of-truth-decay/


Although the first deepfakes were created about five years ago, the term was not coined until 2017, in a Reddit community, and has gradually become popular since then. With the advances made especially in the field of artificial intelligence, it is only this year that, for the first time, videos whose authenticity we may not be able to discern at first glance are being disseminated at a significant scale. LINK
JAN20: When a fake porn video purporting to depict Gal Gadot having sex with her stepbrother surfaced online in December 2017, the reaction was swift and immediate. Vice—the outlet that first reported on the video—was quick to highlight the way that face-swap technology could be used to manufacture a wholly new form of “revenge porn,” one in which victims could find themselves featured in explicit sexual media without ever taking off their clothes in front of a camera. LINK
November 19: deepfake is added to the Collins dictionary LINK

MAR20 When we first saw Gal Gadot in a porn video in 2017, the popular reaction was astonishment followed by unease. It was not really her, but a montage of her face on a porn actress’s body, yet the result was highly believable. Deepfake technology had made its grand entrance, in the most striking way possible. https://www.xataka.com/robotica-e-ia/alla-porno-espanoles-que-usan-deepfakes-para-satira-politica

What they are
A deepfake is altered video content that shows something that didn't actually happen. By definition, deepfakes are produced using deep learning, which is an AI-based technology. Of late, the term deepfake has been used to depict nearly any type of edited video online – from Nancy Pelosi’s slowed speech to a mash-up of Steve Buscemi and Jennifer Lawrence. Given the technical definition, however, the video of Nancy Pelosi does not actually classify as a deepfake but rather simply an altered video, sometimes referred to as “shallow fake.” Although technically different, shallow fakes can cause the same level of potential damage as deepfakes -- the number one risk: disinformation. (LINK)

A deepfake is a video, photo, or audio recording that seems real but has been manipulated with artificial intelligence technologies. https://www.gao.gov/products/gao-20-379sp

AUDIO only https://www.predictiveanalyticsworld.com/machinelearningtimes/with-questionable-copyright-claim-jay-z-orders-deepfake-audio-parodies-off-youtube/11421/ +
https://www.youtube.com/watch?v=zBUDyntqcUY&list=PURt-fquxnij9wDnFJnpPS2Q

jun20 
More than four decades after his death, Franco has spoken again. At least, that is the case in the Spotify podcast XRey, which revisits the life of the monarch Juan Carlos I. Using artificial intelligence, the dictator’s voice appears in it reciting passages that were never recorded during his lifetime. https://hipertextual.com/2020/06/spotify-franco-deepfake-voz-podcast

jun20
These amazing audio deepfakes showcase progress of A.I. speech synthesis. https://www.digitaltrends.com/news/best-audio-deepfakes-web/

set20
He says the deep fake audio clip was convincing enough to trick people closest to him, including his wife. https://www.wbur.org/hereandnow/2020/09/28/deep-fake-video-audio

Cheapfakes
Researchers say the Pelosi video is an example of a “cheapfake” video, one that has been altered but not with sophisticated AI like in a deepfake. Cheapfakes are much easier to create and are more prevalent than deepfakes, which have yet to really take off, said Samuel Woolley, director of propaganda research at the Center for Media Engagement at the University of Texas. LINK
FEV20 An altered video, without AI, is not a deepfake. On Thursday, Bloomberg’s 2020 presidential campaign posted a video to Twitter that was edited to make it appear as though there was a long, embarrassing silence from Bloomberg’s Democratic opponents after he mentioned that he was the only candidate to have ever started a business during Wednesday night’s debate. Candidates like Sens. Bernie Sanders (I-VT), Elizabeth Warren (D-MA), and former South Bend, Indiana mayor Pete Buttigieg are shown searching for the words to respond to Bloomberg’s challenge. Twitter told The Verge that the video would likely be labeled as manipulated media under the platform’s new deepfakes policy that officially goes into place on March 5th. However, Facebook spokesperson Andy Stone confirmed on Twitter that the same video would not violate the platform’s deepfakes rules if it were posted to Facebook or Instagram. Facebook’s policy “does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words”, likely not affecting videos like Bloomberg’s. A video must also be created with an artificial intelligence or machine learning algorithm to trigger a removal. LINK
Mar20 Twitter uses deepfake alert for the first time on a video shared by Trump. The video showed part of a speech by Democratic candidate Joe Biden but had been edited to mislead viewers about what he said https://www.telegraph.co.uk/technology/2020/03/09/twitter-uses-deepfake-alert-first-time-video-shared-us-president/ RISKS: https://theblast.com/119699/how-soon-until-manipulated-media-becomes-deepfake-video REFLECTIONS: https://news.umich.edu/cheap-fake-video-making-the-rounds-today-likely-wont-be-the-last/
More on CHEAPFAKES: A video shared by President Donald Trump that was edited to make it appear presidential candidate Joe Biden was endorsing his re-election during a campaign rally Saturday was deemed manipulated content by Twitter—a first for the social media company. But Facebook did nothing to flag the video as false content. Biden had stumbled over some words and the video stopped short of including his correction. The video is the latest cheap fake to raise controversy in recent weeks. University of Michigan School of Information Professor Clifford Lampe explains cheap fakes and the difficulty in getting the platforms to police them. LINK
Mar20 even if deepfakes never proliferate in the public domain, the world has nevertheless been upended by 'cheapfakes' - a term that refers to more rudimentary image manipulation methods such as photoshopping, rebroadcasting, speeding and slowing video, and other relatively unsophisticated techniques. Cheapfakes have already been the main tool in the proliferation of disinformation and online fraud, which have had significant impacts on businesses and society. https://www.weforum.org/agenda/2020/03/how-to-make-better-decisions-in-the-deepfake-era/
The concept of deepfakes: necessarily video, or not?
·         JAN20 Researchers at the International Institute of Information Technology in Hyderabad, India, have developed an artificial intelligence system capable of creating deepfake videos translated into different languages. This is not just a matter of “audio” – that is, having the subject speak English first and then Spanish – the software uses artificial intelligence to emulate the movement of the lips and produce a more realistic result. LINK
25/11/2019 THE CONCEPT and how it has evolved: What are deepfakes? Misinformation videos becoming more ‘powerful, precise’ https://globalnews.ca/news/5382150/deepfakes-shallow-fakes-misinformation/
One troubling issue with deepfakes is simply determining what is a deepfake and what is just edited video. In many cases deepfakes are built by utilizing the latest technology to edit or manipulate video. News outlets regularly edit interviews, press conferences and other events when crafting news stories, as a way to highlight certain elements and get juicy sound bites. LINK

Mar20 “deepfakes” — the nickname for computer-generated, photorealistic media created via cutting-edge artificial intelligence technology. LINK
MAR20 AUDIO Audio deepfakes are on the rise as well. Although still somewhat detectable, the technology continues to improve. What’s scarier is that this technology is looking more disruptive in the context of intellectual property and privacy law than you may think. If you think I am being alarmist, think again. According to Siwei Lyu, director of SUNY Albany’s machine learning lab, as quoted in Axios, “having a voice [that mimics] an individual and can speak any words we want it to speak” will be a reality in a couple of years. Realistic audio deepfakes are not something on the horizon — they are on the doorstep. In this political season it is easy to see how such deepfakes may be used. For example, it’s not hard to imagine deep fake audio of Bernie Sanders’ voice designed to erode his primary chances, or audio attributed to President Trump that has been pieced together from his numerous interviews and appearances (like this) designed to disrupt and damage his 2020 presidential re-election campaign. LINK


Types:

Types of deepfake: faceswap, deepnude and lipsync

The report identifies three main types of deepfake among the videos circulating on the web. The most common, and the one that predominates in pornographic videos, is the faceswap, which replaces someone’s face with another, usually that of a famous person. Lipsync, on the other hand, appears more in satirical videos and does not involve swapping the face; instead it interferes with the way the subject’s mouth moves, to make it seem they are saying something different from what was said in the original video. https://www.techtudo.com.br/listas/2019/10/96percent-dos-videos-de-deepfake-tem-conteudo-pornografico-veja-sete-fatos.ghtml






It no longer swaps only humans for humans, but humans for animals https://freenews.live/ai-turns-humans-into-animals-and-animals-into-humans/
