nov23
“There’s a video of Gal Gadot having sex with her stepbrother on the internet.” With that sentence, written by the journalist Samantha Cole for the tech site Motherboard in December, 2017, a queasy new chapter in our cultural history opened. A programmer calling himself “deepfakes” told Cole that he’d used artificial intelligence to insert Gadot’s face into a pornographic video. And he’d made others: clips altered to feature Aubrey Plaza, Scarlett Johansson, Maisie Williams, and Taylor Swift.
This is the smirking milieu from which deepfakes emerged. The Gadot clip that the journalist Samantha Cole wrote about was posted to a Reddit forum, r/doppelbangher, dedicated to Photoshopping celebrities’ faces onto naked women’s bodies. This is still, Cole observes, what deepfake technology is overwhelmingly used for. Although the technology can depict anything imaginable, people mostly just want to see famous women having sex. A review of nearly fifteen thousand deepfake videos online revealed that ninety-six per cent were pornographic. These clips are made without the consent of the celebrities whose faces appear or the performers whose bodies do. Yet getting them removed is impossible, because, as Scarlett Johansson has explained, “the Internet is a vast wormhole of darkness that eats itself.”
https://www.newyorker.com/magazine/2023/11/20/a-history-of-fake-things-on-the-internet-walter-j-scheirer-book-review
jul23 (ABRIL210)
The Reddit phrase doesn’t come out of nowhere...
The origin story of deepfakes goes back to 1997, when Bregler et al. presented the Video Rewrite program at a conference on computer graphics and interactive techniques. The researchers showed how to modify existing video footage of a person speaking to match a different audio track.
The concept itself wasn’t new (photo manipulation goes back to the 19th century), but this seemingly insignificant study on altering video content was the first of its kind to fully automate facial re-animation, with the help of machine learning algorithms. It was a major milestone and quite possibly the real starting point of deepfake video development around the globe.
However, it wasn’t until years later that the concept of deepfakes really started to catch on. As with many new technologies, mass adoption came quite a bit later than the invention. First came the Face2Face program in 2016, in which Thies et al. showcased how real-time face capture technology could be used to re-enact videos in a realistic way.
It wasn’t until July 2017 that more people became interested in the applications of deep learning to media, when we were introduced to a highly realistic deepfake video featuring former US President Barack Obama. The study by Suwajanakorn et al. showed for the first time how audio could be lip-synced onto a politician in a frighteningly realistic way, a dangerous precedent that could potentially transform the course of politics.
After that, things moved very quickly. Adoption of the technology by the masses exploded, as more and more deepfake videos and similar applications of the machine learning technique began to appear around the globe. The Obama deepfake was a popular one, and a year later it was revisited in the now infamous BuzzFeed YouTube video titled “You Won’t Believe What Obama Says In This Video!”:
https://deepfakenow.com/history-deepfake-technology-how-deepfakes-started/
nov22
Synthetic video: don’t call them deepfakes [given the negative charge the word “deepfakes” carries, whether or not to use the term is not a neutral choice. If I call it a deepfake, can I end up with a positive signifier, or will suspicion set in immediately?] This article addresses that confusion and the misconceptions it has created.
jul22
Researchers Found A Way To Animate High Resolution Deepfakes From A Single Photo, And It's Unsettling
https://petapixel.com/2022/07/22/megaportraits-high-res-deepfakes-created-from-a-single-photo/
https://digg.com/video/high-resolution-deepfakes-from-a-single-photo-and-its-unsettling
https://finance.yahoo.com/news/interrupt-broadcast-host-bill-kurtis-165700861.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAE4xP9kO2tigeGtjBkv14JWX_w8I-KMzGpdUd4vlBy_9tTLrRrWzqYOUUsMWYjYGVfqSByoBV9J4aNOP2ZLxwQglNVW1kVlIgni-TQMHUgMi6DUKm0XSaZg4VM3xUnwFHCA2MrW2fB1wrWBrnfIn7kv0HfB47uL1CdUv5AU35JUg
Deepfakes are often powered by an innovation in deep learning known as a “generative adversarial network,” or GAN. GANs deploy two competing neural networks in a kind of game that relentlessly drives the system to produce ever higher quality simulated media. For example, a GAN designed to produce fake photographs would include two integrated deep neural networks. The first network, called the “generator,” produces fabricated images. The second network, which is trained on a dataset consisting of real photographs, is called the “discriminator,” and it attempts to distinguish real photographs from the generator’s fakes. As the two compete, the generator learns to produce images the discriminator can no longer tell apart from real ones. This technique produces astonishingly impressive fabricated images. Search the internet for “GAN fake faces” and you’ll find numerous examples of high-resolution images that portray nonexistent individuals. https://www.marketwatch.com/story/deepfakes-a-dark-side-of-artificial-intelligence-will-make-fiction-nearly-indistinguishable-from-truth-11632834440
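The generator/discriminator game described above can be sketched in a toy numpy example. This is an illustrative assumption on my part, not how real deepfake GANs are built: the “images” are single numbers drawn from a 1-D Gaussian, and both networks are stripped down to a couple of parameters each so the adversarial updates stay readable.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real photographs": samples from a 1-D Gaussian centred at 4.0.
def real_batch(n):
    return rng.normal(4.0, 1.0, n)

# Generator: G(z) = a*z + b with noise z ~ N(0, 1). Starts far from the real data.
g = {"a": 1.0, "b": 0.0}
def generate(n):
    z = rng.normal(0.0, 1.0, n)
    return g["a"] * z + g["b"], z

# Discriminator: logistic model D(x) = sigmoid(w*x + c),
# trained to output 1 for real samples and 0 for fakes.
d = {"w": 0.0, "c": 0.0}
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))
def discriminate(x):
    return sigmoid(d["w"] * x + d["c"])

lr, n = 0.02, 64
for step in range(5000):
    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)).
    xr = real_batch(n)
    xf, _ = generate(n)
    pr, pf = discriminate(xr), discriminate(xf)
    d["w"] += lr * (np.mean((1 - pr) * xr) - np.mean(pf * xf))
    d["c"] += lr * (np.mean(1 - pr) - np.mean(pf))

    # Generator step: gradient ascent on log D(fake), i.e. nudge a, b to fool D.
    xf, z = generate(n)
    pf = discriminate(xf)
    g["a"] += lr * np.mean((1 - pf) * d["w"] * z)
    g["b"] += lr * np.mean((1 - pf) * d["w"])

# After training, the generator's offset b should have drifted toward the
# real data's mean (4.0), without ever seeing a real sample directly.
print(f"learned offset b = {g['b']:.2f}, real mean = 4.0")
```

The point of the sketch is the structure, not the numbers: neither network is told what the real distribution is; the generator improves only because the discriminator keeps punishing its failures, which is exactly the pressure that drives GAN image quality upward.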
Dubbed TextStyleBrush, this new system uses AI to recognize each user’s handwriting style from a single sample word. Once that is done, the system can replicate any desired text in that handwriting, producing genuinely surprising results.
Basically, we can think of this technology as a “deepfake” for writing, which can be seen as either a good or a bad thing. The system is not limited to a single handwriting style; it can also reproduce fonts.
For example, if you have text in a given font that you want to apply to other content, TextStyleBrush can recreate it, even if the font is digital rather than handwritten.
The Girlfriend Experience Season 3, Episode 3 recap: Deep Fake https://showsnob.com/2021/05/09/the-girlfriend-experience-season-3-episode-3-recap-deep-fake/
IS IT A DEEPFAKE OR NOT? Hour One’s promise to customers is that, after a relatively quick onboarding process, the company’s artificial intelligence can create a fully digital version of you, with the ability to say and do whatever you want it to. The company has partnered with YouTuber Taryn Southern to show off the tech’s capabilities. The biggest takeaway from this inside look is that Hour One’s clones are not deepfakes. A deepfake is created by manipulating an image to fabricate the likeness of a person, usually on top of existing video footage. Hour One’s clones, in comparison, require studio time to capture a person’s appearance and voice... and, therefore, consent. Southern says she stood in front of a green screen for about seven minutes, read a few scripts, and sang a song. This difference is noteworthy in that this capture process allows for a much fuller “cloning” process. Hour One can now feed just about any script into its program and create a video where it appears that Southern is actually reading it. There’s also an extra layer of consent involved — deepfakes are often made without the subject’s approval, but that’s not possible with Hour One’s technology. https://www.inputmag.com/tech/were-begging-you-to-not-turn-yourself-into-ai-powered-clone
VERY GOOD
https://news.yahoo.com/celebrity-deepfakes-shockingly-realistic-204210734.html
fev21
PRE-DEEPFAKES New Hampshire faced a variation of this issue 16 years ago in a story I covered extensively. It involved a summer camp for girls, whose official photographer used Photoshop to put faces of teenage campers younger than 16 onto the bodies of adult women in pornography pictures. These “morphed” pictures, which he called “personal fantasies,” were discovered by accident when he didn’t remove them from CD-ROMs of official camp photos. He was arrested and convicted of possessing child pornography.
https://granitegeek.concordmonitor.com/2021/02/14/n-h-wrestled-with-deepfake-pornography-long-before-the-tech-existed/
dez20
In 2018, Sam Cole, a reporter at Motherboard, discovered a new and disturbing corner of the internet. A Reddit user by the name of “deepfakes” was posting nonconsensual fake porn videos using an AI algorithm to swap celebrities’ faces into real porn. Cole sounded the alarm on the phenomenon, right as the technology was about to explode. A year later, deepfake porn had spread far beyond Reddit, with easily accessible apps that could “strip” clothes off any woman photographed.
https://www.technologyreview.com/2020/12/24/1015380/best-ai-deepfakes-of-2020/
on cheapfakes in 2020
https://www.technologyreview.com/2020/12/22/1015442/cheapfakes-more-political-damage-2020-election-than-deepfakes/
Is it correct to use the designations deepfake video, deepfake audio, etc., given the various types of deepfakes that exist?
We can also use the term synthetic media.
nov20
The New York Times this week did something dangerous for its reputation as the nation’s paper of record. Its staff played with a deepfake algorithm, and posted online hundreds of photorealistic images of non-existent people. For those who fear democracy being subverted by the media, the article will only confirm their conspiracy theories. The original biometric — a face — can be created in as much uniqueness and diversity as nature can, and with much less effort. https://www.biometricupdate.com/202011/deepfakes-the-times-wows-the-senate-punts-and-asia-worries + https://www.nytimes.com/interactive/2020/11/21/science/artificial-intelligence-fake-people-faces.html
nov20
A GOOD EXPLANATION
Essentially, artificial intelligence is a form of technology that makes computers behave in ways that could be considered distinctly human, such as being able to reason and adapt. One common kind of artificial intelligence is machine learning, which focuses on using algorithms that can improve their performance through exposure to more data. Deepfakes are created using a branch of machine learning called deep learning. A deep learning program uses many layers of algorithms to create structures called neural networks, inspired by the structure of the human brain. The neural networks aim to recognize underlying relationships in a set of data. https://www.mironline.ca/what-the-rise-of-deepfakes-means-for-the-future-of-internet-policies/
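The “many layers of algorithms” idea can be made concrete with a minimal sketch. The weights below are hand-picked for illustration (my own assumption, not from the article): two stacked layers recognize a relationship in the data (XOR) that no single layer on its own can capture.

```python
import numpy as np

def step(x):
    # Simple threshold activation: fires (1.0) when the input is positive.
    return (x > 0).astype(float)

def layer(x, W, b):
    # One "layer of algorithms": a weighted sum followed by the activation.
    return step(x @ W + b)

# Hidden layer: two units, one acting as an OR detector, one as an AND detector.
W1 = np.array([[1.0, 1.0],
               [1.0, 1.0]])
b1 = np.array([-0.5, -1.5])

# Output layer: fires when OR is true but AND is not, i.e. exactly XOR.
W2 = np.array([[1.0], [-1.0]])
b2 = np.array([-0.5])

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
h = layer(X, W1, b1)   # first layer extracts intermediate features
y = layer(h, W2, b2)   # second layer combines them into the relationship
print(y.ravel())       # XOR truth table: 0, 1, 1, 0
```

In a real deep learning system the weights are learned from data rather than set by hand, and there are many more layers and units, but the principle is the same: each layer builds on the relationships found by the one before it.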
out20
a voice cannot be copyrighted. Midler v. Ford Motor Co. proclaimed that “A voice is as distinctive and personal as a face. The human voice is one of the most palpable ways identity is manifested.” The court held that not every instance of commercial usage of another’s voice is a violation of law; specifically, a person whose voice is recognizable and widely known receives protection under the law through their right of publicity as protection from invasion of privacy by appropriation. This protects public figures and celebrities from their identities being misappropriated and potentially misused for commercial gain. The right of publicity is the celebrity’s analog for the common person’s right of privacy; both are protected with different underlying motivations—one is to allow a celebrity alone to capitalize on their fame, and another is to allow a private person to remain private. https://www.ipwatchdog.com/2020/10/14/voices-copyrighting-deepfakes/id=126232/
out20
After deepfakes, a new frontier of AI trickery: fake faces. Fake faces have already been identified in bot campaigns in China and Russia, as well as in right-wing online media and supposedly legitimate businesses. Their proliferation has raised fears that the technology poses a more pervasive and pressing threat than deepfakes, as online platforms face a growing wave of disinformation ahead of the US election. A report by Graphika and the Atlantic Council’s Digital Forensic Research Lab on fake identities shows telltale signs that Alfonzo Macias’ profile picture is a fake. A website called This Person Does Not Exist publishes a near-infinite number of fake faces online, created in real time every time you refresh the page. “A year ago, this was a novelty,” says Ben Nimmo, director of investigations at social media intelligence group Graphika. “Now it feels like every operation we analyse tries this at least once.” With GAN technology available across the web, it’s impossible to be certain that the stranger you’re talking to on Facebook or Twitter has ever existed. https://www.usa-vision.com/after-deepfakes-a-new-frontier-of-ai-trickery-fake-faces/ + https://www.dailystar.co.uk/news/latest-news/ai-deepfakes-creating-fake-humans-22840694
On Friday, the US network NBC published an investigation highlighting the many questions about the identity and sources behind the 64-page document. For instance, the network found that the person presented as its author, a supposed Swiss analyst named Martin Aspen, did not exist: his identity was invented and his photo generated by software. https://br.noticias.yahoo.com/magnata-hong-kong-n%C3%A3o-tem-114645502.html
============================CLASS TCM202/21_________________________
ORIGIN
Farid points out, on the other hand, that digital image manipulation is nothing new; on the contrary, it has existed since the 1990s, emerging alongside the evolution of graphics software. What began as visual retouching of imperfections, or even direct alterations to a speech, has nonetheless evolved into something far more dangerous, while people’s relationship with images has remained unchanged. In the expert’s view, individuals have always been aware that images can be manipulated and that, in passing from the world through a camera and then through software such as Photoshop, they are still, to a greater or lesser degree, a reproduction of reality. The problem, he says, began when artificial intelligence evolved to the point of creating complete fabrications, which have been put to far-from-positive uses. https://canaltech.com.br/inteligencia-artificial/sociedade-pode-estar-perdendo-a-guerra-contra-os-deep-fakes-alerta-professor-172775/
Pre-deepfakes
(2014)
“How Dove Brought Audrey Hepburn Back to Life”: https://www.youtube.com/watch?v=lc9t1jNmtWc
HISTORY set20
New “Deepfake” Can Change “Entire Person” https://communalnews.com/new-deepfake-can-change-entire-person/
set20
Deepfakes Are Bad - Deepfakes Are Good
ago20
In the past, audiovisual media has also been subject to the urge to manipulate reality. Hiding certain images, showing some and not others, or pairing a scene with a voiceover delivering a particular speech has been common in democracies and dictatorships alike. And before cinema or television became popular, the static image also played an important role; it, too, passed through the hands of editing and manipulation to hide and reshape reality to the liking of one side or the other. Let’s look at three examples. https://www.explica.co/the-deepfakes-of-the-20th-century-how-stalin-franco-and-walt-disney-manipulated-reality-using-primitive-methods/
jul20
Another form of AI-generated media is making headlines, one that is harder to detect and yet much more likely to become a pervasive force on the internet: deepfake text. Last month brought the introduction of GPT-3, the next frontier of generative writing: an AI that can produce shockingly human-sounding (if at times surreal) sentences. As its output becomes ever more difficult to distinguish from text produced by humans, one can imagine a future in which the vast majority of the written content we see on the internet is produced by machines. If this were to happen, how would it change the way we react to the content that surrounds us? (...) Generated media, such as deepfaked video or GPT-3 output, is different. If used maliciously, there is no unaltered original, no raw material that could be produced as a basis for comparison or evidence for a fact-check ... But synthetic text—particularly of the kind that’s now being produced—presents a more challenging frontier. It will be easy to generate in high volume, and with fewer tells to enable detection. (...) A Wall Street Journal analysis of some of these cases spotted hundreds of thousands of suspicious contributions, identified as such because they contained repeated, long sentences that were unlikely to have been composed spontaneously by different people. If these comments had been generated independently—by an AI, for instance—these manipulation campaigns would have been much harder to smoke out. In the future, deepfake videos and audiofakes may well be used to create distinct, sensational moments that commandeer a press cycle, or to distract from some other, more organic scandal. But undetectable textfakes—masked as regular chatter on Twitter, Facebook, Reddit, and the like—have the potential to be far more subtle, far more prevalent, and far more sinister. https://www.wired.com/story/ai-generated-text-is-the-scariest-deepfake-of-all/ + Forget deepfakes – we should be very worried about AI-generated text.
GPT-3 has been trained using millions of pages of text drawn from the Internet and can produce highly credible human writing https://www.telegraph.co.uk/technology/2020/08/26/forget-deepfakes-ai-generated-text-should-worried/
jul20
REALLY??? As Deepfakes Get Better, The Onus Is On Us To Determine Trustworthiness https://www.mediapost.com/publications/article/354286/as-deepfakes-get-better-the-onus-is-on-us-to-dete.html
Jun20
It is one more episode in the open war between the US president and the social networks. This Friday, Facebook and Twitter were forced to remove from their platforms a video posted by Donald Trump that was deemed misleading, according to CNN. The post was made on the eve of Juneteenth, the oldest known holiday commemorating the end of slavery in the US. https://www.dn.pt/mundo/abraco-ou-fuga-trump-manipulou-video-de-criancas-e-redes-sociais-apagaram-no-12334737.html
Deepfakes aren’t very good—nor are the tools to detect them https://arstechnica.com/information-technology/2020/06/deepfakes-arent-very-good-nor-are-the-tools-to-detect-them/?comments=1
AUDIO only https://www.predictiveanalyticsworld.com/machinelearningtimes/with-questionable-copyright-claim-jay-z-orders-deepfake-audio-parodies-off-youtube/11421/ +
https://www.youtube.com/watch?v=zBUDyntqcUY&list=PURt-fquxnij9wDnFJnpPS2Q
jun20 More than four decades after his death, Franco has spoken again. At least, that is what happens in the Spotify podcast XRey, which revisits the life of the monarch Juan Carlos I. Using artificial intelligence, the dictator’s voice appears in it reciting passages that were never recorded while he was alive. https://hipertextual.com/2020/06/spotify-franco-deepfake-voz-podcast
jun20
Cheapfakes
Types of deepfake: faceswap, deepnude, and lipsync
It no longer swaps only humans for humans, but humans for animals https://freenews.live/ai-turns-humans-into-animals-and-animals-into-humans/