Thursday, November 18, 2021

Fighting back - NEVER TRUST

jun23
Reasons To Doubt Political Deepfakes
Although deepfakes are conventionally regarded as dangerous, we know little about how deepfakes
are perceived, and which potential motivations drive doubt in the believability of deepfakes
versus authentic videos. To better understand the audience’s perceptions of deepfakes, we ran an
online experiment (N=829) in which participants were randomly exposed to a politician’s textual
or audio-visual authentic speech or a textual or audio-visual manipulation (a deepfake) where this
politician’s speech was forged to include a radical right-wing populist narrative. In response to
both textual disinformation and deepfakes, we inductively assessed (1) the perceived motivations
for expressed doubt and uncertainty in response to disinformation and (2) the accuracy of such
judgments. Key findings show that participants have a hard time distinguishing a deepfake from a
related authentic video, and that the deepfake’s content distance from reality is a more likely cause
for doubt than perceived technological glitches. Together, we offer new insights into news users’
abilities to distinguish deepfakes from authentic news, which may inform (targeted) media literacy
interventions promoting accurate verification skills among the audience.
DOI: 10.1177/02673231231184703



out22

New research from New York University adds to the growing indications that we may soon have to take the deepfake equivalent of a ‘drunk test’ in order to authenticate ourselves, before commencing a sensitive video call – such as a work-related videoconference, or any other sensitive scenario that may attract fraudsters using real-time deepfake streaming software.

https://www.unite.ai/gotcha-a-captcha-system-for-live-deepfakes/


jan22

People can't distinguish deepfakes from real videos, even if they are warned about their existence in advance, The Independent has reported, citing a study conducted by the University of Oxford and Brown University. One group of participants watched five real videos, and another watched four real videos with one deepfake, after which viewers were asked to identify which one was not real. https://sputniknews.com/20220115/people-cant-distinguish-deepfake-from-real-videos-even-if-warned-in-advance-study-says-1092275613.html

nov21

“How do we respond? We have to be a little skeptical,” Littman said. “We need additional proof. I think that’s where we need to get to with imagery as well—a picture’s not sufficient anymore to be convincing. I don’t think that’s new, we just have to start treating something we found very trustworthy as less trustworthy. It joins all the other stuff we have to stop taking at face value.”

https://brownpoliticalreview.org/2021/11/hunters-laptop-deepfakes-and-the-arbitration-of-truth/


nov21

IS REPORTING IT WORSE??

To make things worse, as discussed in Whitney Phillips’ “The Oxygen of Amplification,” merely reporting on false claims and fake news — with the intention of proving them baseless — amplifies the original message and helps their distribution to the masses. And now we have technology that allows us to create deepfakes relatively easily, without any need for writing code. A low bar to use the tech, methods to distribute, a method of monetizing — the cybercrime-cycle pattern reemerges. https://urgentcomm.com/2021/11/18/how-to-navigate-the-mitigation-of-deepfakes/

nov21

How do you mitigate such a threat? Perhaps we should consider the fundamental concepts from zero trust — never trust, always verify, and assume there's been a breach. I have been using these concepts when dealing with videos I see in different online media; they offer a more condensed version of some of the core concepts of critical thinking, such as challenging assumptions, suspending immediate judgment, and revising conclusions based on new data.

In the world of network security, assuming a breach means you must assume the attacker is already in your network. The attacker might have gotten in via a vulnerability that already has been patched but was able to establish persistency on the network. Maybe it is an insider threat — intentionally or not. You need to assume there is malicious activity conducted covertly on your network. https://www.darkreading.com/attacks-breaches/how-to-navigate-the-mitigation-of-deepfakes

Monday, November 15, 2021

The pervert’s dilemma

jun23

APRIL 10 WAS a very bad day in the life of celebrity gamer and YouTuber Atrioc (Brandon Ewing). Ewing was broadcasting one of his usual Twitch livestreams when his browser window was accidentally exposed to his audience. During those few moments, viewers were suddenly face-to-face with what appeared to be deepfake porn videos featuring female YouTubers and gamers QTCinderella and Pokimane—colleagues and, to my understanding, Ewing’s friends. Moments later, a quick-witted viewer uploaded a screenshot of the scene to Reddit, and thus the scandal was a fact. 

https://www.wired.com/story/deepfakes-porn-philosophy-sexual-fantasy/


ma23

The convergence of deepfakes and the Mandela effect raises questions about the trust we place in our memories and in digital media. Deepfakes can be used to alter historical events and sow confusion among the masses, thereby reinforcing the Mandela effect. The combination of these two phenomena could lead to a society in which it becomes ever harder to distinguish reality from fiction.

https://www.thmmagazine.fr/deepfakes-et-effet-mandela-lart-de-la-manipulation-numerique/


mai23

Musk's lawyers contested the plaintiffs' allegations with a new strategy that is worrying not only judges but the entire legal community in the United States: the so-called "deepfake defense."

This is a new type of defense that consists of claiming that a genuine video (or audio recording) is fake, on the grounds that it was doctored with deepfake technology, which is enabled by artificial intelligence (AI).

"Musk, like any public figure, is subject to many deepfakes, both videos and audio recordings, intended to show him saying or doing things he never said or did," reads the petition filed by Musk's lawyers.

They argued that, thanks to advances in artificial intelligence, it is easier than ever to create images and videos of things that do not exist or of events that never happened. Digital forgery is being used to spread disinformation and propaganda, impersonate celebrities and politicians, manipulate elections, and run scams, they said.

These arguments did not convince the judge. On the contrary, she wrote, they are deeply troubling.

"The position is that, because Musk is famous and is a target of deepfakes, his public statements are immune. In other words, Musk and others in his position can simply say whatever they like in the public domain, then avoid responsibility by claiming they were the victim of a deepfake. This court is unwilling to set a precedent by condoning Tesla's approach."

This is not the first such case. At the trial of two of the January 6, 2021 Capitol rioters, among them the defendant regarded as the "insurrectionist leader," Guy Reffitt, defense lawyers argued that the videos used in the investigation might have been created or manipulated by artificial intelligence.

Professional ethics
Hany Farid, a digital forensics expert and professor at the University of California, agrees that there is a worrying new phenomenon: "As this type of technology becomes more prevalent, it will become easy to claim that anything is fake."

https://www.conjur.com.br/2023-mai-13/ia-cria-problema-justica-eua-defesa-deepfake

 



There is certainly much to be said about Deepfakes, both from a political, legal and ethical point of view. In this essay, however, I shall focus only on a specific moral dilemma that arises from the phenomenon, which I shall refer to as the pervert's dilemma, for lack of a better term. Although Deepfake pornography (henceforth just "Deepfakes") strikes most people as intuitively disturbing and immoral — recall that several sites (e.g., Reddit, Pornhub, etc.) pre-emptively banned Deepfakes — it seems difficult to justify this intuition without simultaneously disapproving of other actions not normally considered harmful. For instance, we may again compare Deepfakes to sexual fantasies. Both fantasies and Deepfakes are arguably no more than a virtual image generated by informational input that is publicly available, and thus it is hard to identify a quality that makes the former more permissible than the latter. Yet, although certain sexual fantasies can be deemed impermissible due to the grotesque or violent nature of their content (more on this below), they are not normally considered unethical per se.


Herein lies the dilemma, which can now be fully articulated thus:

1. Creating pornographic Deepfake videos based on someone's face (without their explicit consent) is morally impermissible.

2. Having private sexual fantasies about someone (without their explicit consent) is per se normally morally permissible.

3. Under conditions (i) and (ii), there is no morally relevant difference between creating a Deepfake video based on someone's face and having a private sexual fantasy about someone.


My analysis suggests that when the pervert’s dilemma is considered on a high LoA—i.e., as isolated cases unrelated to other processes in society—there is no reason why Deepfakes should be deemed more morally impermissible than sexual fantasies.

However, when the dilemma is considered on a low LoA— i.e., when we consider the truly morally relevant information— the Deepfake phenomenon can be considered morally impermissible on the basis of its role in gender inequality.

The consumption of Deepfakes is undeniably a highly gendered phenomenon, and arguably plays a role in the social degradation of women in society. Sexual fantasies are not.

Carl Öhman, "Introducing the pervert's dilemma: a contribution to the critique of Deepfake Pornography," Ethics and Information Technology (2020) 22:133–140. Original paper, published online 19 November 2019, © The Author(s) 2019. https://doi.org/10.1007/s10676-019-09522-1

Thursday, November 4, 2021

HOW TO DEFEND AGAINST A DEEPFAKE

ma22

Performing artists push for copyright protection from AI deepfakes

https://www.reuters.com/legal/litigation/performing-artists-push-copyright-protection-ai-deepfakes-2022-05-18/



nov21
Instagram account taken over by imposter who posted deepfake video of Tampa man. LINK
nov21
If someone makes a deepfake of Boris Johnson using inflammatory language, the Prime Minister has the channels and reach to refute it. You and I do not have that luxury. If someone makes a deepfake of us doing the same, and releases it on Twitter at 9am, we could lose our friends and jobs by noon. A false accusation on Twitter will travel around the world a thousand times in the time it takes the truth to travel a mile. And even if the belated truth does manage to make it around the world, it will never reach everyone who last knew you for the “you” in the deepfake. Video doesn’t lie, right? So why follow updates about the event any more? https://www.telegraph.co.uk/news/2021/10/30/deepfaked-committing-crime-should-worried/ 

Tuesday, November 2, 2021

BOOKS

 fev22

MUSEUM

The film now serves as the centerpiece of Deepfake: Unstable Evidence on Screen, a new exhibit at the Museum of the Moving Image that explores and contextualizes deepfakes: synthetic media videos in which a real-life person appears to say or do something they haven’t actually said or done, fashioned with the use of artificial intelligence. (...) “We use Nixon's resignation speech as the original video that then gets manipulated,” said co-director Panetta. “The emotion in Nixon's face, all of the original body language, the page turning: all of that really is real. But we have overlaid it, manipulated it, with another very emotional speech” — specifically, an actual script written for the president by William Safire, in case the astronauts died before they could return to Earth. https://gothamist.com/arts-entertainment/exhibit-how-scary-are-deepfakes

At the Museum of the Moving Image in Astoria, Queens, the film, presented on an older model television set in a period-appropriate living room, serves as the centerpiece of a fascinating, timely, and unsettling exhibition “Deepfake: Unstable Evidence on Screen.” The show explores the phenomenon of “deepfake” videos, which use advanced artificial intelligence and machine learning to create deceptive content, and how they are used to manipulate audiences and perpetuate misinformation or propaganda.

https://news.artnet.com/art-world/deepfakes-museum-of-the-moving-image-2068655


Avatars, Activism and Postdigital Performance: Precarious Intermedial Identities


Trust No One: Inside the World of Deepfakes by Michael Grothaus (RRP £18.99). Buy now for £16.99 at books.telegraph.co.uk or call 0844 871 1514