Thursday, November 18, 2021

Fighting Back - NEVER TRUST

jun23
Reasons To Doubt Political Deepfakes
Although deepfakes are conventionally regarded as dangerous, we know little about how deepfakes
are perceived, and which potential motivations drive doubt in the believability of deepfakes
versus authentic videos. To better understand the audience’s perceptions of deepfakes, we ran an
online experiment (N=829) in which participants were randomly exposed to a politician’s textual
or audio-visual authentic speech or a textual or audio-visual manipulation (a deepfake) where this
politician’s speech was forged to include a radical right-wing populist narrative. In response to
both textual disinformation and deepfakes, we inductively assessed (1) the perceived motivations
for expressed doubt and uncertainty in response to disinformation and (2) the accuracy of such
judgments. Key findings show that participants have a hard time distinguishing a deepfake from a
related authentic video, and that the deepfake’s content distance from reality is a more likely cause
for doubt than perceived technological glitches. Together, we offer new insights into news users’
abilities to distinguish deepfakes from authentic news, which may inform (targeted) media literacy
interventions promoting accurate verification skills among the audience.
DOI: 10.1177/02673231231184703



oct22

New research from New York University adds to the growing indications that we may soon have to take the deepfake equivalent of a ‘drunk test’ in order to authenticate ourselves, before commencing a sensitive video call – such as a work-related videoconference, or any other sensitive scenario that may attract fraudsters using real-time deepfake streaming software.

https://www.unite.ai/gotcha-a-captcha-system-for-live-deepfakes/
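The "drunk test" idea boils down to an active challenge-response check: before the call starts, the caller is asked to perform a randomized task that current real-time deepfake pipelines struggle to render convincingly (turning the head sharply, occluding the face, reading a one-time phrase). Below is a minimal sketch of the challenge-issuing side only; the challenge list, function names, and the verification stub are illustrative assumptions, not the system described in the article.

```python
import random
import secrets

# Illustrative pool of active challenges. Systems like the GOTCHA work referenced
# above pick tasks that are hard for live face-swap models to render on the fly.
CHALLENGES = [
    "turn your head 90 degrees to the left and back",
    "cover the lower half of your face with your hand",
    "hold a bright light next to your cheek",
    "read this one-time phrase aloud: {nonce}",
]


def issue_challenge() -> tuple[str, str]:
    """Pick a random challenge and bind it to a one-time nonce so that a
    pre-recorded response cannot simply be replayed."""
    nonce = secrets.token_hex(4)
    prompt = random.choice(CHALLENGES).format(nonce=nonce)
    return prompt, nonce


def verify_response(video_frames: list, prompt: str, nonce: str) -> bool:
    """Placeholder: a real verifier would analyze the frames captured while the
    caller performs the challenge (artifact detection, lip-sync checks, or a
    human reviewer) before admitting them to the call."""
    raise NotImplementedError("plug in a deepfake detector or manual review here")


if __name__ == "__main__":
    prompt, nonce = issue_challenge()
    print(f"Before joining the call, please: {prompt}")
    # The captured response frames would then be passed to verify_response().
```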


jan22

People can't distinguish deepfakes from real videos, even if they are warned about their existence in advance, The Independent has reported, citing a study conducted by the University of Oxford and Brown University. One group of participants watched five real videos, and another watched four real videos with one deepfake, after which viewers were asked to identify which one was not real. https://sputniknews.com/20220115/people-cant-distinguish-deepfake-from-real-videos-even-if-warned-in-advance-study-says-1092275613.html

nov21

“How do we respond? We have to be a little skeptical,” Littman said. “We need additional proof. I think that’s where we need to get to with imagery as well—a picture’s not sufficient anymore to be convincing. I don’t think that’s new, we just have to start treating something we found very trustworthy as less trustworthy. It joins all the other stuff we have to stop taking at face value.”

https://brownpoliticalreview.org/2021/11/hunters-laptop-deepfakes-and-the-arbitration-of-truth/


nov21

IS REPORTING ON IT WORSE??

To make things worse, as discussed in Whitney Phillips’ “The Oxygen of Amplification,” merely reporting on false claims and fake news — with the intention of proving them baseless — amplifies the original message and helps their distribution to the masses. And now we have technology that allows us to create deepfakes relatively easily, without any need for writing code. A low bar to use the tech, methods to distribute, a method of monetizing — the cybercrime-cycle pattern reemerges. https://urgentcomm.com/2021/11/18/how-to-navigate-the-mitigation-of-deepfakes/

nov21

How do you mitigate such a threat? Perhaps we should consider the fundamental concepts from zero trust — never trust, always verify, and assume there's been a breach. I have been using these concepts when dealing with videos I see in different online media; they offer a more condensed version of some of the core concepts of critical thinking, such as challenging assumptions, suspending immediate judgment, and revising conclusions based on new data.

In the world of network security, assuming a breach means you must assume the attacker is already in your network. The attacker might have gotten in via a vulnerability that has since been patched, but was still able to establish persistence on the network. Maybe it is an insider threat — intentionally or not. You need to assume there is malicious activity being conducted covertly on your network. https://www.darkreading.com/attacks-breaches/how-to-navigate-the-mitigation-of-deepfakes
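Read in the video context, "never trust, always verify, assume breach" means defaulting to not amplifying a clip and only upgrading trust as independent evidence accumulates. The sketch below encodes that habit as a simple scoring checklist; the checks, weights, and names are assumptions made for illustration, not an established procedure from the article.

```python
from dataclasses import dataclass, field


@dataclass
class VideoClaim:
    """A video encountered online, plus whatever verification evidence we have."""
    source_url: str
    corroborating_sources: list[str] = field(default_factory=list)
    has_provenance_metadata: bool = False  # e.g. signed capture/provenance info
    matches_known_context: bool = False    # date, place, and speaker check out elsewhere


def trust_assessment(claim: VideoClaim) -> str:
    """'Assume breach': the default verdict is to treat the clip as untrusted."""
    score = 0
    if claim.has_provenance_metadata:
        score += 2
    score += min(len(claim.corroborating_sources), 3)
    if claim.matches_known_context:
        score += 1

    if score >= 4:
        return "plausibly authentic - keep verifying before sharing"
    if score >= 2:
        return "unverified - seek additional independent sources"
    return "untrusted - do not amplify"


if __name__ == "__main__":
    clip = VideoClaim(source_url="https://example.com/viral-clip")
    print(trust_assessment(clip))  # -> "untrusted - do not amplify"
```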
