Monday, December 21, 2020

Fighting with humans

feb24

As online consumers of information, how do we play a role in the perpetuation and proliferation of deepfakes? What sorts of responsibilities do we have, as the problem of misinformation and disinformation gets worse?

As consumers, we have to consider who the source of a video is, whether its content is at all plausible given what we know about the world, and so on. But we also have to ask: what can I, as a consumer of information, do to make sure I’m not misled? If you’re going to be a critical consumer of information, you have to be aware that people can lie to you and can write false information. In essence: you can’t take videos and photographs as the gold standard, because they’re now just as susceptible to manipulation as anything else.

https://news.northeastern.edu/2024/02/12/magazine/ai-deepfake-images-online-deception/

dec23

When German photographer Boris Eldagsen entered a supposedly photographic image in a competition and it ended up winning a Sony World Photography Award, he laid bare a problem that goes beyond the mere formal distinction between what belongs to photography and what belongs to graphic art. The attributes of verisimilitude traditionally attached to photography carried over to the AI-generated image, showing how easily even a panel of photography experts can be fooled. There are, in AI-generated images, characteristics that can give away their origin. The most obvious involve elements or situations far removed from the real world: a large wild animal in a densely urbanized area, for example, may raise suspicion. Light, shadows or reflections that defy the laws of physics are a common inconsistency in AI-produced images, as are distortion or “interruption” of the elements that compose them. Repeating patterns are also a “symptom” of the still-imperfect AI image-generating machine.

https://www.publico.pt/2023/12/12/p3/noticia/fotografia-imagem-gerada-ia-aprende-distinguir-2072804?utm_source=notifications&utm_medium=web&utm_campaign=2072804
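The cues above are aimed at human eyes, but at least one of them, the repetition of patterns, has a crude automated analogue: generative upsampling tends to leave periodic artifacts that push energy into unusual parts of an image’s frequency spectrum. The sketch below only illustrates that idea and is not a method from the article; the file name and the 0.25 band are placeholder assumptions.

import numpy as np
from PIL import Image

def log_spectrum(path: str) -> np.ndarray:
    """Log-magnitude 2D FFT of a grayscale image, low frequencies centered."""
    img = np.asarray(Image.open(path).convert("L"), dtype=np.float64)
    return np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(img))))

def high_freq_energy_ratio(spec: np.ndarray, band: float = 0.25) -> float:
    """Share of the log-magnitude spectrum outside the central low-frequency
    band. An unusually high ratio *may* hint at synthetic upsampling
    artifacts; the band width is an arbitrary illustrative choice."""
    h, w = spec.shape
    cy, cx = h // 2, w // 2
    ry, rx = int(h * band), int(w * band)
    low = spec[cy - ry:cy + ry, cx - rx:cx + rx].sum()
    return float((spec.sum() - low) / spec.sum())

spec = log_spectrum("suspect.jpg")  # placeholder file name
print(f"high-frequency energy ratio: {high_freq_energy_ratio(spec):.3f}")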


do deepfakes blink?

Deepfake technology has emerged as a major concern in the age of digital manipulation, raising the need for effective detection methods. This paper proposes a novel approach to detect deepfakes by leveraging the physiological response of human eye blinking. Deepfakes, which involve the manipulation of facial features and expressions, often fail to replicate the natural blinking patterns of real individuals. Our method employs computer vision techniques and machine learning algorithms to analyse the eye blinking patterns of subjects from video clips and images. We extract key features related to blink frequency, duration, and synchronization with other facial movements. An extensive dataset comprising real and deepfake videos is used for training and evaluation. The results demonstrate the effectiveness of the proposed method in distinguishing between real and deepfake videos. The human eye blinking-based detection approach achieved a high accuracy rate: Deep Vision accurately detected deepfakes in seven out of eight types of videos (87.5% accuracy rate), suggesting we can overcome the limitations of integrity verification algorithms performed only on the basis of pixels.

DEEPFAKES DETECTION USING HUMAN EYE BLINKING
Mr. S.G. Patil, Umair Ansari, Prithvi Kamble, Apurva Shinde
Dept. of Computer Science & Engineering, SKN Sinhgad Institute of Technology, Lonavala, India
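The paper’s own implementation isn’t reproduced here, but blink features like the frequency and duration the abstract names are typically computed from the classic eye-aspect-ratio (EAR) signal of Soukupová and Čech (2016). Below is a minimal sketch under that assumption; it presumes per-frame eye landmarks have already been extracted upstream (e.g., with dlib or MediaPipe), and the 0.21 threshold and 2-frame minimum are illustrative defaults, not the paper’s values.

import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """EAR from six (x, y) eye landmarks ordered p1..p6:
    (|p2-p6| + |p3-p5|) / (2 * |p1-p4|). Drops sharply during a blink."""
    v1 = np.linalg.norm(eye[1] - eye[5])
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])
    return float((v1 + v2) / (2.0 * h))

def blink_features(ear_per_frame: list, fps: float,
                   thresh: float = 0.21, min_frames: int = 2) -> dict:
    """Blink frequency (per minute) and mean blink duration (seconds)
    from a per-frame EAR trace; threshold and run length are heuristics."""
    blinks, durations, run = 0, [], 0
    for ear in list(ear_per_frame) + [1.0]:  # sentinel flushes a final blink
        if ear < thresh:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
                durations.append(run / fps)
            run = 0
    minutes = len(ear_per_frame) / fps / 60.0
    return {"blinks_per_minute": blinks / minutes if minutes else 0.0,
            "mean_blink_duration_s": float(np.mean(durations)) if durations else 0.0}

Spontaneous adult blink rates are usually cited as roughly 15 to 20 per minute, so a long talking-head clip whose rate sits near zero is exactly the anomaly this line of work exploits.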



sep23

The 2024 US presidential election is more than a year away, but we’re starting to see hints of what disinformation campaigns powered by artificial intelligence might look like. Today let’s talk about a deepfake that shook up one recent election, and consider how we should think about the role of fact-checking…

https://www.platformer.news/p/why-fact-checking-fails-to-stop-deepfakes

sep23 (it doesn’t work)

A study of people's ability to detect "deepfakes" has shown humans perform fairly poorly, even when given hints on how to identify video-based deceit.

https://phys.org/news/2023-09-humans-easily-deepfakes.html


may23

Turgal said there are a few things you can watch out for, including unnatural eye movement, lack of blinking, pixelation, or a misaligned background.

https://www.lex18.com/how-to-tell-the-difference-between-a-deepfake-video-and-a-real-one


jan23

Speech deepfakes are artificial voices generated by machine learning models. Previous literature has highlighted deepfakes as one of the biggest threats to security arising from progress in AI due to their potential for misuse. However, studies investigating human detection capabilities are limited. We presented genuine and deepfake audio to n = 529 individuals and asked them to identify the deepfakes. We ran our experiments in English and Mandarin to understand if language affects detection performance and decision-making rationale. Detection capability is unreliable. Listeners only correctly spotted the deepfakes 73% of the time, and there was no difference in detectability between the two languages. Increasing listener awareness by providing examples of speech deepfakes only improves results slightly. The difficulty of detecting speech deepfakes confirms their potential for misuse and signals that defenses against this threat are needed.

Warning: Humans Cannot Reliably Detect Speech Deepfakes
Kimberly T. Mai, Sergi Bray, Toby Davies, and Lewis D. Griffin
Department of Security and Crime Science and Department of Computer Science, University College London
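To see why the authors call 73% “unreliable”, it helps to put an interval around that rate. As a back-of-envelope check (a simplification of ours, not the paper’s analysis: each of the 529 listeners is treated as a single Bernoulli observation), a Wilson score interval shows performance sits clearly above the 50% chance level yet nowhere near dependable:

import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z ** 2 / n
    centre = (p + z ** 2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2)) / denom
    return centre - margin, centre + margin

n = 529
hits = round(0.73 * n)             # roughly 386 correct judgments
lo, hi = wilson_interval(hits, n)
print(f"73% detection, 95% CI [{lo:.3f}, {hi:.3f}]")  # about [0.69, 0.77]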


oct22


“A researcher called Siwei Lyu discovered that deepfakes don't blink,” said Stamm. “He published a paper, and within a week, we started seeing deepfakes in the wild that were blinking. People had [already] figured out how to model this behaviour.”

https://www.itpro.co.uk/security/369243/real-time-deepfakes-are-becoming-a-serious-threat


mar22

Deepfake Training Only Improves Detection 10%


https://securityboulevard.com/2022/03/deepfake-training-only-improves-detection-10/


jan22

Movie director James Cameron says he hopes critical thinking will help people identify deepfake videos. So-called deepfakes use machine learning to modify video footage, usually replacing one person’s face with another, with realistic results.

https://timesnewsexpress.com/news/tech/director-james-cameron-on-the-dangers-of-deepfakes/

jan22

As the technology continues to improve and fake videos proliferate, there is uncertainty about how people will discern genuine from manipulated videos, and how this will affect trust in online content. This paper conducts a pair of experiments aimed at gauging the public's ability to detect deepfakes from ordinary videos, and the extent to which content warnings improve detection of inauthentic videos. In the first experiment, we consider capacity for detection in natural environments: that is, do people spot deepfakes when they encounter them without a content warning? In the second experiment, we present the first evaluation of how warning labels affect capacity for detection, by telling participants at least one of the videos they are to see is a deepfake and observing the proportion of respondents who correctly identify the altered content. Our results show that, without a warning, individuals are no more likely to notice anything out of the ordinary when exposed to a deepfake video of neutral content (32.9%), compared to a control group who viewed only authentic videos (34.1%). Second, warning labels improve capacity for detection from 10.7% to 21.6%; while this is a substantial increase, the overwhelming majority of respondents who receive the warning are still unable to tell a deepfake from an unaltered video. A likely implication of this is that individuals, lacking capacity to manually detect deepfakes, will need to rely on the policies set by governments and technology companies around content moderation.

Do Content Warnings Help People Spot a Deepfake? Evidence from Two Experiments (2022)
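The abstract’s two headline contrasts (32.9% vs 34.1% unprompted noticing, and 10.7% vs 21.6% detection without vs with a warning) can be made concrete with a standard two-proportion z-test. The per-group sample sizes are not given in the abstract, so the n = 500 below is a placeholder assumption purely for illustration:

import math

def two_prop_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """z statistic for H0: two binomial proportions are equal."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

n = 500  # assumed per-group size; not reported in the abstract
print(f"deepfake vs control noticing: z = {two_prop_z(0.329, n, 0.341, n):+.2f}")  # about -0.4: indistinguishable
print(f"warning vs no warning:        z = {two_prop_z(0.216, n, 0.107, n):+.2f}")  # about +4.7: real but modest gain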


jan22

“The fascinating thing about deepfake manipulation compared to other forms of manipulation is that it involves faces. And humans — even babies — tend to be really good at identifying faces,” he told me over the phone. So he and other researchers put together a series of media snippets for humans on the internet to judge. Try it out for yourself! The research found that people are better than AI — but still not very good — at telling deepfakes from genuine videos. When 882 participants were shown side-by-side videos (one real, one deepfake), 82 percent outperformed the winning AI model. Way to go humans! Interestingly, the research participants’ scores didn’t improve with more practice or more time spent. In a more challenging task, participants viewed a single video and guessed whether it was a deepfake or not, moving a slider to report their response ranging from “100% confidence this is NOT a DeepFake” to “100% confidence this is a DeepFake.” Here, the results were different. Only some people, between 13 percent and 37 percent, performed better than the leading AI model.

https://www.inputmag.com/tech/are-humans-better-than-ai-at-detecting-deepfakes

jan22

People can't distinguish deepfakes from real videos, even if they are warned about their existence in advance, The Independent has reported, citing a study conducted by the University of Oxford and Brown University. One group of participants watched five real videos, and another watched four real videos with one deepfake, after which viewers were asked to identify which one was not real.

https://sputniknews.com/20220115/people-cant-distinguish-deepfake-from-real-videos-even-if-warned-in-advance-study-says-1092275613.html

jan22

The recent emergence of deepfake videos raises theoretical and practical questions. Are humans or the leading machine learning model more capable of detecting algorithmic visual manipulations of videos? How should content moderation systems be designed to detect and flag video-based misinformation? We present data showing that ordinary humans perform in the range of the leading machine learning model on a large set of minimal context videos. While we find that a system integrating human and model predictions is more accurate than either humans or the model alone, we show inaccurate model predictions often lead humans to incorrectly update their responses. Finally, we demonstrate that specialized face processing and the ability to consider context may specially equip humans for deepfake detection.

https://www.pnas.org/content/119/1/e2110013119
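The paper’s actual integration scheme isn’t described in this excerpt, so here is only a sketch of the simplest version, a convex combination of human and model probabilities. It also makes the reported failure mode visible: a confidently wrong model can pull a hesitant but correct human across the decision boundary. All numbers are invented for illustration.

def combined_fake_probability(p_human: float, p_model: float,
                              model_weight: float = 0.5) -> float:
    """Convex combination of human and model P(video is fake)."""
    return (1 - model_weight) * p_human + model_weight * p_model

# A hesitant but correct human (0.60) plus a confidently wrong model (0.10):
p = combined_fake_probability(p_human=0.60, p_model=0.10)
print(f"combined P(fake) = {p:.2f}, labeled {'fake' if p >= 0.5 else 'real'}")
# combined P(fake) = 0.35, labeled real (the human alone was right)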


dec20

Ultimately, however, our crisis of information will not be for technology to solve alone. Any technological solutions will be useless unless we humans are able to adapt to this new environment where fake media is commonplace. This will require “inoculation” through digital-literacy and awareness training, but it will also require proactive countermeasures, including cogent policy responses from the government, military and civil groups. It is only with the full mobilisation of society that we will be able to build an overarching resilience to withstand the risks presented by our compromised information ecosystem.

Nina Schick is a broadcaster and author of Deep Fakes: The Coming Infocalypse. LINK


Finally, all the experts agree that the public needs greater media literacy. “There is a difference between proving that a real thing is real and actually having the general public believe that the real thing is real,” Ovadya says. He says people need to be aware that falsifying content and casting doubt on the veracity of content are both tactics that can be used to intentionally sow confusion.

https://www.technologyreview.com/2019/10/10/132667/the-biggest-threat-of-deepfakes-isnt-the-deepfakes-themselves/
