Tuesday, January 4, 2022

Why they work

ago23

Humans are able to detect artificially generated speech only 73% of the time, a study has found, with similar levels of accuracy among English and Mandarin speakers.

Researchers at University College London used a text-to-speech algorithm trained on two publicly available datasets, one in English and the other in Mandarin, to generate 50 deepfake speech samples in each language.

Deepfakes, a form of generative artificial intelligence, are synthetic media created to resemble a real person's voice or appearance.

The sound samples were played to 529 participants to see whether they could distinguish real speech from fake. The participants were able to identify fake speech only 73% of the time, a figure that improved slightly after they received training to recognise aspects of deepfake speech.

https://www.theguardian.com/technology/2023/aug/02/humans-can-detect-deepfake-speech-only-73-of-the-time-study-finds

jul23

DO THEY WORK?

A team of researchers recently studied deepfake videos and found that deepfake clips of movie remakes that do not actually exist led participants to falsely remember the non-existent films.

However, simple text descriptions of the fake movies also prompted similar false memory rates, the study states.

https://interestingengineering.com/culture/deepfake-movies-distorting-memory-text-descriptions

dez22

Artificial Intelligence (AI)-powered Deepfakes are responsible for new challenges in consumers' visual experience and bring a wide range of negative consequences (i.e., non-consensual intimate imagery, political dis/misinformation, financial fraud, and cybersecurity issues) for individuals, societies, and organizations. Research has suggested legislation, corporate policies, anti-Deepfake technology, education, and training to combat Deepfakes, including the use of synthetic media to raise awareness so that people can become more critical when evaluating such content in the future. To educate and raise awareness among college students, this pilot survey study presented both synthetic and real images to undergraduate students (N=19) to understand how well this literate population can detect Deepfake media with the naked eye. The results showed that human cognition and perception are insufficient for detecting synthetic media with untrained eyes, and that even an educated population is vulnerable to this technology. While Deepfakes are becoming more sophisticated and imperceptible, this kind of survey study can be beneficial in raising awareness of the technology's societal impact and may also improve participants' ability to detect Deepfakes in future encounters.

https://ieeexplore.ieee.org/abstract/document/9965697


nov22

Remember Will Smith's brilliant performance in that remake of The Matrix? Well, it turns out almost half the participants in a new study on 'deep fakes' believed fake remakes featuring different actors in old roles were real, highlighting the risk of 'false memory' created by online technology.

The study, carried out by researchers at the School of Applied Psychology at University College Cork, presented 436 people with deepfake videos of fictitious movie remakes.

Deepfakes are manipulated media created using artificial intelligence technologies, where an artificial face has been superimposed onto another person’s face, resulting in a highly convincing recording of someone doing or saying something they never did.

https://www.irishexaminer.com/news/arid-41006312.html


Ag22

Deepfake detection loses accuracy somewhere between your brain and your mouth

A team of neuroscientists researching deepfakes at the University of Sydney has found that people seem to be able to identify them at a subconscious level more frequently than at a conscious one.

A new paper titled ‘Are you for real? Decoding realistic AI-generated faces from neural activity’ describes how brain activity reflects whether a presented image is real or fake more accurately than simply asking the person.

The researchers used electroencephalography (EEG) signals to measure the neurological responses of subjects and found a consistent neural response associated with faces at approximately 170 milliseconds. Only when the face was real was the response sustained beyond 400 milliseconds.

https://www.biometricupdate.com/202208/deepfake-detection-loses-accuracy-somewhere-between-your-brain-and-your-mouth
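
The kind of analysis described above can be illustrated with a minimal sketch, not the authors' actual pipeline: average the EEG amplitude in an early face-related window (around 170 ms) and a later window (beyond 400 ms), then compare trials showing real faces with trials showing AI-generated faces. The array shape, sampling rate, baseline length, window bounds, and the use of SciPy's independent-samples t-test are all illustrative assumptions, and random noise stands in for real recordings.

# Minimal sketch (not the study's pipeline): compare mean EEG amplitude in an
# early face-related window (~170 ms) and a later window (>400 ms) between
# real-face and AI-generated-face trials. Shapes and parameters are assumptions.
import numpy as np
from scipy.stats import ttest_ind

def window_mean(epochs, sfreq, tmin, tmax, baseline=0.2):
    """Mean amplitude per trial in a post-stimulus time window.

    epochs    : array of shape (n_trials, n_channels, n_samples)
    sfreq     : sampling rate in Hz
    tmin/tmax : window bounds in seconds relative to stimulus onset
    baseline  : seconds of pre-stimulus baseline at the start of each epoch
    """
    i0 = int((baseline + tmin) * sfreq)
    i1 = int((baseline + tmax) * sfreq)
    return epochs[:, :, i0:i1].mean(axis=(1, 2))

# Random noise standing in for recorded epochs: 100 trials per condition,
# 64 channels, 1-second epochs at 250 Hz including a 0.2 s baseline.
rng = np.random.default_rng(0)
real_epochs = rng.normal(size=(100, 64, 250))
fake_epochs = rng.normal(size=(100, 64, 250))

for label, (tmin, tmax) in {"~170 ms": (0.15, 0.20), ">400 ms": (0.40, 0.60)}.items():
    real = window_mean(real_epochs, 250, tmin, tmax)
    fake = window_mean(fake_epochs, 250, tmin, tmax)
    t, p = ttest_ind(real, fake)
    print(f"{label} window: t = {t:.2f}, p = {p:.3f}")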

fev22

A new study published in the Proceedings of the National Academy of Sciences USA provides a measure of how far the technology has progressed. The results suggest that real humans can easily fall for machine-generated faces, and even interpret them as more trustworthy than the genuine article. “We found that not only are synthetic faces highly realistic, they are deemed more trustworthy than real faces,” says study co-author Hany Farid, a professor at the University of California, Berkeley. The result raises concerns that “these faces could be highly effective when used for nefarious purposes.” “We have indeed entered the world of dangerous deepfakes,” says Piotr Didyk, an associate professor at the University of Italian Switzerland in Lugano, who was not involved in the paper. The tools used to generate the study’s still images are already generally accessible. And although creating equally sophisticated video is more challenging, tools for it will probably soon be within general reach, Didyk contends.

(Humans Find AI-Generated Faces More Trustworthy Than the Real Thing. Viewers struggle to distinguish images of sophisticated machine-generated faces from actual humans)

https://www.scientificamerican.com/article/humans-find-ai-generated-faces-more-trustworthy-than-the-real-thing/


jan22

As the technology continues to improve and fake videos proliferate, there is uncertainty about how people will discern genuine from manipulated videos, and how this will affect trust in online content. This paper conducts a pair of experiments aimed at gauging the public's ability to detect deepfakes from ordinary videos, and the extent to which content warnings improve detection of inauthentic videos. In the first experiment, we consider capacity for detection in natural environments: that is, do people spot deepfakes when they encounter them without a content warning? In the second experiment, we present the first evaluation of how warning labels affect capacity for detection, by telling participants at least one of the videos they are to see is a deepfake and observing the proportion of respondents who correctly identify the altered content. Our results show that, without a warning, individuals are no more likely to notice anything out of the ordinary when exposed to a deepfake video of neutral content (32.9%), compared to a control group who viewed only authentic videos (34.1%). Second, warning labels improve capacity for detection from 10.7% to 21.6%; while this is a substantial increase, the overwhelming majority of respondents who receive the warning are still unable to tell a deepfake from an unaltered video. A likely implication of this is that individuals, lacking capacity to manually detect deepfakes, will need to rely on the policies set by governments and technology companies around content moderation.

Do Content Warnings Help People Spot a Deepfake? Evidence from Two Experiments (2022)


jan22

A research project is to examine whether deepfakes – AI-manipulated images, videos or audio – make courts less likely to trust evidence of human-rights violations gathered on mobile phones. A Swansea legal expert has been awarded €1.5 million to examine how public perceptions of deepfakes affect trust in user-generated evidence.

https://www.lawsociety.ie/gazette/top-stories/2021/12-december/study-to-probe-how-deepfakes-undermine-video-evidence-credibility

jan22

Individual Deep Fake Recognition Skills are Affected by Viewers’ Political Orientation, Agreement with Content and Device Used. 

Previous research has suggested a close relationship between political attitudes and top-down perceptual and subsequent cognitive processing styles. In this study, we aimed to investigate the impact of political attitudes, and of agreement with the political message content, on an individual's deep fake recognition skills. A total of 163 adults (72 females, 44.2%) judged a series of video clips of politicians' statements from across the political spectrum, rating their authenticity and their agreement with the message conveyed. Half of the presented videos were fabricated via lip-sync technology. In addition to agreement with each particular statement, more global political attitudes towards social and economic topics were assessed via the Social and Economic Conservatism Scale (SECS). Data analysis revealed robust negative associations between participants' general, and in particular social, conservatism and their ability to recognize fabricated videos. This effect was pronounced where there was specific agreement with the message content. Deep fakes watched on mobile phones and tablets were considerably less likely to be recognized as such than those watched on desktop computers.

(in the DF archive)
