Friday, March 20, 2020

Fighting deepfakes in general (because they are hard to combat)

mar24

In the long term, deepfakes may eventually become indistinguishable from real imagery. When that day comes, we will no longer be able to rely on detection as a strategy. So, what do we have left that AI cannot deepfake? Here are two things: physical reality itself, and strong cryptography, which is about strongly and verifiably connecting data to a digital identity.

https://techxplore.com/news/2024-04-deepfake-wrong.html
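
To make the cryptography half of that concrete, here is a minimal sketch of binding media to a digital identity with Ed25519 signatures via Python's cryptography package; the key handling and the placeholder media bytes are illustrative assumptions, not a prescribed scheme.

# Minimal sketch: binding a media file to a digital identity with Ed25519.
# Assumes the "cryptography" package; the media bytes are a placeholder.
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The creator (e.g. a camera or publisher) holds a private key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

# Sign a digest of the media bytes at capture or publication time.
media = b"raw bytes of the captured image"  # placeholder for real file contents
signature = private_key.sign(hashlib.sha256(media).digest())

# Anyone holding the public key can later verify that the file is unmodified
# and was vouched for by that identity.
try:
    public_key.verify(signature, hashlib.sha256(media).digest())
    print("authentic: signed by the claimed identity")
except InvalidSignature:
    print("tampered, or not from the claimed identity")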


jan24

Two members of the U.S. House of Representatives have introduced a bill intended to restrict unauthorized fraudulent digital replicas of people.

The bulk of the motivation behind the legislation, based on the wording of the bill, is the protection of actors, public figures, and the girls and women defamed through fraudulent porn made with their face templates.

Curiously, the very real threat of deepfakes being used to defraud just about everyone else in the nation is not mentioned. Those risks are growing and could result in incalculable financial damages as organizations rely on voice and face biometrics for ID verification.

The representatives, María Elvira Salazar (R-Fla.) and Madeleine Dean (D-Penn.), do not mention the global singer/songwriter Taylor Swift in their press release, but it cannot have escaped them that she has been victimized, too.

https://www.biometricupdate.com/202401/us-lawmakers-attack-categories-of-deepfake-but-miss-everyday-fraud




jan24

Popular search engines like Google and Bing are making it easy to surface nonconsensual deepfake pornography by placing it at the top of search results, NBC News reported Thursday.

These controversial deepfakes superimpose faces of real women, often celebrities, onto the bodies of adult entertainers to make them appear to be engaging in real sex. Thanks in part to advances in generative AI, there is now a burgeoning black market for deepfake porn that could be discovered through a Google search, NBC News previously reported.

NBC News uncovered the problem by turning off safe search, then combining the names of 36 female celebrities with obvious search terms like "deepfakes," "deepfake porn," and "fake nudes." Bing generated links to deepfake videos in top results 35 times, while Google did so 34 times. Bing also surfaced "fake nude photos of former teen Disney Channel female actors" using images in which the actors appear to be underage.

A Google spokesperson told NBC that the tech giant understands "how distressing this content can be for people affected by it" and is "actively working to bring more protections to Search."
https://arstechnica.com/tech-policy/2024/01/report-deepfake-porn-consistently-found-atop-google-bing-search-results/



nov23

There is a small academic field, called media forensics, that seeks to combat these fakes. But it is “fighting a losing battle,” a leading researcher, Hany Farid, has warned. Last year, Farid published a paper with the psychologist Sophie J. Nightingale showing that an artificial neural network is able to concoct faces that neither humans nor computers can identify as simulated. Ominously, people found those synthetic faces to be trustworthy; in fact, we trust the “average” faces that A.I. generates more than the irregular ones that nature does.

https://www.newyorker.com/magazine/2023/11/20/a-history-of-fake-things-on-the-internet-walter-j-scheirer-book-review


nov23

Developers of artificial intelligence platforms could soon release technology that allows users to make images and videos that would be nearly indistinguishable from reality.

Companies such as OpenAI, the developer behind the popular ChatGPT platform, and other AI companies are nearing the release of tools that will allow the creation of widespread and realistic fake videos as early as next year, according to a report from Axios.

https://www.foxnews.com/us/deepfakes-indistinguishable-reality-2024-report-warns


sep23 (it doesn't work)

A study of people's ability to detect "deepfakes" has shown humans perform fairly poorly, even when given hints on how to identify video-based deceit.

https://phys.org/news/2023-09-humans-easily-deepfakes.html

aug23

Humans are able to detect artificially generated speech only 73% of the time, a study has found, with the same levels of accuracy found in English and Mandarin speakers.

Researchers at University College London used a text-to-speech algorithm trained on two publicly available datasets, one in English and the other in Mandarin, to generate 50 deepfake speech samples in each language.

Deepfakes, a form of generative artificial intelligence, are synthetic media created to resemble a real person's voice or appearance.

The sound samples were played for 529 participants to see whether they could distinguish the real samples from the fake speech. The participants were able to identify the fake speech only 73% of the time. This number improved slightly after participants received training to recognise aspects of deepfake speech.

https://www.theguardian.com/technology/2023/aug/02/humans-can-detect-deepfake-speech-only-73-of-the-time-study-finds

https://www.popsci.com/technology/audio-deepfake-study/

mar23

Recent rapid advancements in deepfake technology have allowed the creation of highly realistic fake media, such as video, image, and audio. These materials pose significant challenges to human authentication, such as impersonation, misinformation, or even a threat to national security. To keep pace with these rapid advancements, several deepfake detection algorithms have been proposed, leading to an ongoing arms race between deepfake creators and deepfake detectors. Nevertheless, these detectors are often unreliable and frequently fail to detect deepfakes. This study highlights the challenges they face in detecting deepfakes, including (1) the pre-processing pipeline of artifacts and (2) the fact that generators of new, unseen deepfake samples have not been considered when building the defense models. Our work sheds light on the need for further research and development in this field to create more robust and reliable detectors.

Why Do Deepfake Detectors Fail?
Binh Le (Sungkyunkwan University, South Korea; bmle@g.skku.edu), Shahroz Tariq (CSIRO's Data61, Australia; shahroz.tariq@data61.csiro.au), Alsharif


jan23

Two years ago, Microsoft's chief scientific officer Eric Horvitz, the co-creator of the spam email filter, began trying to solve this problem. "Within five or ten years, if we don't have this technology, most of what people will be seeing, or quite a lot of it, will be synthetic. We won't be able to tell the difference."

"Is there a way out?" Horvitz wondered.

Eventually, Microsoft and Adobe joined forces and designed a new feature called Content Credentials, which they hope will someday appear on every authentic photo and video.

Here's how it works:

Imagine you're scrolling through your social feeds. Someone sends you a picture of snow-covered pyramids, with the claim that scientists found them in Antarctica – far from Egypt! A Content Credentials icon, published with the photo, will reveal its history when clicked on.

"You can see who took it, when they took it, and where they took it, and the edits that were made," said Rao. With no verification icon, the user could conclude, "I think this person may be trying to fool me!"

https://www.cbsnews.com/news/creating-a-lie-detector-for-deepfakes-artificial-intelligence/
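
As a toy illustration of the mechanism (not the real C2PA/Content Credentials format, whose manifests are signed and far richer), the history can be modeled as a record bound to the exact pixels by a hash, so a click-to-verify viewer can detect any mismatch:

# Toy illustration of the Content Credentials idea: a manifest records
# who/when/where and the edit list, and is bound to the exact media bytes
# by a hash. The real system also cryptographically signs the manifest.
import hashlib

def make_manifest(media: bytes, creator: str, captured_at: str,
                  location: str, edits: list[str]) -> dict:
    return {
        "creator": creator,          # "who took it"
        "captured_at": captured_at,  # "when they took it"
        "location": location,        # "where they took it"
        "edits": edits,              # "the edits that were made"
        "media_sha256": hashlib.sha256(media).hexdigest(),
    }

def verify(media: bytes, manifest: dict) -> bool:
    # A viewer recomputes the hash of what it is actually displaying.
    return manifest["media_sha256"] == hashlib.sha256(media).hexdigest()

photo = b"pixels of the snow-covered pyramids"  # placeholder bytes
m = make_manifest(photo, "Jane Photographer", "2023-01-10T12:00Z",
                  "Giza, Egypt", ["crop", "exposure +0.3"])
print(verify(photo, m))             # True: the history matches the pixels
print(verify(photo + b"x", m))      # False: the content was altered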


jan23

Speech deepfakes are artificial voices generated by machine learning models. Previous literature has highlighted deepfakes as one of the biggest threats to security arising from progress in AI due to their potential for misuse. However, studies investigating human detection capabilities are limited. We presented genuine and deepfake audio to n = 529 individuals and asked them to identify the deepfakes. We ran our experiments in English and Mandarin to understand if language affects detection performance and decision-making rationale. Detection capability is unreliable. Listeners only correctly spotted the deepfakes 73% of the time, and there was no difference in detectability between the two languages. Increasing listener awareness by providing examples of speech deepfakes only improves results slightly. The difficulty of detecting speech deepfakes confirms their potential for misuse and signals that defenses against this threat are needed.

Warning: Humans Cannot Reliably Detect Speech Deepfakes
Kimberly T. Mai, Sergi Bray, Toby Davies, and Lewis D. Griffin (Department of Security and Crime Science and Department of Computer Science, University College London)




nov22

Remember Will Smith's brilliant performance in that remake of The Matrix? Well, it turns out almost half the participants in a new study on 'deep fakes' believed fake remakes featuring different actors in old roles were real, highlighting the risk of 'false memory' created by online technology.

The study, carried out by researchers at the School of Applied Psychology in University College Cork, presented 436 people with deepfake video of fictitious movie remakes.

Deepfakes are manipulated media created using artificial intelligence technologies, where an artificial face has been superimposed onto another person’s face, resulting in a highly convincing recording of someone doing or saying something they never did.

https://www.irishexaminer.com/news/arid-41006312.html


aug22

Deepfake detection loses accuracy somewhere between your brain and your mouth

A team of neuroscientists researching deepfakes at the University of Sydney has found that people seem to be able to identify them at a subconscious level more frequently than at a conscious one.

A new paper titled ‘Are you for real? Decoding realistic AI-generated faces from neural activity’ describes how brain activity reflects whether a presented image is real or fake more accurately than simply asking the person.

The researchers used electroencephalography (EEG) signals to measure the neurological responses of subjects, and found a consistent face-related neural response at approximately 170 milliseconds. Only when the face was real was the response sustained beyond 400 ms.

https://www.biometricupdate.com/202208/deepfake-detection-loses-accuracy-somewhere-between-your-brain-and-your-mouth
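
A sketch of the kind of contrast the article describes, run on synthetic data: mean amplitude is compared in an early window around 170 ms and a late window beyond 400 ms. The sampling rate, shapes, and effect sizes below are invented for illustration and are not the study's data.

# Synthetic illustration of the reported EEG effect: a face-evoked bump
# near 170 ms for all faces, a sustained response past 400 ms only for
# real ones. All numbers here are made up for illustration.
import numpy as np

fs = 1000                        # 1 kHz sampling; epoch spans 0..800 ms
t = np.arange(0, 0.8, 1 / fs)
rng = np.random.default_rng(0)

def simulate_epochs(sustained: bool, n_trials: int = 100) -> np.ndarray:
    """Simulate single-trial epochs, shape (n_trials, n_samples)."""
    x = rng.normal(0, 1, (n_trials, t.size))
    x += 5 * np.exp(-((t - 0.17) ** 2) / (2 * 0.02 ** 2))  # N170-like bump
    if sustained:
        x += 3 * (t > 0.4)                                  # late sustained response
    return x

real, fake = simulate_epochs(True), simulate_epochs(False)

def mean_amp(epochs: np.ndarray, lo: float, hi: float) -> float:
    win = (t >= lo) & (t <= hi)
    return float(epochs[:, win].mean())

print("170 ms window:  real %.2f vs fake %.2f"
      % (mean_amp(real, 0.15, 0.20), mean_amp(fake, 0.15, 0.20)))  # similar
print(">400 ms window: real %.2f vs fake %.2f"
      % (mean_amp(real, 0.40, 0.80), mean_amp(fake, 0.40, 0.80)))  # differs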

jul22

While observers can't consciously recognise the difference between real and fake faces, their brains can, University of Sydney research shows.

When it comes to deepfakes, what if fact could be distinguished from fraud? University of Sydney neuroscientists have discovered a new way: they have found that people’s brains can detect AI-generated fake faces, even though people could not report which faces were real and which were fake.

Deepfake videos, images, audio, or text appear to be authentic, but in fact are computer generated clones designed to mislead you and sway public opinion. They are the new foot soldiers in the spread of disinformation and are rife – they appear in political, cybersecurity, counterfeiting, and border control realms.

https://www.sydney.edu.au/news-opinion/news/2022/07/11/your-brain-is-better-at-busting-deepfakes-than-you-.html


jun22

Deepfaked Online Content is Highly Effective in Manipulating Attitudes & Intentions

Disinformation has spread rapidly through social media and news sites, biasing our (moral) judgements of individuals and groups. "Deepfakes", a new type of AI-generated media, represent a powerful tool for spreading disinformation online. Although they may appear genuine, Deepfakes are hyper-realistic fabrications that enable one to digitally control another person's appearance and actions. Across five studies (N = 2033) we examined the psychological impact of Deepfakes on viewers. Participants were exposed to either genuine or Deepfaked online content, after which their (implicit) attitudes and sharing intentions were measured. We found that Deepfakes quickly and effectively allow their creators to manipulate public perceptions of a target in both positive and negative directions. Many are unaware that Deepfaking is possible, find it difficult to detect when they are being exposed to it, and neither awareness nor detection serves to protect them from its influence.

(study)


jun22

Deepfakes: How easy are they to make and detect?
ISACA Redstone Arsenal, May 16, 2022
Catherine Bernaciak, PhD; Shannon Gallagher, PhD; Dominic Ross
(in the supporting materials)

may22

We present a real-world use case for spoofing voice authentication in a customer care call center. Based on this scenario, we evaluate the feasibility of attacking such a system and create an attacker profile. For this purpose, we examine three available speech synthesis tools and discuss their usability. We use these tools and acquired knowledge to generate a dataset including deepfake speech and assess the resilience of voice biometrics systems against deepfakes. 

https://dl.acm.org/doi/abs/10.1145/3477314.3507013


apr22

tips for fighting deepfakes

https://www.latestly.com/socially/social-viral/fact-check/7-you-can-identify-deepfake-videos-with-the-same-process-take-a-screenshot-from-the-latest-tweet-by-snopes-com-3545724.html







mar22

According to experts interviewed for the study, the amount of manipulated audiovisual content will grow exponentially as sophisticated deepfake technology is likely to become accessible to the general public within the next two to three years. Although the technology could have positive applications for democracy, e.g., in creating satire, the report warns that deepfakes have the ability to cause major societal harms. Media and journalists could become hesitant to use video evidence if they have to check all content for authenticity; legal proceedings may require longer investigations in order to rule out fabricated evidence; elections can be disturbed by fake clips discrediting political opponents; and the growth of deepfake pornography could negatively impact the position of women in society.

https://merlin.obs.coe.int/article/9415


mar22

Scientists at the MIT Media Lab showed almost 6,000 people 16 authentic political speeches and 16 that were doctored by AI. The soundbites were presented in permutations of text, video, and audio, such as video with subtitles or only text. The participants were told that half of the content was fake, and asked which snippets they believed were fabricated. When shown text alone, the respondents were only barely better at identifying falsehoods (57% accuracy) than random guessing. They were a bit more accurate when given video with subtitles (66%), and far more successful when shown both video and audio (82%). The study authors said the participants relied more on how something was said than the speech content itself:

The finding that fabricated videos of political speeches are easier to discern than fabricated text transcripts highlights the need to re-introduce and explain the oft-forgotten second half of the ‘seeing is believing’ adage.

https://thenextweb.com/news/deepfakes-study-finds-doctored-text-is-more-manipulative-than-phony-video-mit-media-lab

However, the volunteers were considerably less accurate when analyzing pure text: the hit rate stood at 57%, showing that, depending on the circumstances, lying in writing remains far more effective than creating fake videos of public figures. And you do not even need an AI to write false information. https://tecnoblog.net/meiobit/457628/textos-falsos-mais-perigosos-deepfake-ferramentas-ia/


jan22

As the technology continues to improve and fake videos proliferate, there is uncertainty about how people will discern genuine from manipulated videos, and how this will affect trust in online content. This paper conducts a pair of experiments aimed at gauging the public's ability to detect deepfakes from ordinary videos, and the extent to which content warnings improve detection of inauthentic videos. In the first experiment, we consider capacity for detection in natural environments: that is, do people spot deepfakes when they encounter them without a content warning? In the second experiment, we present the first evaluation of how warning labels affect capacity for detection, by telling participants at least one of the videos they are to see is a deepfake and observing the proportion of respondents who correctly identify the altered content. Our results show that, without a warning, individuals are no more likely to notice anything out of the ordinary when exposed to a deepfake video of neutral content (32.9%), compared to a control group who viewed only authentic videos (34.1%). Second, warning labels improve capacity for detection from 10.7% to 21.6%; while this is a substantial increase, the overwhelming majority of respondents who receive the warning are still unable to tell a deepfake from an unaltered video. A likely implication of this is that individuals, lacking capacity to manually detect deepfakes, will need to rely on the policies set by governments and technology companies around content moderation.

Do Content Warnings Help People Spot a Deepfake? Evidence from Two Experiments (2022)


jan22
In the paper, we survey state-of-the-art deepfake generation methods, detection methods, and existing datasets. Current deepfake generation methods can be classified into face swapping and facial reenactment. Deepfake detection methods are mainly based on features and machine learning methods. There are still challenges for deepfake detection, such as progress in deepfake generation and the lack of high-quality datasets and benchmarks. Future trends in deepfake detection point toward efficient, robust, and systematic detection methods and high-quality datasets.
https://link.springer.com/article/10.1007/s11042-021-11733-y

jan22

People can't distinguish deepfakes from real videos, even if they are warned about their existence in advance, The Independent has reported, citing a study conducted by the University of Oxford and Brown University. One group of participants watched five real videos, and another watched four real videos with one deepfake, after which viewers were asked to identify which one was not real. https://sputniknews.com/20220115/people-cant-distinguish-deepfake-from-real-videos-even-if-warned-in-advance-study-says-1092275613.html


dec21

A new tool is able to distinguish a deepfake from a real video through analysis of the corneas. It has always been said that the eyes are the mirror of the soul, and the proverb holds up here: thanks to the distinctive glint of our eyes, an AI can differentiate a real video from a deepfake, the kind of program that puts our face on someone else's body. Many have used it for entertainment, or to see how they would look in a superhero movie; others have used it to put celebrities into porn, and the worst use it has been given is to promote disinformation campaigns. So that we are not fooled, the University at Buffalo has created an AI that can tell whether we are looking at a real video or not. It does all this through eye analysis: the tool, which achieved a 94% success rate in its tests, analyzes the corneas. https://cvbj.biz/the-glitter-of-the-eyes-is-the-trick-to-detect-deepfakes-with-this-ai-technology.html
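
A rough sketch of the cue such a tool can exploit: in a genuine photo, both corneas reflect the same environment, so their bright specular patterns should roughly agree, while generated faces often break this consistency. The synthetic image, eye boxes, and thresholds below are placeholder assumptions, not the published Buffalo method.

# Rough sketch of the corneal-highlight cue with OpenCV: compare the
# bright specular spots of the two eyes. Eye locations, thresholds, and
# the synthetic input image are placeholders, not the published method.
import cv2
import numpy as np

def highlight_mask(eye_bgr: np.ndarray) -> np.ndarray:
    """Binary mask of bright specular spots inside an eye crop."""
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 220, 1, cv2.THRESH_BINARY)
    return mask

def highlight_similarity(left_eye: np.ndarray, right_eye: np.ndarray) -> float:
    """IoU of the two eyes' highlight masks on a common grid."""
    a = cv2.resize(highlight_mask(left_eye), (32, 32), interpolation=cv2.INTER_NEAREST)
    b = cv2.resize(highlight_mask(right_eye), (32, 32), interpolation=cv2.INTER_NEAREST)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter / union) if union else 0.0

img = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)  # stand-in face photo
left, right = img[110:150, 80:130], img[110:150, 170:220]   # placeholder eye boxes
score = highlight_similarity(left, right)
print("suspicious" if score < 0.5 else "consistent", round(score, 2))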


dec21
A VIDEO THAT EXPLAINS THE FLAWS
https://www.youtube.com/watch?v=mHE1yVITBBs


nov21


One promising approach involves tracking a video’s provenance, “a record of everything that happened from the point that the light hit the camera to when it shows up on your display,” explained James Tompkin, a visual computing researcher at Brown.
But problems persist. “You need to secure all the parts along the chain to maintain provenance, and you also need buy-in,” Tompkin said. “We’re already in a situation where this isn’t the standard, or even required, on any media distribution system.”
And beyond simply ignoring provenance standards, wily adversaries could manipulate the provenance systems, which are themselves vulnerable to cyberattacks. “If you can break the security, you can fake the provenance,” Tompkin said. “And there’s never been a security system in the world that’s never been broken into at some point.”
Given these issues, a single silver bullet for deepfakes appears unlikely. Instead, each strategy at our disposal must be just one of a “toolbelt of techniques we can apply,” Tompkin said. However, as the events of last October highlighted, Americans must be cautious about who they depend on for determining between deepfake and reality. Governments have vested interests in defending their dominion, and will not hesitate to engage in manipulation to advance that goal: Our political authorities cannot and should not be the sole arbiter of truth. “I feel like it can’t be government oversight,” Littman said, “but it can be that people sign on to expectations or systems that make it easier to track the provenance of the information.”
https://brownpoliticalreview.org/2021/11/hunters-laptop-deepfakes-and-the-arbitration-of-truth/
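
A minimal sketch of the provenance record Tompkin describes: each step from capture to display appends an entry containing the hash of the previous entry and of the current media bytes, so rewriting any link invalidates everything downstream. The record format here is an assumption for illustration only.

# Provenance as a tamper-evident chain: capture, edits, and publication
# each append a record that hashes the previous record and the current
# media. Breaking any link breaks verification of the whole chain.
import hashlib
import json

def record(prev_hash: str, action: str, media: bytes) -> dict:
    body = {"prev": prev_hash, "action": action,
            "media_sha256": hashlib.sha256(media).hexdigest()}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return body

def chain_is_valid(chain: list[dict]) -> bool:
    prev = "genesis"
    for r in chain:
        body = {k: r[k] for k in ("prev", "action", "media_sha256")}
        expected = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if r["prev"] != prev or r["hash"] != expected:
            return False
        prev = r["hash"]
    return True

raw = b"sensor data"                            # light hits the camera...
chain = [record("genesis", "capture", raw)]
chain.append(record(chain[-1]["hash"], "crop", raw + b" cropped"))
print(chain_is_valid(chain))                    # True
chain[0]["action"] = "capture (forged)"         # an adversary rewrites history
print(chain_is_valid(chain))                    # False: the chain no longer verifies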

=====================CLASS 25112021------------------------------

IS REPORTING IT WORSE??

To make things worse, as discussed in Whitney Phillips’ “The Oxygen of Amplification,” merely reporting on false claims and fake news — with the intention of proving them baseless — amplifies the original message and helps their distribution to the masses. And now we have technology that allows us to create deepfakes relatively easily, without any need for writing code. A low bar to use the tech, methods to distribute, a method of monetizing — the cybercrime-cycle pattern reemerges. https://urgentcomm.com/2021/11/18/how-to-navigate-the-mitigation-of-deepfakes/

nov21
A false accusation on Twitter will travel around the world a thousand times in the time it takes the truth to travel a mile. And even if the belated truth does manage to make it around the world, it will never reach everyone who last knew you for the "you" in the deepfake. Video doesn't lie, right? So why follow updates about the event any more?
Yet while deepfake technology can and is being used for horrible things, the technology itself is not evil. It will be transformative for entertainment, marketing and even health industries; already deepfake tech enables some who have lost their voice to speak again, sounding precisely like themselves.
People are the problem.
How do you solve that? There are limited steps you can take, but primarily, stop posting so much of your life on social media. Any video of you (or your spouse, parent, or child) is fodder for deepfake AI. If you can’t stop posting, at least limit the audience for the videos you post. https://www.telegraph.co.uk/news/2021/10/30/deepfaked-committing-crime-should-worried/


TUTORIAL (on how they are made and the risks they pose)
https://www.youtube.com/watch?v=ccI0SCA04X4

oct21

When photojournalist Jonas Bendiksen released a book about the fake news industry, he got messages from people thanking him for covering such an important issue. But he says no-one noticed one thing about the book: everything in it was also fake.  "The whole intention was for people to find this out … the problem is that didn't happen," Bendiksen told The Current's Matt Galloway. https://www.cbc.ca/radio/thecurrent/the-current-for-oct-20-2021-1.6217855/this-photojournalist-faked-an-entire-book-to-highlight-how-hard-it-is-to-spot-misinformation-1.6218251

oct21

Dispelling Misconceptions and Characterizing the Failings of Deepfake Detection

https://www.computer.org/csdl/magazine/sp/5555/01/09576757/1xIKuV3DPIQ 

oct21
60 Minutes CBS
https://www.cbsnews.com/news/deepfakes-real-fake-videos-60-minutes-2021-10-10/


sep21
Security risks are, I would argue, the single most important near-term danger associated with the rise of artificial intelligence. It is critical that we form an effective coalition between government and the commercial sector to develop appropriate regulations and safeguards before critical vulnerabilities are introduced. https://www.marketwatch.com/story/deepfakes-a-dark-side-of-artificial-intelligence-will-make-fiction-nearly-indistinguishable-from-truth-11632834440

sep21
One of the most important dangers is the emergence of deepfakes: high-quality fabrications that can be put to a variety of malicious uses, with the potential to threaten our security or even undermine the integrity of elections and the democratic process. https://www.marketwatch.com/story/deepfakes-a-dark-side-of-artificial-intelligence-will-make-fiction-nearly-indistinguishable-from-truth-11632834440

Ian Goodfellow, who invented GANs and has devoted much of his career to studying security issues within machine learning, says he doesn’t think we will be able to know if an image is real or fake simply by “looking at the pixels.” Instead, we’ll eventually have to rely on authentication mechanisms like cybernetic signatures for photos and videos. Perhaps someday every camera and mobile phone will inject a digital signature into every piece of media it records. One startup company, Truepic, already offers an app that does this. The company’s customers include major insurance companies that rely on photographs from their customers to document the value of everything from buildings to jewelry. https://www.marketwatch.com/story/deepfakes-a-dark-side-of-artificial-intelligence-will-make-fiction-nearly-indistinguishable-from-truth-11632834440


aug21

What you need to know about spotting deepfakes

https://venturebeat.com/2021/08/12/spotting-deepfakes/


Jul21

What can companies do except wait for the arrival of anti-deepfake tools? There are several steps businesses can take right now to defend against deepfakes:

1. Know your enemy: Make sure your security staff have this new threat on their radar and follow deepfake-related threat intelligence closely to be prepared for the looming inflexion point.

2. Have a plan: Establish incident response workflows and escalation procedures for this kind of attack. Remember: deepfakes utilise IT in the form of AI, but they are a business-level attack. So incident response will involve top management, IT security, finance, legal, and PR teams.

3. Spread the word: Include deepfakes in security awareness campaigns to inform the workforce about this threat. Teach staff to take a step back whenever they have doubts about information they receive, even credible audio/video information. Start with targeted awareness trainings for high-risk individuals such as C-level management, middle management, and the finance department.

4. Create channels: Establish workflows so that workers know how they can verify any information that arrives via a potential deepfake message. Just like two-factor authentication has become the de-facto standard for access security, double-checking critical information must become standard procedure as we approach the age of AI-generated misinformation. Also, there should be a backup channel: if verifying some information isn’t possible at the moment (attackers usually apply time pressure: “this is urgent!”), there must be an alternative mode of communication. Workers should be equipped with easy-to-use multi-channel communication tools they can effortlessly utilise even in times of time-pressure or crisis.

5. Rethink corporate culture: Even with old-fashioned BEC, the catalyst for a successful attack often is that workers don’t dare to question orders they were – seemingly – given by their superiors. They worry that their boss might be mad if they asked for confirmation. So an effective first line of defense is to re-evaluate the corporate culture, and aim for a flatter hierarchy where it’s “not a big thing” to use common sense and call one’s superior to ask: “Did you really just video-message me to transfer amount X to account Y in obscure offshore country Z?” https://gadget.co.za/tackling-the-new-deepfake-threat-how-to-fight-an-evil-genie/ 



jul21
Paper to read carefully (from page 13 onward)
(in the archive) Cognitive defense of the Joint Force in a digitizing world

jul21
The generated content can deceive users into believing it is real. These fabricated contents are termed deepfakes. The most common category of deepfakes is video deepfakes. Deep learning techniques, such as auto-encoders and generative adversarial networks (GANs), generate near-realistic digital content. The content generated poses a serious threat to the multiple dimensions of human life and civil society. This chapter provides a comprehensive discussion of deepfake generation, detection techniques, deepfake generation tools, datasets, applications, and research trends.
https://www.igi-global.com/chapter/recent-trends-in-deepfake-detection/284200 

jul21
So there is a need to develop a system that would detect and mitigate the negative impact of these AI-generated videos on society. The videos shared through social media are of low quality, so detecting such videos becomes difficult. Many researchers have previously analyzed deepfake detection using machine learning, support vector machines, and deep learning-based techniques such as convolutional neural networks with or without LSTMs. This paper analyses various techniques that several researchers have used to detect deepfake videos.
DEEPFAKE DETECTION TECHNIQUES: A REVIEW
Neeraj Guhagarkar, Sanjana Desai, Swanand Vaishyampayan, Ashwini Save
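
A minimal PyTorch sketch of the CNN-plus-LSTM detector family such reviews cover: a small CNN encodes each frame, an LSTM aggregates the frame features over time, and a linear head scores real versus fake. The layer sizes are arbitrary illustrations, not any specific reviewed architecture.

# Minimal PyTorch sketch of a CNN + LSTM video deepfake detector.
# Sizes are arbitrary; this illustrates the family, not any surveyed model.
import torch
import torch.nn as nn

class CnnLstmDetector(nn.Module):
    def __init__(self, feat_dim: int = 128, hidden: int = 64):
        super().__init__()
        self.cnn = nn.Sequential(                       # per-frame encoder
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)                # logits: real / fake

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        b, t = video.shape[:2]                          # video: (B, T, 3, H, W)
        feats = self.cnn(video.flatten(0, 1))           # (B*T, feat_dim)
        _, (h, _) = self.lstm(feats.view(b, t, -1))     # last hidden state
        return self.head(h[-1])                         # (B, 2)

model = CnnLstmDetector()
clip = torch.randn(2, 8, 3, 112, 112)                   # 2 clips of 8 frames each
print(model(clip).shape)                                # torch.Size([2, 2])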


mar21

Basically, it's relatively easy to implant false memories. Getting rid of them is the hard part. The study was conducted on 52 subjects who agreed to allow the researchers to attempt to plant a false childhood memory in their minds over several sessions. After a while, many of the subjects began to believe the false memories. The researchers then asked the subjects' parents to claim the false stories were true. The researchers discovered that the addition of a trusted person made it easier to both embed and remove false memories. Per the paper: "The present study therefore not only replicates and extends previous demonstrations of false memories but, crucially, documents their reversibility after the fact: Employing two ecologically valid strategies, we show that rich but false autobiographical memories can mostly be undone. Importantly, reversal was specific to false memories (i.e., did not occur for true memories)."

What does this have to do with Deepfakes? It’s simple: if we’re so easily manipulated through tidbits of exposure to tiny little ads in our Facebook feed, imagine what could happen if advertisers started hijacking the personas and visages of people we trust? If you can convince someone that the people they respect and care about believe they’ve done something wrong, it’s easier for them to accept it as a fact. How many law enforcement agencies in the world currently have an explicit policy against using manipulated media in the solicitation of a confession? Our guess would be: close to zero. With Deepfakes and enough time, you could convince someone of just about anything as long as you can figure out a way to get them to watch your videos. 

https://thenextweb.com/neural/2021/03/23/how-deepfakes-could-help-implant-false-memories-in-our-minds/

 

mar21

What is being done to combat such convincing forms of misinformation?

Well, there is a cat-and-mouse race to advance technologies that can help detect and identify deepfakes. Additionally, regulation or clarification of the rules may also help, because impersonation is often already illegal, particularly when it comes to public officials. And ensuring that there is accountability, and discouraging people from using deepfakes for fun or advertising, will hopefully create a norm towards trust. Building instead of breaking trust is essential not only for the use of technology, but for societies as a whole.
https://www.gzeromedia.com/in-60-seconds/cyber/the-dangers-of-deepfakes-and-the-need-for-norms-around-trust


mar21
Results indicated that Deepfaked videos and audio have a strong psychological impact on the viewer, and are just as effective in biasing their attitudes and intentions as genuine content. Many people are unaware that Deepfaking is possible; find it difficult to detect when they are being exposed to it; and most importantly, neither awareness nor detection serves to protect people from its influence. It is also worth asking (a) if people are aware that Deepfaking is possible, and (b) if they can detect when they are being exposed to it. Our findings were not encouraging: a large number of participants were unaware that content could be Deepfaked (44%), and even after they were told what it entailed, many were unable to determine if what they had just encountered was genuine or fake.

Deepfaked online content is highly effective in manipulating people's attitudes and intentions

mar21
Study warns deepfakes can fool facial recognition In a paper published on the preprint server Arxiv.org, researchers at Sungkyunkwan University in Suwon, South Korea demonstrate that APIs from Microsoft and Amazon can be fooled with commonly used deepfake-generating methods. In one case, one of the APIs — Microsoft’s Azure Cognitive Services — was fooled by up to 78% of the deepfakes the coauthors fed it.
https://venturebeat.com/2021/03/05/study-warns-deepfakes-can-fool-facial-recognition/ 

feb21
It is clear that in the current era of disinformation in which we all now reside, deepfakes represent a seriously dangerous weapon. And democracies will either have to learn to live with such lies or do their best to act quickly to preserve the truth before it irrevocably fades even further.
https://internationalbanker.com/technology/assessing-the-real-threat-posed-by-deepfake-technology/


feb21
Researchers showed detectors can be defeated by inserting inputs called adversarial examples into every video frame. The adversarial examples are slightly manipulated inputs that cause artificial intelligence systems such as machine learning models to make a mistake. In addition, the team showed that the attack still works after videos are compressed.
https://scitechdaily.com/computer-scientists-create-fake-videos-that-fool-state-of-the-art-deepfake-detectors/ + https://finance.yahoo.com/news/deepfake-detectors-can-be-duped-083601148.html + a group of scientists at UC San Diego has warned that detectors can, after all, be fooled. To demonstrate it, the team introduced inputs called "adversarial examples" into each video frame at the WACV 2021 conference, held in January. https://pplware.sapo.pt/high-tech/cientistas-provam-que-detetores-de-deepfakes-podem-ser-manipulados/
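
The idea can be sketched with the classic fast gradient sign method (FGSM) applied to every frame of a clip; the toy linear "detector" below is a differentiable stand-in for illustration, not one of the systems attacked in the study.

# Sketch of the per-frame adversarial-example attack with FGSM: nudge
# each frame a tiny step up the detector's loss gradient so the "fake"
# decision flips. The detector here is a toy stand-in.
import torch
import torch.nn as nn
import torch.nn.functional as F

detector = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))  # toy model
frames = torch.rand(8, 3, 64, 64)                # a fake clip of 8 frames
fake_label = torch.ones(8, dtype=torch.long)     # class 1 = "fake"

def fgsm_frames(x: torch.Tensor, eps: float = 2 / 255) -> torch.Tensor:
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(detector(x), fake_label)  # loss of the true label
    loss.backward()
    # Step up the loss gradient, away from the "fake" decision, and keep
    # the perturbation within +/- eps so it stays imperceptible.
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

adv = fgsm_frames(frames)
print("max perturbation:", (adv - frames).abs().max().item())  # <= eps
flipped = (detector(adv).argmax(1) != fake_label).sum().item()
print("detector flips on", flipped, "of 8 frames")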

jan21

New research finds that misinformation consumed in a video format is no more effective than misinformation in textual headlines or audio recordings. However, the persuasiveness of deepfakes is equal and comparable to these other media formats like text and audio.

Seeing is not necessarily believing these days. Based on these findings, deepfakes do not facilitate the dissemination of misinformation more than false texts or audio content. However, like all misinformation, deepfakes are dangerous to democracy and media trust as a whole. The best way to combat misinformation and deepfakes is through education. Informed digital citizens are the best defense. https://digitalcontentnext.org/blog/2021/01/25/how-powerful-are-deepfakes-in-spreading-misinformation/

nov20

Facially manipulated images and videos, or DeepFakes, can be used maliciously to fuel misinformation or defame individuals. Therefore, detecting DeepFakes is crucial to increase the credibility of social media platforms and other media-sharing web sites. (from the paper Adversarial Threats to DeepFake Detection: A Practical Perspective)

nov20

There’s no question that the “hoax” ad, if aired within 30 days of an election, could plausibly satisfy all the elements of Texas’s statute. That’s a serious problem for free political expression. LINK + video


nov20

There are now businesses that sell fake people. On the website Generated.Photos, you can buy a "unique, worry-free" fake person for $2.99, or 1,000 people for $1,000. If you just need a couple of fake people — for characters in a video game, or to make your company website appear more diverse — you can get their photos for free on ThisPersonDoesNotExist.com. Adjust their likeness as needed; make them old or young or the ethnicity of your choosing. If you want your fake person animated, a company called Rosebud.AI can do that and can even make them talk." LINK + The New York Times this week did something dangerous for its reputation as the nation's paper of record. Its staff played with a deepfake algorithm, and posted online hundreds of photorealistic images of non-existent people. For those who fear democracy being subverted by the media, the article will only confirm their conspiracy theories. The original biometric — a face — can be created in as much uniqueness and diversity as nature can, and with much less effort. LINK

nov20
Yet, the question of how to detect deepfakes is increasingly difficult to answer. According to Nasir Memon, a professor of computer science at NYU, "there's no money to be made out of detecting [fake media]," which means that few people are motivated to research ways to detect deepfakes. The implications of deepfake technology will continue to unravel as the battle between the detection and creation of realistic deep learning models plays out. Only one thing is clear: without better detection methods and more reliable policies, deepfakes pose an unprecedented threat to our perception of the media that defines our world.

nov20
“Potentially dangerous technologies like deepfakes continue to stay a step ahead of the intelligence employed to detect them and the policies intended to contain them” LINK

nov20

A Nanyang Technological University, Singapore (NTU Singapore) survey of 1,231 Singaporeans has found that deepfakes are easily fooling people into thinking fake news is actually real. What's more worrying is that these AI-powered tools are also deceiving those who claim to be aware of deepfakes in the first place. https://sea.mashable.com/tech/13337/think-you-can-spot-a-deepfake-survey-proves-that-even-the-best-get-fooled + One in three who are aware of deepfakes say they have inadvertently shared them on social media.  https://www.sciencedaily.com/releases/2020/11/201124092134.htm 

nov20

Beating deepfakes requires a shift in strategy
Instead of removing these deepfake media upon discovery (a move that will only promote the narrative being pushed) authorities and online platforms must call out the misuse for what it is, thereby undermining the actors. Ultimately, this will contribute to the most pressing concern in the online arena, namely countering the toxic narratives that are breeding extremism and militant violence.  

https://www.theamericanreporter.com/beating-deepfakes-requires-a-shift-in-strategy/


oct20

"In Event of Moon Disaster," produced by the MIT Center for Advanced Virtuality, combines edited archival NASA footage and an artificial intelligence-generated synthetic video of a Nixon speech, along with materials to demystify deepfakes. After our video was released, it reached nearly a million people within weeks. It was circulated on social platforms and community sites, demonstrating the potent combination of synthetic media's capacity to fool the eyes and social media's capacity to reach eyeballs. The numbers matched up: In an online quiz, 49 percent of people who visited our site said they incorrectly believed Nixon's synthetically altered face was real and 65 percent thought his voice was real. https://www.bostonglobe.com/2020/10/12/opinion/july-we-released-deepfake-mit-within-weeks-it-reached-nearly-million-people/

sep20

Expert says detecting deepfakes almost impossible. https://www.axios.com/deepfakes-technology-misinformation-problem-71bb7f2b-5dc2-4fbd-9b56-01ad430c1a4e.html
A study conducted by the University of California, Berkeley, in the United States, concluded that this kind of manipulation can often be indistinguishable from reality. The research was carried out by professor Hany Farid together with graduate student Shruti Agarwal, and found that, when the data are analyzed, there are more similarities than differences between a real image and a fake one. The surprise was that, in many cases, the image of the real person often blended in with those created through manipulation, producing a result that, if it can confuse machines, imagine what it does to humans. The result, according to Farid, is an unbalanced fight, one that uses elements of disinformation and takes advantage of the polarization generated by social networks to create a very negative outlook in which society is the loser. https://canaltech.com.br/inteligencia-artificial/sociedade-pode-estar-perdendo-a-guerra-contra-os-deep-fakes-alerta-professor-172775/

sep20
Popular deepfake apps are making it easier than ever to make AI-powered manipulated videos — spawning new memes, and an increased potential for abuse. https://www.businessinsider.com/ai-deepfake-apps-memes-misinformation-2020-9

sep20
(the big problem) "Experts have warned that deepfake technology is rapidly advancing at a rate far faster than the technology used to detect it, with one believing it could be too smart for humans to figure out. Commenting on the danger of deepfakes being "the stuff of dystopian science-fiction and film-makers' dreams", chief researcher for the NCC Group Jennifer Fernick said: "The concern that I would have for the enterprise is that the sophistication of existing deepfake technologies are certainly beyond most humans' threshold for being tricked by fake imagery." https://www.dailystar.co.uk/news/latest-news/deepfakes-turn-world-sci-fi-22715143



aug20
(it is increasingly difficult to fight them as they become commonplace...)
Windheim is part of a new group of online creators who are toying with deepfakes as the technology grows increasingly accessible and seeps into internet culture. The phenomenon is not surprising; media manipulation tools have often gained traction through play and parody. But it also raises fresh concerns about its potential for abuse. “There’s a fine line between using deepfakes for entertainment and memes, and using them for harm,” Windheim says. “In this tutorial, I’m saying, ‘This is how you make this particular deepfake.’ But the scary thing about the script is it can just be applied to make any type of deepfake you want.” Memers are making deepfakes, and things are getting weird. The rapidly increasing accessibility of the technology raises new concerns about its abuse. 
https://www.technologyreview.com/2020/08/28/1007746/ai-deepfakes-memes/


aug20
DeFaking Deepfakes: Understanding Journalists' Needs for Deepfake Detection https://www.usenix.org/conference/soups2020/presentation/sohrawardi

jul20

A pair of developments are being reported in efforts to thwart deepfake video and audio scams. Unfortunately, in the case of digitally mimicked voice attacks, the advice is old school. An open-access paper published by SPIE, an international professional association of optics and photonics, reports that a new algorithm has scored a precision rate in detecting deepfake video of 99.62 percent. It reportedly was accurate 98.21 percent of the time. https://www.biometricupdate.com/202007/deepfakes-some-progress-in-video-detection-but-its-back-to-the-basics-for-faked-audio
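
The article quotes two different metrics, precision and accuracy, which are easy to conflate. As a quick reminder of the difference, computed on an invented confusion matrix (not the study's data):

# Precision vs. accuracy on a made-up confusion matrix, with "fake" as
# the positive class. The counts are illustrative only.
tp, fp, fn, tn = 990, 4, 14, 992

precision = tp / (tp + fp)                  # of videos flagged fake, how many were fake
accuracy = (tp + tn) / (tp + fp + fn + tn)  # of all videos, how many judged correctly
print(f"precision={precision:.4f}  accuracy={accuracy:.4f}")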


jul20
Deepfakes and the New AI-Generated Fake Media Creation-Detection Arms Race. Manipulated videos are getting more sophisticated all the time—but so are the techniques that can identify them

Jun20
Facebook contest shows just how hard it is to detect deepfakes.
Facebook has revealed the winners of a competition for software that can detect deepfakes, doctored videos created using artificial intelligence. And the results show that researchers are still mostly groping in the dark when it comes to figuring out how to automatically identify them before they influence the outcome of an election or spark ethnic violence. The best algorithm in Facebook’s contest could accurately determine if a video was real or a deepfake just 65% of the time. https://fortune.com/2020/06/12/deepfake-detection-contest-facebook/ +https://ai.facebook.com/blog/deepfake-detection-challenge-results-an-open-initiative-to-advance-ai/


may20
It is very likely that future advances in deepfake technology will make it difficult, or will require a great deal of time and some training, to detect fake images or videos, and that only with the help of Artificial Intelligence will we be able to detect these forgeries. https://online.urjc.es/es/blog/item/703-deepfakes-las-falsificaciones-audiovisuales-una-nueva-arma-al-servicio-de-la-desinformacion


may20 Because new security measures consistently catch many deepfake images and videos, people may be lulled into a false sense of security and believe we have the situation under control. Unfortunately, that might be further from the truth than we realize. "Deepfakes will get only easier to generate and harder to detect as computers become more powerful and as learning algorithms get more sophisticated. Deepfakes are the coronavirus of machine learning," said Professor Bart Kosko in the Ming Hsieh Department of Electrical and Computer Engineering. https://viterbischool.usc.edu/news/2020/05/fooling-deepfake-detectors/


apr20

(COURTS) Jay-Z's company has taken action against a YouTube channel which created 'deepfake' videos that mimic the rapper's voice. Roc Nation filed takedown notices against two videos created by Vocal Synthesis that use 'artificial intelligence to make [Jay-Z] rap Billy Joel's We Didn't Start the Fire and Hamlet's "To be or not to be" soliloquy,' The Guardian reported on Wednesday. Although the two videos were initially taken down, as of Wednesday they were back up on the video streaming site. https://www.dailymail.co.uk/tvshowbiz/article-8270987/Jay-Z-takes-legal-action-against-deepfakes-likeness-rapping-Billy-Joel-Hamlet.html
(without social networks/tech platforms)

mar20 problems to combat
Unfortunately, there is no sustainable way to detect fake images in real time and at scale. This sobering fact will likely not change anytime soon.

There are several reasons for this. First, almost all metadata is lost, stripped or altered as an image travels through the internet. By the time that image hits a detection system, it will be impossible to reproduce lost metadata - and therefore details like the original date, time, and location of an image will likely remain unknown. https://www.weforum.org/agenda/2020/03/how-to-make-better-decisions-in-the-deepfake-era/
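
A quick Pillow sketch of that first problem: a routine re-encode of the kind platforms and messengers perform drops EXIF unless it is explicitly carried over. The file names and the single EXIF tag are illustrative.

# Demonstration with Pillow: EXIF survives the original save only if it
# is passed explicitly; a typical re-encode silently discards it.
from PIL import Image

img = Image.new("RGB", (64, 64))
exif = Image.Exif()
exif[306] = "2020:03:20 10:00:00"                 # tag 306 = DateTime
img.save("camera_shot.jpg", exif=exif)
print(len(Image.open("camera_shot.jpg").getexif()))   # 1: capture date survives

Image.open("camera_shot.jpg").save("reuploaded.jpg", quality=85)  # re-encode
print(len(Image.open("reuploaded.jpg").getexif()))    # 0: provenance is gone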


apr20 In a paper published this week on the preprint server Arxiv.org, researchers from Google and the University of California at Berkeley demonstrate that even the best forensic classifiers — AI systems trained to distinguish between real and synthetic content — are susceptible to adversarial attacks, or attacks leveraging inputs designed to cause mistakes in models. Their work follows that of a team of researchers at the University of California at San Diego, who recently demonstrated that it's possible to bypass fake video detectors by adversarially modifying — specifically, by injecting information into each frame — videos synthesized using existing AI generation methods.
https://venturebeat.com/2020/04/08/researchers-fool-deepfake-detectors-into-classifying-fake-images-as-real/

Mar20: Who's Responsible For Combatting Deepfakes In The 2020 Election? Lawmakers and tech leaders have a serious puzzle to solve. While freedom of speech is crucial, the law doesn’t tolerate hate speech, threatening speech or slander. When people spread lies and fake depictions of political figures via social media platforms with the intention of misinforming the public, whose job is it to police this material, and can it be policed? https://www.forbes.com/sites/forbestechcouncil/2020/03/04/whos-responsible-for-combatting-deepfakes-in-the-2020-election/#6b72ba4e1c05
mar20 Manipulated videos created using artificial intelligence are increasingly common on the internet, but the technology brings risks for society. Swiss engineers are developing technologies to detect and combat these manipulations on the web. https://www.swissinfo.ch/por/manipula%C3%A7%C3%B5es-na-m%C3%ADdia_como-pesquisadores-su%C3%AD%C3%A7os-tentam-reconhecer--fake-news-/45598306

A losing battle?
3/11/19 “Ultimately I think it’s a losing battle,” Allibhai said. “The whole nature of this technology is built as an adversarial network where one tries to create a fake and the other tries to detect a fake. The core component is trying to get machine learning to improve all the time…Ultimately it will circumvent detection tools.” (LINK)

REGULATIONS: A LOST CAUSE? https://www.govtech.com/products/Deepfakes-The-Next-Big-Threat-to-American-Democracy.html


Fighting them: is it satire? Is it political combat? Is it a violation of personal identity? HOW SHOULD this video BE CLASSIFIED? https://www.cbsnews.com/video/presidents-words-used-to-create-deepfakes-at-davos/

feb20 FIGHTING THEM, BUT WITH DANGERS: Any policy response seeking to address deepfakes would likely face constitutional and other legal challenges along with the technical challenges of detection. Key policy questions include:
What is the maturity of deepfake detection technology? How much progress have federal programs and public-private partnerships made in developing such technology? What expertise will be required to ensure detection keeps pace with deepfake technology?
What rights do individuals have to their privacy and likenesses? What rights do creators of deepfakes have under the First Amendment? What policy options exist regarding election interference? What policy options exist regarding exploitation and image abuse, such as non-consensual pornography?
What can be done to educate the public about deepfakes? Should manipulated media be marked or labeled? Should media be traceable to its origin to determine authenticity?
What should the roles of media outlets and social media companies be in detecting and moderating content that has been altered or falsified? https://www.gao.gov/products/gao-20-379sp

jan20 Impossible to fight? Speaking to Yahoo Finance, Truepic CEO Jeff McGregor said the rapid advancement of A.I., and the scale at which visual deception is being disseminated, poses major challenges to researchers developing tools for detection. Instead of trying to detect what's false, McGregor stated, companies should focus on establishing what's real. LINK
jan20 AI needs to be regulated, says Google
"There is no doubt that Artificial Intelligence needs to be regulated. It is too important not to. The only question is how to approach it. Companies like ours cannot simply build promising new technologies and let market forces decide how they will be used. It is equally our duty to ensure that technology is harnessed for good and available to everyone." Sundar Pichai, CEO of Google and Alphabet LINK
JAN20: Reddit has announced updates to its impersonation policy to ensure it’s prepared for bad actors trying to manipulate its platform with malicious deepfakes and other manipulated content. LINK
 USA
Launch a competition…
dec19 Congress hopes a $5 million prize competition will unlock the secret to automatically detecting deepfakes. The annual defense policy bill, which the president signed into law Dec. 20, called on the Intelligence Advanced Research Projects Activity to start the competition as a way to stimulate the research, development, or commercialization of technologies that can automatically detect deepfakes. Congress authorized up to $5 million in cash prizes for the competition. + https://federalnewsnetwork.com/defense-main/2020/01/congress-charges-iarpa-with-creating-prize-challenges-for-5g-deepfake-detection/

McAfee is developing a deepfake detection framework, according to the researchers, using computer vision and deep learning techniques. However, no specifics were provided as to whether major social media platforms have begun using tools to spot false content. https://siliconangle.com/2020/02/26/never-ending-war-security-experts-battle-malware-deepfakes-nation-states/

feb20 ZeroFOX has picked up an additional $74 million in funding that will be employed in part to advance the development of artificial intelligence (AI) tools capable of identifying fake digital content, including deepfake videos. https://securityboulevard.com/2020/02/zerofox-raises-74m-to-combat-deepfakes-with-ai/

Difficult to fight when…
Showmetech: How can you tell whether a video is a deepfake?
Bruno Sartori: I think that today, to identify whether something is a deepfake or not, we have to look more at the context, because the videos are so close to perfect. For example, faces used to come out a bit more blurred, and the teeth came out fused together. Today the process is so advanced that we have already overcome those obstacles, and it is hard even for me to tell whether a video is fake, even looking closely and working a great deal with this. LINK


How will states fight back?
mar20 One conclusion reached by participants at the conference, titled "Red pill vs. blue pill: how deepfakes are defining digital reality", was that the justice system has "an enormous problem" ahead of it. https://www.conjur.com.br/2020-mar-05/justica-aprender-lidar-provas-deepfakes
In mid-September 2018, two Democrats and one Republican representative sent a letter to the director of national intelligence asking the intelligence community to assess the possible national security threats posed by deepfake technology and present a report to Congress by the end of 2018. Lawmakers cited the potential for foreign adversaries to use deepfake videos against U.S. interests as a key reason to investigate them. https://www.poynter.org/ifcn/anti-misinformation-actions/#us
9/12/19: House Passes Bipartisan Legislation on STEM, Deepfakes, Sustainable Chemistry, and Biotechnology (LINK)
JAN20. Can deepfakes be regulated? (LINK)
BRAZIL: jan20 In an attempt to contain the spread of "deepfakes", the Superior Electoral Court (TSE) created new measures in the legislation and a dedicated task force. The auxiliary judge of the TSE presidency and coordinator of the Program to Counter Disinformation, Ricardo Fiorezze, said that, besides banning manipulated videos, the idea is also to publicize the true information. LINK
CHINA: Given that the number of users of online audio and video platforms has grown significantly and that new technologies such as "deepfakes" carry potential for abuse, this type of regulation is necessary, said the Cyberspace Administration of China, citing problems such as the dissemination of illegal and malicious information and the violation of people's legitimate rights and interests. http://portuguese.xinhuanet.com/2019-11/30/c_138595549.htm
LAWS: As with other fake news and disinformation, questions around attribution and responsibility arise. For one, who should be responsible for regulating the content? Is it the video creator? Those who share it? Or the platforms that they are shared on? (LINK)
-        puts the emphasis on training staff. (LINK)
-       When it comes to deepfakes, emerging technologies like blockchain can come to the fore to provide some level of security, approval and validation. Blockchain has typically been touted as a visibility and transparency play, where once something is done, the who and when become apparent; but it can go further (LINK)
19/11 More on blockchain:








jan20 Criticisms: Policing deepfakes isn't simple. As Facebook pointed out in its announcement last week, media can be manipulated for benign reasons, for example to make video sharper and audio clearer. Some forms of manipulation are clearly meant as jokes, satires, parodies or political statements — as, for example, when a rock star or politician is depicted as a giant. That's not Facebook's concern. LINK


Facebook helped Reuters create an online course on identifying deepfakes LINK
feb20 Reuters built a prototype for automated news videos using Deepfakes tech LINK



DIFFICULTIES IN FIGHTING DEEPFAKES

Researchers trying to detect "deepfakes", videos manipulated to swap faces or alter what public figures say, face forgery techniques that are increasingly advanced and accessible to the general public. In India, a journalist and a member of parliament were targeted with manipulated obscene videos. In Belgium, the Flemish socialist party depicted United States President Donald Trump urging Brussels to withdraw from the Paris climate agreement; the warning that the video had been edited went unnoticed by countless internet users. Techniques: to detect manipulations, multiple avenues are being studied. The first, which only applies to figures who have already been widely filmed and photographed, consists of finding original images that predate the manipulation in order to compare the suspect video with the person's usual "gestural signature".
A second focuses on the defects generated by the manipulation (an inconsistency in eye blinking, the way the hair falls, or the blending between images), but the technologies adapt and progressively "erase" them.
The third avenue consists of training artificial intelligence models to detect manipulated videos on their own. The success rates are very good, but they depend on the examples available. "A deepfake detector that worked a year ago will not necessarily work this year," explains Vincent Nozick. https://blogs.ne10.uol.com.br/mundobit/2019/09/13/a-deteccao-de-deepfakes-uma-corrida-contra-o-tempo/
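
The blink cue in the second avenue is often operationalized as the eye aspect ratio (EAR), which drops sharply while the eye is closed; a clip whose EAR never dips is suspect. Below is a sketch on synthetic numbers, assuming six eye-contour landmarks per eye have already been extracted by some face-landmark tool, with an illustrative closed-eye threshold.

# Sketch of the blink cue: eye aspect ratio (EAR) from six eye-contour
# landmarks, and a blink counter over a synthetic EAR trace. Landmark
# extraction is assumed done elsewhere; the threshold is illustrative.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) landmark coordinates around one eye."""
    v1 = np.linalg.norm(eye[1] - eye[5])    # vertical distances
    v2 = np.linalg.norm(eye[2] - eye[4])
    h = np.linalg.norm(eye[0] - eye[3])     # horizontal distance
    return float((v1 + v2) / (2.0 * h))

eye = np.array([[0, 1], [2, 2], [4, 2], [6, 1], [4, 0], [2, 0]], dtype=float)
print(round(eye_aspect_ratio(eye), 2))      # ~0.33 for an open eye

def blink_count(ear_per_frame, closed: float = 0.2) -> int:
    below = np.asarray(ear_per_frame) < closed
    return int(np.sum(below[1:] & ~below[:-1]))  # open-to-closed transitions

ears = np.full(300, 0.31)                   # 10 s at 30 fps, eyes open
ears[45:50] = ears[200:205] = 0.12          # two brief blinks
print(blink_count(ears), "blinks")          # 0 across a long clip would be a red flag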



