mar24
In the long term, deepfakes may eventually become indistinguishable from real imagery. When that day comes, we will no longer be able to rely on detection as a strategy. So, what do we have left that AI cannot deepfake? Here are two things: physical reality itself, and strong cryptography, which can verifiably bind data to a digital identity.
https://techxplore.com/news/2024-04-deepfake-wrong.html
jan24
A pair of U.S. House of Representatives members have introduced a bill intended to restrict unauthorized fraudulent digital replicas of people.
The bulk of the motivation behind the legislation, based on the wording of the bill, is the protection of actors, people of notoriety and girls and women defamed through fraudulent porn made with their face template.
Curiously, the very real threat of using deepfakes to defraud just about everyone else in the nation is not mentioned. Those risks are growing and could result in uncountable financial damages as organizations rely on voice and face biometrics for ID verification.
The representatives, María Elvira Salazar (R-Fla.) and Madeleine Dean (D-Penn.), do not mention the global singer/songwriter Taylor Swift in their press release, but it cannot have escaped them that she has been victimized, too.
https://www.biometricupdate.com/202401/us-lawmakers-attack-categories-of-deepfake-but-miss-everyday-fraud
Popular search engines like Google and Bing are making it easy to surface nonconsensual deepfake pornography by placing it at the top of search results, NBC News reported Thursday.
These controversial deepfakes superimpose faces of real women, often celebrities, onto the bodies of adult entertainers to make them appear to be engaging in real sex. Thanks in part to advances in generative AI, there is now a burgeoning black market for deepfake porn that could be discovered through a Google search, NBC News previously reported.
NBC News uncovered the problem by turning off safe search, then combining the names of 36 female celebrities with obvious search terms like "deepfakes," "deepfake porn," and "fake nudes." Bing generated links to deepfake videos in top results 35 times, while Google did so 34 times. Bing also surfaced "fake nude photos of former teen Disney Channel female actors" using images where actors appear to be underaged.
nov23
There is a small academic field, called media forensics, that seeks to combat these fakes. But it is “fighting a losing battle,” a leading researcher, Hany Farid, has warned. Last year, Farid published a paper with the psychologist Sophie J. Nightingale showing that an artificial neural network is able to concoct faces that neither humans nor computers can identify as simulated. Ominously, people found those synthetic faces to be trustworthy; in fact, we trust the “average” faces that A.I. generates more than the irregular ones that nature does.
https://www.newyorker.com/magazine/2023/11/20/a-history-of-fake-things-on-the-internet-walter-j-scheirer-book-review
nov23
Developers of artificial intelligence platforms could soon release technology that allows users to make images and videos that would be nearly indistinguishable from reality.
Companies such as OpenAI, the developer behind the popular ChatGPT platform, and other AI companies are nearing the release of tools that will allow the creation of widespread and realistic fake videos as early as next year, according to a report from Axios.
https://www.foxnews.com/us/deepfakes-indistinguishable-reality-2024-report-warns
sep23 (doesn't work)
A study of people's ability to detect "deepfakes" has shown humans perform fairly poorly, even when given hints on how to identify video-based deceit.
https://phys.org/news/2023-09-humans-easily-deepfakes.html
aug23
Humans are able to detect artificially generated speech only 73% of the time, a study has found, with the same levels of accuracy found in English and Mandarin speakers.
Researchers at University College London used a text-to-speech algorithm trained on two publicly available datasets, one in English and the other in Mandarin, to generate 50 deepfake speech samples in each language.
Deepfakes, a form of generative artificial intelligence, are synthetic media that is created to resemble a real person’s voice or the likeness of their appearance.
The sound samples were played for 529 participants to see whether they could detect the real sample from fake speech. The participants were able to identify fake speech only 73% of the time. This number improved slightly after participants received training to recognise aspects of deepfake speech.
https://www.theguardian.com/technology/2023/aug/02/humans-can-detect-deepfake-speech-only-73-of-the-time-study-finds
https://www.popsci.com/technology/audio-deepfake-study/
mar23
Recent rapid advancements in deepfake technology have allowed the creation of highly realistic fake media, such as video, image, and audio. These materials pose significant challenges to human authentication, such as impersonation, misinformation, or even a threat to national security. To keep pace with these rapid advancements, several deepfake detection algorithms have been proposed, leading to an ongoing arms race between deepfake creators and deepfake detectors. Nevertheless, these detectors are often unreliable and frequently fail to detect deepfakes. This study highlights the challenges they face in detecting deepfakes, including (1) the pre-processing pipeline of artifacts and (2) the fact that generators of new, unseen deepfake samples have not been considered when building the defense models. Our work sheds light on the need for further research and development in this field to create more robust and reliable detectors.
Why Do Deepfake Detectors Fail?
Binh Le (Sungkyunkwan University, South Korea; bmle@g.skku.edu), Shahroz Tariq (CSIRO’s Data61, Australia; shahroz.tariq@data61.csiro.au), Alsharif
jan23
Two years ago, Microsoft's chief scientific officer Eric Horvitz, the co-creator of the spam email filter, began trying to solve this problem. "Within five or ten years, if we don't have this technology, most of what people will be seeing, or quite a lot of it, will be synthetic. We won't be able to tell the difference."
"Is there a way out?" Horvitz wondered.
Eventually, Microsoft and Adobe joined forces and designed a new feature called Content Credentials, which they hope will someday appear on every authentic photo and video.
Here's how it works:
Imagine you're scrolling through your social feeds. Someone sends you a picture of snow-covered pyramids, with the claim that scientists found them in Antarctica – far from Egypt! A Content Credentials icon, published with the photo, will reveal its history when clicked on.
"You can see who took it, when they took it, and where they took it, and the edits that were made," said Rao. With no verification icon, the user could conclude, "I think this person may be trying to fool me!"
https://www.cbsnews.com/news/creating-a-lie-detector-for-deepfakes-artificial-intelligence/
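The mechanism Rao describes can be reduced to a tiny sketch: the capture device publishes a manifest (who, when, where, plus a digest of the pixels), and a viewer recomputes the digest before trusting what it displays. This is a stdlib-only illustration, not the real Content Credentials (C2PA) format, which is far richer; the names and bytes below are invented.

```python
import hashlib

def make_manifest(image_bytes: bytes, author: str, captured_at: str) -> dict:
    """Simplified stand-in for a Content Credentials manifest: provenance
    metadata plus a digest of the exact pixels it vouches for."""
    return {
        "author": author,
        "captured_at": captured_at,
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
    }

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """True only if the bytes being displayed are the bytes the manifest describes."""
    return hashlib.sha256(image_bytes).hexdigest() == manifest["sha256"]

photo = b"\x89PNG...snowy-pyramids"  # stand-in for real image bytes
manifest = make_manifest(photo, author="example-photographer",
                         captured_at="2023-01-05T12:00:00Z")

print(verify_manifest(photo, manifest))            # untouched image: True
print(verify_manifest(photo + b"edit", manifest))  # any alteration: False
```

A real manifest is additionally signed by the capture device, so a forger cannot simply recompute the hash after editing.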
jan23
Speech deepfakes are artificial voices generated by machine learning models. Previous literature has highlighted deepfakes as one of the biggest threats to security arising from progress in AI due to their potential for misuse. However, studies investigating human detection capabilities are limited. We presented genuine and deepfake audio to n = 529 individuals and asked them to identify the deepfakes. We ran our experiments in English and Mandarin to understand if language affects detection performance and decision-making rationale. Detection capability is unreliable. Listeners only correctly spotted the deepfakes 73% of the time, and there was no difference in detectability between the two languages. Increasing listener awareness by providing examples of speech deepfakes only improves results slightly. The difficulty of detecting speech deepfakes confirms their potential for misuse and signals that defenses against this threat are needed.
Warning: Humans Cannot Reliably Detect Speech Deepfakes
Kimberly T. Mai, Sergi Bray, Toby Davies, and Lewis D. Griffin (Department of Security and Crime Science and Department of Computer Science, University College London)
nov22
Remember Will Smith's brilliant performance in that remake of ? Well, it turns out almost half the participants in a new study on 'deepfakes' believed fake remakes featuring different actors in old roles were real, highlighting the risk of 'false memory' created by online technology. The study, carried out by researchers at the School of Applied Psychology in University College Cork, presented 436 people with deepfake videos of fictitious movie remakes.
Deepfakes are manipulated media created using artificial intelligence technologies, where an artificial face has been superimposed onto another person’s face, resulting in a highly convincing recording of someone doing or saying something they never did.
https://www.irishexaminer.com/news/arid-41006312.html
aug22
Deepfake detection loses accuracy somewhere between your brain and your mouth
A team of neuroscientists researching deepfakes at the University of Sydney have found that people seem to be able to identify them at a subconscious level more frequently than at a conscious one.
A new paper titled ‘Are you for real? Decoding realistic AI-generated faces from neural activity’ describes how brain activity reflects whether a presented image is real or fake more accurately than simply asking the person.
The researchers used electroencephalography (EEG) signals to measure the neurological responses of subjects, and found a consistent neural response associated with faces for approximately 170 milliseconds. When the face was real, and only then, the response was sustained beyond 400ms.
https://www.biometricupdate.com/202208/deepfake-detection-loses-accuracy-somewhere-between-your-brain-and-your-mouth
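The reported signature, a face-evoked response at about 170 ms that is sustained past 400 ms only for real faces, implies a very simple decision rule. The sketch below applies it to synthetic waveforms; the sampling rate, amplitudes, and threshold are all invented for illustration and are not from the study.

```python
# Toy decision rule from the reported EEG finding: both real and fake faces
# evoke an early (~170 ms) response, but only real faces sustain it past 400 ms.
# Waveforms are indexed in milliseconds (an assumed 1 kHz sampling rate).
THRESHOLD = 1.0  # arbitrary amplitude units, invented for the sketch

def mean_amp(waveform, start_ms, end_ms):
    window = waveform[start_ms:end_ms]
    return sum(abs(v) for v in window) / len(window)

def looks_real(waveform) -> bool:
    early = mean_amp(waveform, 150, 200)  # around the ~170 ms face response
    late = mean_amp(waveform, 400, 600)   # sustained window
    return early > THRESHOLD and late > THRESHOLD

# Synthetic ERPs: both show the early response; only the "real" one persists.
real_erp = [0.0] * 150 + [2.0] * 450
fake_erp = [0.0] * 150 + [2.0] * 100 + [0.2] * 350

print(looks_real(real_erp))  # True
print(looks_real(fake_erp))  # False
```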
jul22
When it comes to deepfakes, what if fact could be distinguished from fraud? University of Sydney neuroscientists have discovered a new way: they have found that people’s brains can detect AI-generated fake faces, even though people could not report which faces were real and which were fake.
Deepfake videos, images, audio, or text appear to be authentic, but in fact are computer generated clones designed to mislead you and sway public opinion. They are the new foot soldiers in the spread of disinformation and are rife – they appear in political, cybersecurity, counterfeiting, and border control realms.
https://www.sydney.edu.au/news-opinion/news/2022/07/11/your-brain-is-better-at-busting-deepfakes-than-you-.html
jun22
Deepfaked Online Content is Highly Effective in Manipulating Attitudes & Intentions
Disinformation has spread rapidly through social media and news sites, biasing our (moral) judgements of individuals and groups. “Deepfakes”, a new type of AI-generated media, represent a powerful tool for spreading disinformation online. Although they may appear genuine, Deepfakes are hyper-realistic fabrications that enable one to digitally control another person’s appearance and actions. Across five studies (N = 2033) we examined the psychological impact of Deepfakes on viewers. Participants were exposed to either genuine or Deepfaked online content, after which their (implicit) attitudes and sharing intentions were measured. We found that Deepfakes quickly and effectively allow their creators to manipulate public perceptions of a target in both positive and negative directions. Many are unaware that Deepfaking is possible, find it difficult to detect when they are being exposed to it, and neither awareness nor detection serves to protect them from its influence.
(study)
jun22
Deepfakes: How easy are they to make and detect?
ISACA Redstone Arsenal
May 16, 2022
Catherine Bernaciak, PhD
Shannon Gallagher, PhD
Dominic Ross
(in the supporting materials)
may22
We present a real-world use case for spoofing voice authentication in a customer care call center. Based on this scenario, we evaluate the feasibility of attacking such a system and create an attacker profile. For this purpose, we examine three available speech synthesis tools and discuss their usability. We use these tools and acquired knowledge to generate a dataset including deepfake speech and assess the resilience of voice biometrics systems against deepfakes.
https://dl.acm.org/doi/abs/10.1145/3477314.3507013
apr22
tips for countering deepfakes
https://www.latestly.com/socially/social-viral/fact-check/7-you-can-identify-deepfake-videos-with-the-same-process-take-a-screenshot-from-the-latest-tweet-by-snopes-com-3545724.html
mar22
According to experts interviewed for the study, the amount of manipulated audiovisual content will grow exponentially as sophisticated deepfake technology is likely to become accessible to the general public within the next two to three years. Although the technology could have positive applications for democracy, e.g., in creating satire, the report warns that deepfakes have the ability to cause major societal harms. Media and journalists could become hesitant to use video evidence if they have to check all content for authenticity; legal proceedings may require longer investigations in order to rule out fabricated evidence; elections can be disturbed by fake clips discrediting political opponents; and the growth of deepfake pornography could negatively impact the position of women in society.
https://merlin.obs.coe.int/article/9415
mar22
Scientists at the MIT Media Lab showed almost 6,000 people 16 authentic political speeches and 16 that were doctored by AI. The soundbites were presented in permutations of text, video, and audio, such as video with subtitles or only text. The participants were told that half of the content was fake, and asked which snippets they believed were fabricated. When shown text alone, the respondents were only barely better at identifying falsehoods (57% accuracy) than random guessing. They were a bit more accurate when given video with subtitles (66%), and far more successful when shown both video and audio (82%). The study authors said the participants relied more on how something was said than the speech content itself:
The finding that fabricated videos of political speeches are easier to discern than fabricated text transcripts highlights the need to re-introduce and explain the oft-forgotten second half of the ‘seeing is believing’ adage.
https://thenextweb.com/news/deepfakes-study-finds-doctored-text-is-more-manipulative-than-phony-video-mit-media-lab
However, the volunteers were far less accurate when analyzing plain text: they spotted falsehoods only 57% of the time, showing that, depending on the circumstances, lying in writing remains much more effective than creating fake videos of public figures. And you don't even need an AI to write false information. https://tecnoblog.net/meiobit/457628/textos-falsos-mais-perigosos-deepfake-ferramentas-ia/
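A quick way to see how far these accuracies sit from guessing is a one-sample z-test against chance. The trial count below is a made-up placeholder, since the sketch needs some n; the actual study's design (roughly 6,000 participants judging 16 real and 16 fabricated speeches) is more complex than independent trials.

```python
import math

def z_vs_chance(accuracy: float, n: int, chance: float = 0.5) -> float:
    """Normal-approximation z-score for whether an observed accuracy over n
    independent judgements differs from guessing."""
    se = math.sqrt(chance * (1 - chance) / n)
    return (accuracy - chance) / se

# n = 1000 is a hypothetical trial count, chosen only to make the sketch run.
for label, acc in [("text only", 0.57), ("video + subtitles", 0.66), ("video + audio", 0.82)]:
    print(f"{label}: z = {z_vs_chance(acc, n=1000):.1f}")
```

Even the weakest condition (57% on text alone) sits well above chance at a large n; the study's point is the gap between the text and audiovisual conditions.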
jan22
As the technology continues to improve and fake videos proliferate, there is uncertainty about how people will discern genuine from manipulated videos, and how this will affect trust in online content. This paper conducts a pair of experiments aimed at gauging the public's ability to detect deepfakes from ordinary videos, and the extent to which content warnings improve detection of inauthentic videos. In the first experiment, we consider capacity for detection in natural environments: that is, do people spot deepfakes when they encounter them without a content warning? In the second experiment, we present the first evaluation of how warning labels affect capacity for detection, by telling participants at least one of the videos they are to see is a deepfake and observing the proportion of respondents who correctly identify the altered content. Our results show that, without a warning, individuals are no more likely to notice anything out of the ordinary when exposed to a deepfake video of neutral content (32.9%), compared to a control group who viewed only authentic videos (34.1%). Second, warning labels improve capacity for detection from 10.7% to 21.6%; while this is a substantial increase, the overwhelming majority of respondents who receive the warning are still unable to tell a deepfake from an unaltered video. A likely implication of this is that individuals, lacking capacity to manually detect deepfakes, will need to rely on the policies set by governments and technology companies around content moderation.
Do Content Warnings Help People Spot a Deepfake? Evidence from Two Experiments
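Whether the jump from 10.7% to 21.6% detection is more than noise can be checked with a pooled two-proportion z-test. The per-arm sample size below is an assumption made for the sketch; the paper reports the rates, but the formula needs an n per group.

```python
import math

def two_prop_z(p1: float, n1: int, p2: float, n2: int) -> float:
    """Pooled two-proportion z statistic, e.g. detection rate without vs with
    a content warning."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

# 500 respondents per arm is a hypothetical size, not the paper's actual n.
z = two_prop_z(0.107, 500, 0.216, 500)
print(f"z = {z:.2f}")  # comfortably significant at this assumed n
```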
jan22
People can't distinguish deepfakes from real videos, even if they are warned about their existence in advance, The Independent has reported, citing a study conducted by the University of Oxford and Brown University. One group of participants watched five real videos, and another watched four real videos with one deepfake, after which viewers were asked to identify which one was not real. https://sputniknews.com/20220115/people-cant-distinguish-deepfake-from-real-videos-even-if-warned-in-advance-study-says-1092275613.html
A new tool can distinguish a deepfake from a real video by analyzing the corneas. It has always been said that the eyes are the mirror of the soul, and the proverb holds up: thanks to the distinctive glint of our eyes, an AI developed at the University at Buffalo can tell a real video from one where a deepfake has put someone's face on another person's body. Many have made deepfakes for entertainment, or to see how they would look in a superhero movie; others have used them to put celebrities into porn, and, worst of all, to promote disinformation campaigns. The tool, which was 94% successful in its tests, works by analyzing the corneas. https://cvbj.biz/the-glitter-of-the-eyes-is-the-trick-to-detect-deepfakes-with-this-ai-technology.html
One promising approach involves tracking a video’s provenance, “a record of everything that happened from the point that the light hit the camera to when it shows up on your display,” explained James Tompkin, a visual computing researcher at Brown.
But problems persist. “You need to secure all the parts along the chain to maintain provenance, and you also need buy-in,” Tompkin said. “We’re already in a situation where this isn’t the standard, or even required, on any media distribution system.”
And beyond simply ignoring provenance standards, wily adversaries could manipulate the provenance systems, which are themselves vulnerable to cyberattacks. “If you can break the security, you can fake the provenance,” Tompkin said. “And there’s never been a security system in the world that’s never been broken into at some point.”
Given these issues, a single silver bullet for deepfakes appears unlikely. Instead, each strategy at our disposal must be just one of a “toolbelt of techniques we can apply,” Tompkin said. However, as the events of last October highlighted, Americans must be cautious about whom they depend on to distinguish deepfake from reality. Governments have vested interests in defending their dominion, and will not hesitate to engage in manipulation to advance that goal: our political authorities cannot and should not be the sole arbiter of truth. “I feel like it can’t be government oversight,” Littman said, “but it can be that people sign on to expectations or systems that make it easier to track the provenance of the information.”
IS REPORTING IT WORSE??
A false accusation on Twitter will travel around the world a thousand times in the time it takes the truth to travel a mile. And even if the belated truth does manage to make it around the world, it will never reach everyone who last knew you for the “you” in the deepfake. Video doesn’t lie, right? So why follow updates about the event any more?
Yet while deepfake technology can and is being used for horrible things, the technology itself is not evil. It will be transformative for entertainment, marketing and even health industries; already deepfake tech enables some who have lost their voice to speak again, sounding precisely like themselves.
People are the problem. How do you solve that? There are limited steps you can take, but primarily, stop posting so much of your life on social media. Any video of you (or your spouse, parent, or child) is fodder for deepfake AI. If you can’t stop posting, at least limit the audience for the videos you post. https://www.telegraph.co.uk/news/2021/10/30/deepfaked-committing-crime-should-worried/
https://www.youtube.com/watch?v=ccI0SCA04X4
When photojournalist Jonas Bendiksen released a book about the fake news industry, he got messages from people thanking him for covering such an important issue. But he says no-one noticed one thing about the book: everything in it was also fake. "The whole intention was for people to find this out … the problem is that didn't happen," Bendiksen told The Current's Matt Galloway. https://www.cbc.ca/radio/thecurrent/the-current-for-oct-20-2021-1.6217855/this-photojournalist-faked-an-entire-book-to-highlight-how-hard-it-is-to-spot-misinformation-1.6218251
Dispelling Misconceptions and Characterizing the Failings of Deepfake Detection
Ian Goodfellow, who invented GANs and has devoted much of his career to studying security issues within machine learning, says he doesn’t think we will be able to know if an image is real or fake simply by “looking at the pixels.” Instead, we’ll eventually have to rely on authentication mechanisms like cybernetic signatures for photos and videos. Perhaps someday every camera and mobile phone will inject a digital signature into every piece of media it records. One startup company, Truepic, already offers an app that does this. The company’s customers include major insurance companies that rely on photographs from their customers to document the value of everything from buildings to jewelry. https://www.marketwatch.com/story/deepfakes-a-dark-side-of-artificial-intelligence-will-make-fiction-nearly-indistinguishable-from-truth-11632834440
What you need to know about spotting deepfakes
What can companies do except wait for the arrival of anti-deepfake tools? There are several steps businesses can take right now to defend against deepfakes:
1. Know your enemy: Make sure your security staff have this new threat on their radar and follow deepfake-related threat intelligence closely to be prepared for the looming inflexion point.
2. Have a plan: Establish incident response workflows and escalation procedures for this kind of attack. Remember: deepfakes utilise IT in the form of AI, but they are a business-level attack. So incident response will involve top management, IT security, finance, legal, and PR teams.
3. Spread the word: Include deepfakes in security awareness campaigns to inform the workforce about this threat. Teach staff to take a step back whenever they have doubts about information they receive, even credible audio/video information. Start with targeted awareness trainings for high-risk individuals such as C-level management, middle management, and the finance department.
4. Create channels: Establish workflows so that workers know how they can verify any information that arrives via a potential deepfake message. Just like two-factor authentication has become the de-facto standard for access security, double-checking critical information must become standard procedure as we approach the age of AI-generated misinformation. Also, there should be a backup channel: if verifying some information isn’t possible at the moment (attackers usually apply time pressure: “this is urgent!”), there must be an alternative mode of communication. Workers should be equipped with easy-to-use multi-channel communication tools they can effortlessly utilise even in times of time-pressure or crisis.
5. Rethink corporate culture: Even with old-fashioned BEC, the catalyst for a successful attack often is that workers don’t dare to question orders they were – seemingly – given by their superiors. They worry that their boss might be mad if they asked for confirmation. So an effective first line of defense is to re-evaluate the corporate culture, and aim for a flatter hierarchy where it’s “not a big thing” to use common sense and call one’s superior to ask: “Did you really just video-message me to transfer amount X to account Y in obscure offshore country Z?” https://gadget.co.za/tackling-the-new-deepfake-threat-how-to-fight-an-evil-genie/
Basically, it’s relatively easy to implant false memories. Getting rid of them is the hard part. The study was conducted on 52 subjects who agreed to allow the researchers to attempt to plant a false childhood memory in their minds over several sessions. After a while, many of the subjects began to believe the false memories. The researchers then asked the subjects’ parents to claim the false stories were true. The researchers discovered that the addition of a trusted person made it easier to both embed and remove false memories. Per the paper: "The present study therefore not only replicates and extends previous demonstrations of false memories but, crucially, documents their reversibility after the fact: Employing two ecologically valid strategies, we show that rich but false autobiographical memories can mostly be undone. Importantly, reversal was specific to false memories (i.e., did not occur for true memories)."
What does this have to do with Deepfakes? It’s simple: if we’re so easily manipulated through tidbits of exposure to tiny little ads in our Facebook feed, imagine what could happen if advertisers started hijacking the personas and visages of people we trust? If you can convince someone that the people they respect and care about believe they’ve done something wrong, it’s easier for them to accept it as a fact. How many law enforcement agencies in the world currently have an explicit policy against using manipulated media in the solicitation of a confession? Our guess would be: close to zero. With Deepfakes and enough time, you could convince someone of just about anything as long as you can figure out a way to get them to watch your videos.
https://thenextweb.com/neural/2021/03/23/how-deepfakes-could-help-implant-false-memories-in-our-minds/
What is being done to combat such convincing forms of misinformation?
Study warns deepfakes can fool facial recognition. In a paper published on the preprint server Arxiv.org, researchers at Sungkyunkwan University in Suwon, South Korea demonstrate that APIs from Microsoft and Amazon can be fooled with commonly used deepfake-generating methods. In one case, one of the APIs — Microsoft’s Azure Cognitive Services — was fooled by up to 78% of the deepfakes the coauthors fed it.
Researchers showed detectors can be defeated by inserting inputs called adversarial examples into every video frame. The adversarial examples are slightly manipulated inputs that cause artificial intelligence systems such as machine learning models to make a mistake. In addition, the team showed that the attack still works after videos are compressed.
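The adversarial-example trick the researchers describe can be shown on a toy "detector": a two-feature logistic model whose fake-score is pushed down by one fast-gradient-sign (FGSM-style) step. The weights, input, and step size are all invented for the sketch; real attacks perturb every video frame against a deep network.

```python
import math

# Toy "deepfake detector": a logistic model over two invented features.
W = [2.0, -1.5]
B = 0.1

def fake_score(x):
    z = sum(w * xi for w, xi in zip(W, x)) + B
    return 1 / (1 + math.exp(-z))  # model's probability that x is a deepfake

def fgsm_step(x, eps=0.5):
    """One fast-gradient-sign step that lowers the fake score: move each
    feature against the sign of d(score)/d(x_i), which matches sign(w_i)."""
    return [xi - eps * (1.0 if w > 0 else -1.0) for xi, w in zip(x, W)]

x = [1.0, -0.5]     # a frame the detector confidently flags as fake
adv = fgsm_step(x)  # slightly perturbed version of the same frame

print(round(fake_score(x), 2), "->", round(fake_score(adv), 2))  # score drops
```

The same idea scales up: against a neural detector the gradient comes from backpropagation, and the per-pixel perturbation can be small enough to survive video compression, as the study notes.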
jan21
New research finds that misinformation consumed in a video format is no more effective than misinformation in textual headlines or audio recordings. However, the persuasiveness of deepfakes is equal and comparable to these other media formats like text and audio.
Seeing is not necessarily believing these days. Based on these findings, deepfakes do not facilitate the dissemination of misinformation more than false texts or audio content. However, like all misinformation, deepfakes are dangerous to democracy and media trust as a whole. The best way to combat misinformation and deepfakes is through education. Informed digital citizens are the best defense. https://digitalcontentnext.org/blog/2021/01/25/how-powerful-are-deepfakes-in-spreading-misinformation/
nov20
Facially manipulated images and videos or DeepFakes can be used maliciously to fuel misinformation or defame individuals. Therefore, detecting DeepFakes is crucial to increase the credibility of social media platforms and other media sharing web sites. (do texto Adversarial Threats to DeepFake Detection: A Practical Perspective)
nov20
There’s no question that the “hoax” ad, if aired within 30 days of an election, could plausibly satisfy all the elements of Texas’s statute. That’s a serious problem for free political expression. LINK + video
nov20
“There are now businesses that sell fake people. On the website Generated.Photos, you can buy a “unique, worry-free” fake person for $2.99, or 1,000 people for $1,000. If you just need a couple of fake people — for characters in a video game, or to make your company website appear more diverse — you can get their photos for free on ThisPersonDoesNotExist.com. Adjust their likeness as needed; make them old or young or the ethnicity of your choosing. If you want your fake person animated, a company called Rosebud.AI can do that and can even make them talk.” LINK
+ The New York Times this week did something dangerous for its reputation as the nation’s paper of record. Its staff played with a deepfake algorithm, and posted online hundreds of photorealistic images of non-existent people. For those who fear democracy being subverted by the media, the article will only confirm their conspiracy theories. The original biometric — a face — can be created in as much uniqueness and diversity as nature can, and with much less effort. LINK
+ A Nanyang Technological University, Singapore (NTU Singapore) survey of 1,231 Singaporeans has found that deepfakes are easily fooling people into thinking fake news is actually real. What's more worrying is that these AI-powered tools are also deceiving those who claim to be aware of deepfakes in the first place. https://sea.mashable.com/tech/13337/think-you-can-spot-a-deepfake-survey-proves-that-even-the-best-get-fooled
+ One in three who are aware of deepfakes say they have inadvertently shared them on social media. https://www.sciencedaily.com/releases/2020/11/201124092134.htm
Beating deepfakes requires a shift in strategy
Instead of removing these deepfake media upon discovery (a move that will only promote the narrative being pushed) authorities and online platforms must call out the misuse for what it is, thereby undermining the actors. Ultimately, this will contribute to the most pressing concern in the online arena, namely countering the toxic narratives that are breeding extremism and militant violence.
“In Event of Moon Disaster,” produced by the MIT Center for Advanced Virtuality, combines edited archival NASA footage and an artificial intelligence-generated synthetic video of a Nixon speech, along with materials to demystify deepfakes. After our video was released, it reached nearly a million people within weeks. It was circulated on social platforms and community sites, demonstrating the potent combination of synthetic media’s capacity to fool the eyes and social media’s capacity to reach eyeballs. The numbers matched up: In an online quiz, 49 percent of people who visited our site said they incorrectly believed Nixon’s synthetically altered face was real and 65 percent thought his voice was real. https://www.bostonglobe.com/2020/10/12/opinion/july-we-released-deepfake-mit-within-weeks-it-reached-nearly-million-people/
Expert says detecting deepfakes almost impossible. https://www.axios.com/deepfakes-technology-misinformation-problem-71bb7f2b-5dc2-4fbd-9b56-01ad430c1a4e.html
Popular deepfake apps are making it easier than ever to make AI-powered manipulated videos — spawning new memes, and an increased potential for abuse. https://www.businessinsider.com/ai-deepfake-apps-memes-misinformation-2020-9
(it gets ever harder to fight deepfakes as they become commonplace...)
Windheim is part of a new group of online creators who are toying with deepfakes as the technology grows increasingly accessible and seeps into internet culture. The phenomenon is not surprising; media manipulation tools have often gained traction through play and parody. But it also raises fresh concerns about its potential for abuse. “There’s a fine line between using deepfakes for entertainment and memes, and using them for harm,” Windheim says. “In this tutorial, I’m saying, ‘This is how you make this particular deepfake.’ But the scary thing about the script is it can just be applied to make any type of deepfake you want.”
DeFaking Deepfakes: Understanding Journalists' Needs for Deepfake Detection https://www.usenix.org/conference/soups2020/presentation/sohrawardi
A pair of developments are being reported in efforts to thwart deepfake video and audio scams. Unfortunately, in the case of digitally mimicked voice attacks, the advice is old school. An open-access paper published by SPIE, an international professional association of optics and photonics, reports that a new algorithm has scored a precision rate of 99.62 percent in detecting deepfake video, and was accurate 98.21 percent of the time. https://www.biometricupdate.com/202007/deepfakes-some-progress-in-video-detection-but-its-back-to-the-basics-for-faked-audio
Deepfakes and the New AI-Generated Fake Media Creation-Detection Arms Race. Manipulated videos are getting more sophisticated all the time—but so are the techniques that can identify them
Facebook has revealed the winners of a competition for software that can detect deepfakes, doctored videos created using artificial intelligence. And the results show that researchers are still mostly groping in the dark when it comes to figuring out how to automatically identify them before they influence the outcome of an election or spark ethnic violence. The best algorithm in Facebook’s contest could accurately determine if a video was real or a deepfake just 65% of the time. https://fortune.com/2020/06/12/deepfake-detection-contest-facebook/ + https://ai.facebook.com/blog/deepfake-detection-challenge-results-an-open-initiative-to-advance-ai/
apr20
In a paper published this week on the preprint server Arxiv.org, researchers from Google and the University of California at Berkeley demonstrate that even the best forensic classifiers — AI systems trained to distinguish between real and synthetic content — are susceptible to adversarial attacks, or attacks leveraging inputs designed to cause mistakes in models. Their work follows that of a team of researchers at the University of California at San Diego, who recently demonstrated that it’s possible to bypass fake video detectors by adversarially modifying — specifically, by injecting information into each frame — videos synthesized using existing AI generation methods.
https://venturebeat.com/2020/04/08/researchers-fool-deepfake-detectors-into-classifying-fake-images-as-real/
mar20
Who's Responsible For Combatting Deepfakes In The 2020 Election? Lawmakers and tech leaders have a serious puzzle to solve. While freedom of speech is crucial, the law doesn’t tolerate hate speech, threatening speech or slander. When people spread lies and fake depictions of political figures via social media platforms with the intention of misinforming the public, whose job is it to police this material, and can it be policed? https://www.forbes.com/sites/forbestechcouncil/2020/03/04/whos-responsible-for-combatting-deepfakes-in-the-2020-election/#6b72ba4e1c05