Meta, parent company of Instagram and Facebook, will require political advertisers around the world to disclose any use of artificial intelligence in their ads, the company said Wednesday, as part of a broader move to limit so-called "deepfakes" and other digitally altered misleading content.
The rule is set to take effect next year, ahead of the 2024 US election and other elections worldwide.
As AI photo editing apps become more accessible and pervasive, software and hardware makers are building tools to help consumers verify the authenticity of an image starting from the moment of capture.
Driving the news: Leica announced Wednesday that its new M 11-P camera will be the first with the ability to apply Content Credentials from the moment an image is captured.
A team at the National Institute of Informatics has developed a method for restoring deepfakes to their original content.
Computer scientists at the University of Waterloo figured out how to successfully fool voice authentication systems 99% of the time using deepfake voice creation software.
Andre Kassis, a Computer Security and Privacy PhD candidate at Waterloo, who is also the lead author of this research study, explains how voice authentication works:
Tencent Cloud, the cloud services arm of China-based technology platform Tencent, has launched a new digital human production platform that will let users create deepfakes of anyone, Cointelegraph reported. The deepfakes are expected to be created from a three-minute reference video and about 100 sentences of voice recordings.
Sources revealed that Tencent Cloud's deepfake generator will use its own artificial intelligence (AI) methods to recreate videos of people. Deepfake videos have already been used in fraud, impersonating prominent figures to mislead investors, Cointelegraph highlighted.
https://www.financialexpress.com/business/blockchain-tencent-cloud-launches-its-deepfake-creation-tool-3070184/
jan23
Two years ago, Microsoft's chief scientific officer Eric Horvitz, the co-creator of the spam email filter, began trying to solve this problem. "Within five or ten years, if we don't have this technology, most of what people will be seeing, or quite a lot of it, will be synthetic. We won't be able to tell the difference."
"Is there a way out?" Horvitz wondered.
Eventually, Microsoft and Adobe joined forces and designed a new feature called Content Credentials, which they hope will someday appear on every authentic photo and video.
Here's how it works:
Imagine you're scrolling through your social feeds. Someone sends you a picture of snow-covered pyramids, with the claim that scientists found them in Antarctica – far from Egypt! A Content Credentials icon, published with the photo, will reveal its history when clicked on.
"You can see who took it, when they took it, and where they took it, and the edits that were made," said Rao. With no verification icon, the user could conclude, "I think this person may be trying to fool me!"
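The real Content Credentials system binds a signed manifest of capture and edit history to the asset itself; the C2PA specification defines the exact format. The snippet below is only a toy sketch of that bind-and-verify idea, substituting an HMAC for the certificate-based signatures the real system uses; all names here are hypothetical:

```python
import hashlib
import hmac
import json

# Hypothetical signing key; real Content Credentials use public-key
# certificates issued to the capture device or editing tool.
SIGNING_KEY = b"demo-key"

def seal(image_bytes: bytes, history: list) -> dict:
    """Build a toy provenance manifest binding edit history to pixel data."""
    payload = {
        "content_hash": hashlib.sha256(image_bytes).hexdigest(),
        "history": history,  # e.g. ["captured 2023-01-05", "cropped"]
    }
    blob = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    return payload

def verify(image_bytes: bytes, manifest: dict) -> bool:
    """Check the signature AND that the pixels still match the manifest."""
    claimed = dict(manifest)
    sig = claimed.pop("signature")
    blob = json.dumps(claimed, sort_keys=True).encode()
    ok_sig = hmac.compare_digest(
        sig, hmac.new(SIGNING_KEY, blob, hashlib.sha256).hexdigest()
    )
    ok_pixels = claimed["content_hash"] == hashlib.sha256(image_bytes).hexdigest()
    return ok_sig and ok_pixels
```

Editing either the pixels or the claimed history breaks verification, which is the property the clicked icon relies on.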
https://www.cbsnews.com/news/creating-a-lie-detector-for-deepfakes-artificial-intelligence/
Intel claims it has developed an AI model that can detect in real time whether a video is using deepfake technology by looking for subtle changes in color that would be evident if the subject were a live human being.
The chipmaking giant claims FakeCatcher can return results in milliseconds and has a 96 percent accuracy rate.
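Intel has not released FakeCatcher's code, but the underlying idea, that blood flow imparts a faint periodic color change to live skin, can be sketched as a remote-photoplethysmography toy in NumPy. The function name, threshold, and band limits below are illustrative assumptions, not Intel's:

```python
import numpy as np

def has_pulse_signal(frames: np.ndarray, fps: float = 30.0) -> bool:
    """frames: (T, H, W, 3) video of a face region.
    Look for a periodic component in the green channel consistent with
    a human pulse (roughly 42-180 bpm, i.e. 0.7-3 Hz)."""
    green = frames[..., 1].mean(axis=(1, 2))   # mean green value per frame
    green = green - green.mean()               # remove the DC component
    spectrum = np.abs(np.fft.rfft(green))
    freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)
    # A live face should concentrate spectral energy in the pulse band.
    return spectrum[band].max() > 3.0 * spectrum[~band][1:].mean()
```

A synthesized face that lacks this subtle periodic signal would fail the check; a real system would of course need skin-region segmentation and far more robust statistics.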
https://www.theregister.com/2022/11/15/intel_fakecatcher/
sep22
Audio deepfakes potentially pose an even greater threat, because people often communicate verbally without video – for example, via phone calls, radio and voice recordings. These voice-only communications greatly expand the possibilities for attackers to use deepfakes.
To detect audio deepfakes, we and our research colleagues at the University of Florida have developed a technique that measures the acoustic and fluid dynamic differences between voice samples created organically by human speakers and those generated synthetically by computers.
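The researchers' approach estimates whether a voice sample could plausibly have been produced by a human vocal tract. As a loose illustration of that idea only, the sketch below finds the strongest low-frequency resonance of an audio frame and checks it against a physiologically plausible first-formant range; the function names and the 200-1000 Hz band are simplifying assumptions, not the authors' method:

```python
import numpy as np

def first_resonance_hz(frame: np.ndarray, sr: int = 16000) -> float:
    """Return the frequency of the strongest spectral peak below 2 kHz,
    a crude stand-in for the first formant (F1)."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    low = freqs < 2000
    return float(freqs[low][np.argmax(spectrum[low])])

def plausible_f1(f1_hz: float) -> bool:
    """A human vocal tract keeps F1 roughly between 200 and 1000 Hz;
    frames far outside that range are suspicious."""
    return 200.0 <= f1_hz <= 1000.0
```

The actual paper fits full vocal-tract cross-sections to the audio; this toy only shows why an anatomically impossible resonance is a useful tell.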
https://theconversation.com/deepfake-audio-has-a-tell-researchers-use-fluid-dynamics-to-spot-artificial-imposter-voices-189104
aug22
A cybersecurity expert is puzzled by recent actions taken by a group of researchers working at the Samsung AI Centre in Moscow, saying their work might inevitably end up doing more harm than good.
In a research paper, they wrote that they have invented something called Mega Portraits, which is short for megapixel portraits, based on a concept called neural head avatars, which, they said, “offer a new fascinating way of creating virtual head models. They bypass the complexity of realistic physics-based modeling of human avatars by learning the shape and appearance directly from the videos of talking people.”
Lou Steinberg, the founder of CTM Insights, a New York City-based cybersecurity research lab and incubator, said intentionally edited images, also known as deepfakes, are a growing and troubling problem: they can be used, for example, to edit a picture of someone to cause reputational or brand damage, often with AI tools that are becoming ever more capable.
Researchers at Samsung Labs revealed last week a new artificial intelligence technology that promises to raise the bar for deepfake creation. The method is capable of generating "realistic high-definition images" of different personalities using a single source photo.
Named "MegaPortraits", the project stands out for its ability to create neural head avatars even when the person in the original photo has physical characteristics different from those of the individual whose image supplies the animated movements. This is a major challenge in applying the technology.
https://www.tecmundo.com.br/internet/242413-tecnologia-samsung-cria-deepfakes-usando-qualquer-foto.htm
The proliferation of digitally altered videos of people, such as deepfakes, has seen scientists at the DSO National Laboratories devise tools to automatically detect them when they are used.
Since last year, the team has been employing artificial intelligence (AI) technology to pick up signs that may not be perceptible by humans.
These include poorly rendered fine details such as hair or unnatural lip movements.
https://www.straitstimes.com/singapore/dso-national-laboratories-mark-golden-jubilee-with-defence-tech-showcase
Google has quietly banned deepfake projects on its Colaboratory (Colab) service, putting an end to the large-scale utilization of the platform’s resources for this purpose.
Colab is an online computing resource that allows researchers to run Python code directly through the browser while using free computing resources, including GPUs, to power their projects.
Based on archive.org historical data, the ban took place earlier this month, with Google Research quietly adding deepfakes to the list of disallowed projects.
As noted on Discord by DFL developer ‘chervonij,’ those who attempt to train deepfakes on the Colab platform right now are served with the following error:
“You may be executing code that is disallowed, and this may restrict your ability to use Colab in the future. Please note the prohibited actions specified in our FAQ.”
The impact of this new restriction is expected to be far-reaching in the deepfake world, as many users utilize pre-trained models with Colab to jump-start their high-resolution projects.
Colab was making this process very easy even for those with no coding background, which is why so many tutorials suggest Google’s “free resource” platform to launch deepfake projects.
https://www.bleepingcomputer.com/news/google/google-quietly-bans-deepfake-training-projects-on-colab/
Google has begun banning projects that involve training machine-learning systems to create deepfakes on the Colab platform, BleepingComputer reported this Monday (30th). The changes to the service's usage policy were reportedly implemented in early May.
A free cloud service hosted by the Mountain View company, Google Colaboratory lets researchers run the Python language directly from the browser, drawing on a range of computing resources, including GPUs. The aim is to encourage the development of projects using artificial intelligence (AI).
https://www.tecmundo.com.br/internet/239512-google-proibe-projetos-treinamento-deepfake.htm
Nine of the Top 10 Liveness Detection Systems are Vulnerable to Deepfakes: Report
This software differentiates authentic images from deepfakes
With its technology, Truepic—a winner of Fast Company’s 2022 World Changing Ideas Awards—verifies the veracity of images, restoring trust in an age of disinformation.
New method detects deepfake videos with up to 99% accuracy.
Two-pronged technique detects manipulated facial expressions and identity swaps. Computer scientists at UC Riverside can detect manipulated facial expressions in deepfake videos with higher accuracy than current state-of-the-art methods. The method also works as well as current methods in cases where the facial identity, but not the expression, has been swapped, leading to a generalized approach to detect any kind of facial manipulation. The achievement brings researchers a step closer to developing automated tools for detecting manipulated videos that contain propaganda or misinformation.
Being aware of potentially grave consequences, an alliance spanning the software, chips, cameras, and social media giants aims to create standards to ensure the authenticity of images and videos shared online. Known as the Coalition for Content Provenance and Authenticity (C2PA), the group consists of Photoshop developer Adobe, Microsoft, Intel and Twitter in cooperation with media outlet BBC and SoftBank-owned chip designer Arm, with the ultimate aim to fight deepfakes using blockchain technology.
Besides those, Japanese camera makers Sony and Nikon are a part of the coalition, to develop an open standard intended to work with any software showing evidence of tampering, as per Nikkei. Adobe’s content authenticity initiative’s senior director Andy Parsons even told Nikkei that we’ll “see many of these [features] emerging in the market this year. And I think in the next two years, we will see many sorts of end-to-end ecosystems.”
https://techhq.com/2022/04/sony-adobe-intel-among-tech-firms-taking-on-deepfakes-with-blockchain-technology/
Does Audio Deepfake Detection Generalize?
mar22
In late January 2022, Estonia gained its sixth tech unicorn after identity verification startup Veriff raised $100 million in Series C funding.
Veriff is an AI-assisted identity verification and know your customer (KYC) platform used by companies around the world to ensure their customers are who they claim to be.
Most of the company's biggest customers are in global fintech, where it faces competition from the likes of authentication and verification services Jumio and Onfido. Deepfake technology poses significant challenges for consumer authentication and verification. One fast-growing startup hopes that AI can also be a solution.
https://www.zdnet.com/article/in-a-world-of-deepfakes-this-billion-dollar-startup-wants-you-to-trust-ai-powered-id-checks/
feb22
Deep learning is an effective technique used in various fields, including natural language processing, computer vision, image processing and machine vision. Deepfakes use deep learning to synthesize and manipulate images of a person such that human beings cannot distinguish the fake from the real. Deepfakes are generated using generative adversarial networks (GANs) and may threaten the public, so detecting deepfake image content plays a vital role, and much research has been done on detecting deepfakes in manipulated images. The main issues with existing techniques are inaccuracy and high processing time. In this work we implement deepfake face-image detection using a deep learning technique: Fisherface with Local Binary Pattern Histogram (FF-LBPH). The Fisherface algorithm recognizes the face by reducing the dimensionality of the face space using LBPH; a deep belief network (DBN) with restricted Boltzmann machines (RBMs) is then applied as the deepfake detection classifier.
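As an illustration of the LBPH building block the abstract relies on, here is a minimal NumPy implementation of a 256-bin Local Binary Pattern histogram. The paper's full FF-LBPH pipeline, Fisherface projection and DBN/RBM classifier included, is not reproduced here:

```python
import numpy as np

def lbp_histogram(gray: np.ndarray) -> np.ndarray:
    """Compute a normalised 256-bin Local Binary Pattern histogram of a
    grayscale image: each pixel is encoded by thresholding its eight
    neighbours against the centre value, one bit per neighbour."""
    center = gray[1:-1, 1:-1]
    # Neighbour offsets, clockwise from top-left; each bit of the code
    # records whether that neighbour is >= the centre pixel.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(center, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = gray[1 + dy:gray.shape[0] - 1 + dy,
                         1 + dx:gray.shape[1] - 1 + dx]
        codes |= (neighbour >= center).astype(np.uint8) << bit
    hist = np.bincount(codes.ravel(), minlength=256)
    return hist / hist.sum()  # normalise so images of any size compare
```

Texture histograms like this are the low-level features a downstream classifier compares between real and synthesized faces.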
ADOBE
An Adobe-led coalition of tech companies set up to combat deepfakes has released the first version of its technical specification for digital provenance. The Coalition for Content Provenance and Authenticity (C2PA), which also counts Microsoft, Arm, Intel, Truepic and the BBC among its members, says the standard will allow content creators and editors to create media that can't secretly be tampered with. It allows them to selectively disclose information about who has created or changed digital content and how it has been altered. Platforms can define what information is associated with each type of asset - for example, images, videos, audio, or documents - along with how that information is presented and stored, and how evidence of tampering can be identified.
jan22
Researchers from the Ruhr-University Bochum in Germany have released a new report with suggestions on how to tackle voice deepfakes through the use of a novel dataset.
Deepfake research to date has focused mainly on the image domain; the researchers claimed that studies exploring generated audio signals have so far been neglected. To this end, Joel Frank and Lea Schönherr examined three different aspects of the audio deepfake challenge to "narrow this gap."
The first consists of an introduction to common signal processing techniques used for analyzing audio signals, including how to read spectrograms for audio signals, and Text-To-Speech (TTS) models. https://www.biometricupdate.com/202201/a-new-idea-to-fight-voice-deepfakes-from-ruhr-university-bochum-researchers
One promising approach involves tracking a video’s provenance, “a record of everything that happened from the point that the light hit the camera to when it shows up on your display,” explained James Tompkin, a visual computing researcher at Brown.
But problems persist. “You need to secure all the parts along the chain to maintain provenance, and you also need buy-in,” Tompkin said. “We’re already in a situation where this isn’t the standard, or even required, on any media distribution system.”
And beyond simply ignoring provenance standards, wily adversaries could manipulate the provenance systems, which are themselves vulnerable to cyberattacks. “If you can break the security, you can fake the provenance,” Tompkin said. “And there’s never been a security system in the world that’s never been broken into at some point.”
Given these issues, a single silver bullet for deepfakes appears unlikely. Instead, each strategy at our disposal must be just one of a “toolbelt of techniques we can apply,” Tompkin said. https://brownpoliticalreview.org/2021/11/hunters-laptop-deepfakes-and-the-arbitration-of-truth/
Germany’s federal government is expanding resources for a multi-year deepfake detection project that it is funding. Executives with BioID, a German biometric anti-spoofing vendor, say the firm has joined a consortium of organizations seeking effective methods of unmasking fraudulent, AI-based images, video and audio.
HUNTINGTON BEACH, CA, UNITED STATES, October 21, 2021 /EINPresswire.com/ -- ImageKeeper® LLC announced today the release of its free Consumer app, "ProveIt-Now!™". The app allows the user, at the press of a button, to capture photos, video, or audio on their smartphone or tablet that cannot be altered without detection. With the proliferation of deepfakes, manipulated video, and edited photos, the threat of being victimized is real and growing according to the FBI's recent Private Industry Notification (PIN) 210310-001, dated 20 March 2021. https://www.einnews.com/pr_news/554437136/free-proveit-now-app-protects-against-deepfakes-media-manipulation-fraud
Concept: Estonian startup Sentinel has released a solution to detect fake media content on the web. Its platform helps democratic governments, defense agencies, and enterprises stop the risk of AI-generated deepfakes with its protection technology. Nature of Disruption: Sentinel’s detection model is based on the Defense in Depth (DiD) approach. This model utilizes a multi-layer defense consisting of a vast database of deepfakes and neural network classifiers to detect deepfakes. Users need to upload digital media through the website or API. Sentinel’s system then analyzes the AI-forgery automatically to determine if the content, whether audio, video, or image is a deepfake or not. Finally, it shows a visualization of the manipulations done. https://www.medicaldevice-network.com/research-reports/sentinel-offers-ai-based-solution-to-detect-deepfakes/
Can Deepfakes Fool Facial Recognition? A New Study Says Yes! Researchers at Sungkyunkwan University in Suwon, South Korea, tricked Amazon and Microsoft APIs with deepfakes; here's why that's worrying. The team tested the quality of current deepfake technology, running open-source and commonly used deepfake video-generation software against both Amazon's and Microsoft's APIs to see how well they hold up.
The researchers used the faces of Hollywood celebrities. To create solid deepfakes, the software needs many high-quality images of the same person from different angles, which are much easier to acquire for celebrities than for ordinary people.
The researchers also chose Microsoft's and Amazon's APIs as the benchmarks for their study because both companies offer celebrity face-recognition services. They used publicly available datasets and created just over 8,000 deepfakes. From each deepfake video, they extracted multiple faceshots and submitted them to the APIs in question. https://www.makeuseof.com/deepfake-fool-facial-recognition/
How Blockchain Can Help Combat Disinformation
It’s not a cure-all, but it does have the potential to address many of the risks and root causes.
Blockchain has enormous potential: Blockchain systems use a decentralized, immutable ledger to record information in a way that’s constantly verified and re-verified by every party that uses it, making it nearly impossible to alter information after it’s been created. One of the most well-known applications of blockchain is to manage the transfer of cryptocurrencies like Bitcoin. But blockchain’s ability to provide decentralized validation and a clear chain of custody makes it potentially effective as a tool to track not just financial resources, but all sorts of forms of content. https://hbr.org/2021/07/how-blockchain-can-help-combat-disinformation
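The chain-of-custody property described above can be sketched in a few lines: each block commits to the hash of its predecessor, so rewriting any earlier record invalidates everything after it. This is a toy in-memory ledger to show the mechanism, not a real blockchain client (no consensus, no decentralization):

```python
import hashlib
import json

def add_block(chain: list, record: dict) -> list:
    """Append a record to a toy content ledger; each block commits to the
    previous block's hash, so altering history invalidates the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return chain

def chain_valid(chain: list) -> bool:
    """Re-derive every hash and confirm each block points at its parent."""
    prev_hash = "0" * 64
    for block in chain:
        body = json.dumps({"record": block["record"], "prev": block["prev"]},
                          sort_keys=True)
        if block["prev"] != prev_hash:
            return False
        if hashlib.sha256(body.encode()).hexdigest() != block["hash"]:
            return False
        prev_hash = block["hash"]
    return True
```

In a real system the hashes are additionally replicated across many parties, which is what makes after-the-fact alteration "nearly impossible" rather than merely detectable.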
The drive to identify deepfakes in audiovisual media from genuine content has received a boost with a $700,000 international competition organised by AI Singapore, a national artificial intelligence (AI) programme under the National Research Foundation. The five-month-long Trusted Media Challenge aims to encourage AI enthusiasts and researchers around the world to design and test models and solutions that can detect modified audio and video, AI Singapore said in a statement on Thursday (July 15). By incentivising involvement of international contributors, and sourcing innovation ideas globally, the competition will also strengthen Singapore's position as a global AI hub, it added. https://www.straitstimes.com/tech/ai-singapore-launches-700k-competition-to-combat-deepfakes
Voice biometrics: the new defense against deepfakes
Visimo has partnered with Florida State University to help the U.S. Air Force create a technology for detecting and preventing deepfakes, which refer to synthetic or heavily altered media made to mislead the public. The company said Thursday it will develop the Aletheia software under phase one of USAF’s Small Business Technology Transfer program. The Department of Defense regards deepfake as an issue that threatens national security, specifically the military’s reliance on open-source intelligence. Aletheia, named after the Greek goddess of truth, discovery and disclosure, is envisioned to detect a wider range of deepfake types compared to existing systems. The future software would do this by analyzing digital fingerprints within fake media. The Aletheia project marks Visimo’s third award in the past year under USAF’s STTR program, which partners small businesses with academic institutions for technology development projects. https://blog.executivebiz.com/2021/07/visimo-florida-state-university-to-develop-deepfake-detector-for-usaf/
The University of Amsterdam (UvA) and the Netherlands Forensic Institute (NFI) started an investigation into the recognition of deepfakes, as well as hidden messages from criminals. With deepfake technology, it is possible to impersonate someone in pictures or videos to the point where the viewer does not realize they are looking at a fake. "It is almost impossible to distinguish real from deepfake videos with the naked eye", said Professor of Forensic Data, Zeno Geradts. Criminals are increasingly making use of deepfakes. Last month, Dutch, British and Baltic MPs had a conversation with a deepfake imitation of Russian opposition leader, Alexei Navalny. The politicians only realized the deception weeks after their talk occurred. https://nltimes.nl/2021/05/22/uva-nfi-launch-study-recognition-deepfakes
Now the U.S. Army is introducing a lightweight deepfake detection method to preempt the national security concerns that will arise from the technology. "Due to the progression of generative neural networks, AI-driven deepfake advances so rapidly that there is a scarcity of reliable techniques to detect and defend against deepfakes," explained C.-C. Jay Kuo, a professor of electrical and computer engineering at the University of Southern California. "There is an urgent need for an alternative paradigm that can understand the mechanism behind the startling performance of deepfakes and develop effective defense solutions with solid theoretical support." https://www.datanami.com/2021/05/03/u-s-army-employs-machine-learning-for-deepfake-detection/ + https://www.helpnetsecurity.com/2021/05/07/defakehop-deepfake-detection-method/
A patent application published by Sony on April 22, 2021, reveals that the company is attempting to develop AI that will detect if a video has been deepfaked or otherwise tampered with. The patent application categorizes deepfake technology as “interesting and entertaining but potentially sinister.” However, there are methods for delving into an image or video and determining whether it has been altered, because certain artifacts or irregularities get left behind. These artifacts might include lighting or texture irregularities in a faked image that are not present in the original. Some of these might be discernible to a trained eye; others are undetectable unless processed through a neural network or AI program.
In response to the rise in media manipulation, DARPA has taken on many technologically complex projects to identify manipulated media. Most notably, DARPA's Media Forensics program invested in developing a quantitative integrity score for the authenticity of images and videos. "We framed our approaches in the Media Forensics program … around three levels of integrity," Turek said, "Digital, physical and semantic integrity." https://ndsmcobserver.com/2021/03/lecture-explores-deepfakes-media-manipulation/
A new software programme that could make the internet a safer place has culminated in its developer Greg Tarr winning the 2021 BT Young Scientist & Technology Exhibition (BTYSTE).
The 17-year-old Leaving Certificate student at Bandon Grammar School in Co Cork used artificial intelligence to develop his system that detects “deepfake” videos, which have caused havoc on social media channels. It is quicker and more accurate than many of the state-of-the-art detection systems, the judges found.
There are two major categories of DeepFake detection tools:
- Pattern Analysis - observing and analyzing the behavior of people in the videos, learning their patterns, from hand gestures to pauses in speech, and comparing them to real-life patterns. This approach has the advantage of possibly working even if the video quality itself is essentially perfect.
- Quality of Video Analysis - analyzing the differences between deepfakes and real videos. Most deepfake videos are created by merging individually generated frames into videos. By extracting the essential data from the faces in individual frames of a video and then tracking it through sets of consecutive frames, one is able to detect inconsistencies in the flow of information from one frame to another. This approach can also be used for fake audio detection.
- As in most tech-heavy markets, end users are debating whether to work with an external vendor or to rely on internally developed capabilities and existing OSINT monitoring tools redirected to a dedicated team. The report contains a detailed list and short profiles of the companies that provide solutions for counter-deepfake and fake-news monitoring and detection. Many of these companies are startups, such as Cheq, Metafact, Cyabra, Falso Tech, Sensity, and others. In addition, large and mature corporations are also active in the market, some via M&A activity and others developing solutions internally; such players include Axon, Microsoft, Facebook, Twitter and others. LINK
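The second category above, quality-of-video analysis, tracks information through consecutive frames. A minimal sketch of that idea, assuming some per-frame face descriptor is already available (any embedding would do; everything here is illustrative), flags frames whose change from their neighbour is an outlier relative to the clip's typical motion:

```python
import numpy as np

def temporal_inconsistency(features: np.ndarray,
                           threshold: float = 3.0) -> np.ndarray:
    """features: (T, D) per-frame face descriptors.
    Flag transitions whose frame-to-frame change is an outlier relative
    to the clip's typical motion, a crude cue for frames that were
    generated independently and then merged into a video."""
    deltas = np.linalg.norm(np.diff(features, axis=0), axis=1)  # (T-1,)
    typical = np.median(deltas) + 1e-8
    return deltas > threshold * typical
```

Real detectors learn such temporal cues with recurrent or 3D-convolutional networks rather than a fixed median threshold, but the signal they exploit is the same.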
(joint solutions)
Jan21
https://www.freethink.com/articles/generative-adversarial-networks
Some hope the arms race could be ended by approaching the problem from the other direction: authenticating legitimate videos. The main player in this space is Amber, which has created a blockchain-based video authentication system. The Amber system generates hashes from a video based on encoded data, which are stored on the Ethereum blockchain with associated timestamps. Comparing these hashes to those generated from another version of the video (such as a short clip from hours-long police bodycam footage) confirms whether it is identical or if it has been manipulated.
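Amber's exact pipeline is not public here, but the core mechanism, hashing fixed-size segments so that an excerpt can be matched against the original and a manipulated portion localized, is easy to sketch. Segment size and function names below are illustrative assumptions:

```python
import hashlib

def segment_hashes(video_bytes: bytes, segment_size: int = 1 << 16) -> list:
    """Hash a video in fixed-size segments; publishing these hashes (e.g.
    on a blockchain with a timestamp) lets anyone later check whether a
    copy or excerpt matches the original footage."""
    return [
        hashlib.sha256(video_bytes[i:i + segment_size]).hexdigest()
        for i in range(0, len(video_bytes), segment_size)
    ]

def find_tampered_segments(original: list, suspect: list) -> list:
    """Return indices where the suspect copy's segment hashes diverge."""
    return [i for i, (a, b) in enumerate(zip(original, suspect)) if a != b]
```

Per-segment hashing is what makes the bodycam use case work: an untouched segment of hours-long footage still matches, while an edited segment stands out by index.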
Researchers say they have found an efficient way for an AI algorithm like those used in biometrics to judge how confident it is with its decisions. The technique, which reportedly does not impact a model’s performance, could also be used to spot deepfakes.
The software can quickly report its decision, but also the confidence it has in the underlying input data and in the decision itself.
Armed with this information, users can decide in real time if they need to rework their model to get better quality output, according to researchers from the Massachusetts Institute of Technology and Harvard University. https://www.biometricupdate.com/202011/practical-way-to-have-ai-flag-its-own-uncertainty-reported-could-be-used-to-spot-deepfakes
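The MIT/Harvard technique itself is not reproduced here; a common, simpler way to make a model report confidence alongside its decision is to aggregate several stochastic predictions (an ensemble, or repeated dropout passes) and use their spread, as in this toy sketch:

```python
import numpy as np

def predict_with_confidence(models, x: np.ndarray):
    """Average the predictions of an ensemble and report their spread:
    high variance across members signals low confidence, a cue that the
    input (e.g. a suspected deepfake) is far from the training data."""
    preds = np.stack([m(x) for m in models])   # (n_models, ...)
    return preds.mean(axis=0), preds.std(axis=0)
```

A user watching the reported spread in real time can decide whether to trust the decision or rework the model, which is the workflow the researchers describe.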
"It’s a good thing that the 2020 election wasn’t swarmed by deepfakes, because attempts to automatically detect AI-generated video haven’t yet been very successful. When Facebook challenged researchers to create code to spot deepfakes, the winner, announced in June, missed more than a third of deepfakes in its test collection. When WIRED tested a prototype deepfake detector from the AI Foundation, results were mixed. The foundation’s Reality Defender service presents an easy-to-use interface for deepfake detection algorithms developed by Microsoft and leading AI labs. When asked to scrutinize fake Kim Jong-un, deepfake detectors from Microsoft and Technical University of Munich saw nothing awry, and Reality Defender reported “unlikely manipulated video.” The same happened with a fake video of President Trump created by graphics researcher and deepfake maker Hao Li. A deepfake that pasted the face of Elon Musk onto a journalist was flagged—but so was the unmanipulated original video. Hayden, of the AI Foundation, says it is adding new detection algorithms and experimenting with different ways to display their output. One recent addition, from the University of California, Berkeley, saw through the fake Kim ad. Another being tested, from a company founded by professors at the University of California Santa Barbara, is good at spotting signs of manipulation in the background of clips, she says, and is being tested on videos purporting to show suspicious activity during vote counting last week. The foundation is also thinking about how detectors can be kept up-to-date as deepfakes evolve, perhaps by maintaining ever-growing collections of AI falsity to retrain the fake-flagging algorithms."
https://www.wired.com/story/what-happened-deepfake-threat-election/
'KAICATCH', a new type of software developed by researchers from the Korea Advanced Institute of Science and Technology, can determine whether an image has been tampered with.
The software was recently put into practical use for the first time in South Korea, and only the second time globally. It can find every minor change, such as a copy, paste or delete, by analyzing the pixels; a green overlay in its output indicates that a section has been edited. "The previous software could not detect random images. Out of 100 pictures, the accuracy was 5 to 10 percent. However, this new software has 70 to 80 percent accuracy." http://www.arirang.co.kr/News/News_View.asp?nseq=267974
Top AI-Based Tools & Techniques For Deepfake Detection
Truepic has developed a technology embedded in a smartphone. It generates a digital signature and cryptographically sealed provenance information for the images and videos captured with the phone, authenticating details such as the 3D depth map, date and time, geolocation, and the times the image or video was captured and edited, to determine whether the image has been manipulated. The company has partnered with Qualcomm, embedding its Truepic Foresight technology in the Snapdragon 865 5G Mobile Platform, where it takes advantage of the chipset's underlying hardware security. https://tech.hindustantimes.com/tech/news/your-smartphone-would-soon-be-able-to-identify-deepfakes-even-before-they-are-made-71603015963763.html + https://www.webpronews.com/qualcomm-fighting-misinformation-with-photo-validation-tool/
Researchers at Stanford University and UC Berkeley have devised a programme that uses artificial intelligence (AI) to detect deepfake videos. The programme is said to spot 80% of fakes by recognising minute mismatches between the sounds people make and the shapes of their mouths, according to the study titled 'Detecting Deep-Fake Videos from Phoneme-Viseme Mismatches'. https://www.thehindu.com/sci-tech/technology/artificial-intelligence-helps-detect-deepfake-videos/article32885488.ece
The Idiap Research Institute has posted an announcement on their site looking for qualified candidates for a Ph.D. position researching ‘Deepfake detection and attribution.’ The project will be developed in Dr. Sebastien Marcel’s Biometric Security and Privacy lab at Idiap, and will focus on one class modeling, spatio-temporal learning, few-shot learning, and adversarial training. https://www.biometricupdate.com/202010/idiap-expanding-its-deepfake-detection-and-attribution-research-team
An investigative journalist receives a video from an anonymous whistleblower. It shows a candidate for president admitting to illegal activity. But is this video real? If so, it would be huge news – the scoop of a lifetime – and could completely turn around the upcoming elections. But the journalist runs the video through a specialized tool, which tells her that the video isn’t what it seems. In fact, it’s a “deepfake,” a video made using artificial intelligence with deep learning. Journalists all over the world could soon be using a tool like this. In a few years, a tool like this could even be used by everyone to root out fake content in their social media feeds. As researchers who have been studying deepfake detection and developing a tool for journalists, we see a future for these tools. They won’t solve all our problems, though, and they will be just one part of the arsenal in the broader fight against disinformation. https://theconversation.com/in-a-battle-of-ai-versus-ai-researchers-are-preparing-for-the-coming-wave-of-deepfake-propaganda-146536
Still, some recent research promises to give the upper hand to the fake-detecting cats, at least for the time being. This work, done by two researchers at Binghamton University (Umur Aybars Ciftci and Lijun Yin) and one at Intel (Ilke Demir), was published in IEEE Transactions on Pattern Analysis and Machine Intelligence this past July. In an article titled "FakeCatcher: Detection of Synthetic Portrait Videos using Biological Signals," the authors describe software they created that takes advantage of the fact that real videos of people contain physiological signals that are not visible to the eye. https://spectrum.ieee.org/tech-talk/computing/software/blook-circulation-can-be-used-to-detect-deep-fakes
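The general idea behind such biological-signal checks (remote photoplethysmography) is that real skin shows a faint periodic colour change driven by the pulse. A minimal sketch, assuming a per-frame mean green-channel trace from a face region — this is not FakeCatcher's actual pipeline, just the underlying intuition:

```python
import numpy as np

# Sketch of a pulse check on a per-frame mean green-channel trace.
# Real faces should show a dominant frequency in the heart-rate band;
# synthetic faces often lack a coherent pulse signal.

def dominant_pulse_hz(green_means, fps=30.0):
    """Dominant frequency (Hz) of the trace, restricted to a plausible
    heart-rate band (0.7-4 Hz, i.e. roughly 42-240 bpm)."""
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()                 # remove DC offset
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)          # heart-rate band only
    return float(freqs[band][np.argmax(spectrum[band])])

# Synthetic "real face" trace: a 1.2 Hz pulse (72 bpm) plus camera noise.
rng = np.random.default_rng(0)
t = np.arange(300) / 30.0
trace = 0.5 * np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.standard_normal(300)
print(f"dominant frequency: {dominant_pulse_hz(trace):.2f} Hz")
```

A detector built on this idea would also check that the recovered signal is spatially consistent across different skin regions of the same face.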
How to use Microsoft Video Authenticator on deepfakes https://tecnoblog.net/365145/como-usar-o-microsoft-video-authenticator-em-deepfakes/
Estonia-based Sentinel, which is developing a detection platform for identifying synthesized media (aka deepfakes), has closed a $1.35 million seed round from some seasoned angel investors — including Jaan Tallinn (Skype), Taavet Hinrikus (TransferWise), Ragnar Sass & Martin Henk (Pipedrive) — and Baltics early-stage VC firm, United Angels VC. The challenge of building tools to detect deepfakes has been likened to an arms race — most recently by tech giant Microsoft, which earlier this month launched a detector tool in the hopes of helping pick up disinformation aimed at November’s U.S. election. “The fact that [deepfakes are] generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology,” it warned, before suggesting there’s still short-term value in trying to debunk malicious fakes with “advanced detection technologies.” https://techcrunch.com/2020/09/14/sentinel-loads-up-with-1-35m-in-the-deepfake-detection-arms-race/?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAACcMCMBeLG5vge68osBo4wTU_w8Q9lGR2-i4jJLzyUquwRkzHfOJkcWMAArUlpQTDnfeRm8_mF5PuerOnXqbf0XgLkeL3wOk98WF-Dh8VHujSfgRWVthbA1E1-rfMaoU7biJAZpYSRz6g88hv2yCTS38rmbnKsXspIqiPbLyVjC7
Banks and fintech groups are entering partnerships with tech firms to tackle the use of fraudulent 'deepfake' content in biometric ID systems. The financial institutions are teaming up with identification startups like Mitek and iProov, according to The Financial Times. https://www.itpro.co.uk/security/cyber-security/357015/banks-and-fintech-firms-using-tech-firms-to-fight-deepfake-fraud
System looks for heartbeats to identify deepfakes. LINK
Project Morpheus is basically a video version of the company’s Neural Filters, introduced in Photoshop last year. These filters use machine learning to adjust a subject’s appearance, tweaking things like their age, hair color, and facial expression (to change a look of surprise into one of anger, for example). Morpheus brings all those same adjustments to video content while adding a few new filters, like the ability to change facial hair and glasses. Think of it as a character creation screen for humans. https://www.theverge.com/2021/10/27/22748508/adobe-deepfake-tool-max-project-morpheus
Cyabra, a data visualization software company that uncovers disinformation and empowers brands, today unveiled its latest innovation in fighting the spread of disinformation. Protecting global brands, international media outlets and the public sector, Cyabra's solution uses advanced technology to identify fake profiles, manipulated images and deepfakes transmitted across the digital realm and to root out bad actors. Using machine learning infused with NLP, Cyabra identifies behavioral patterns typical of fake profiles such as bots, trolls and sockpuppets. This lens gives brands and media the resources to understand narratives, discover trends and gain deeper insight into what drives consumers, helping companies make smarter decisions. The solution also uncovers bad actors bent on influencing public opinion and election campaigns. https://www.businesswire.com/news/home/20200728005750/en/Cyabra-Launches-New-Era-Fight-Disinformation-Deepfakes
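Cyabra's actual models are proprietary, but one behavioral pattern commonly cited for bot detection — machine-regular posting intervals, versus the bursty timing of humans — can be sketched as a toy heuristic. The threshold and timestamps below are made-up illustrations:

```python
from statistics import mean, pstdev

# Toy behavioral heuristic in the spirit of bot/sockpuppet detection
# (purely illustrative). Automated accounts often post at clock-like
# intervals; human activity tends to be bursty and irregular.

def looks_automated(post_times, cv_threshold=0.15):
    """Flag an account whose inter-post intervals have a low coefficient
    of variation (stdev / mean), i.e. suspiciously regular timing."""
    intervals = [b - a for a, b in zip(post_times, post_times[1:])]
    if len(intervals) < 2:
        return False  # not enough evidence to judge
    cv = pstdev(intervals) / mean(intervals)
    return cv < cv_threshold

bot_like   = [0, 600, 1200, 1805, 2400, 3002]   # roughly every 10 minutes
human_like = [0, 45, 4000, 4090, 9000, 20000]   # bursty, irregular
print(looks_automated(bot_like), looks_automated(human_like))  # → True False
```

Production systems combine many such signals (timing, content, network structure) rather than relying on any single heuristic.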
Fijoy Vadakkumpadan, R&D manager at SAS, adds that in the wrong hands, however, deepfakes can help create fake news at an entirely new level. Nonetheless, he points out, if you know what to look for, there are telltale signs that indicate whether something is actually a deepfake. "Remember that most often, deepfakes are used in order to engender strong emotions within the reader or viewer, such as anger or fear. Analytical tools are able to search for the signs that indicate something is amiss, and as these indicators are uncovered, they can be presented to readers as they consume the news." https://www.itweb.co.za/content/WnpNgM2K3OjqVrGd
A pair of developments are being reported in efforts to thwart deepfake video and audio scams. Unfortunately, in the case of digitally mimicked voice attacks, the advice is old school. An open-access paper published by SPIE, an international professional association of optics and photonics, reports on a new algorithm that reportedly scored a precision rate of 99.62 percent in detecting deepfake video, and was accurate 98.21 percent of the time. https://www.biometricupdate.com/202007/deepfakes-some-progress-in-video-detection-but-its-back-to-the-basics-for-faked-audio
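The item quotes both a precision and an accuracy figure for the same detector; the two answer different questions. A quick sketch of the distinction, computed from a confusion matrix — the counts below are invented for illustration, not taken from the SPIE paper:

```python
# Precision vs. accuracy on a confusion matrix (hypothetical counts).

def precision(tp, fp):
    """Of everything flagged as fake, what fraction really was fake?"""
    return tp / (tp + fp)

def accuracy(tp, tn, fp, fn):
    """Of all videos, what fraction received the correct label?"""
    return (tp + tn) / (tp + tn + fp + fn)

tp, tn, fp, fn = 262, 720, 1, 17   # made-up counts for illustration
print(f"precision: {precision(tp, fp):.4f}")
print(f"accuracy:  {accuracy(tp, tn, fp, fn):.4f}")
```

A detector can thus have near-perfect precision (it rarely cries wolf) while its accuracy is pulled down by fakes it misses (false negatives).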
A potential fix is to use the blockchain. A blockchain is a distributed ledger that lets you store data online without the need for centralized servers, and distributed ledgers are resistant to many of the security threats that centralized data stores are vulnerable to. They are not yet well suited to storing large amounts of data, but they are ideal for storing hashes and digital signatures. For example, individuals could use the blockchain to digitally sign and confirm the authenticity of a video or audio file associated with them. The more people add their digital signatures to that video, the more likely it is to be considered a genuine record. This is not a perfect solution: it would require additional measures to weigh the credibility of the individuals who vouch for a document. https://www.analyticsinsight.net/best-ways-prevent-deepfakes/
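The key point above is that the chain stores only a digest of the media, not the media itself. A minimal sketch of that idea, with a plain dict standing in for any real append-only ledger (names and IDs are illustrative):

```python
import hashlib

# Hash-on-a-ledger sketch: register a media file's SHA-256 digest,
# then later verify a copy against it. The "ledger" here is a dict
# standing in for an actual blockchain.

def digest(data: bytes) -> str:
    """SHA-256 digest of a media file's raw bytes."""
    return hashlib.sha256(data).hexdigest()

ledger = {}  # stand-in for an append-only chain: media_id -> digest

def register(media_id: str, data: bytes) -> None:
    ledger[media_id] = digest(data)

def verify(media_id: str, data: bytes) -> bool:
    """True only if the file matches what was originally registered."""
    return ledger.get(media_id) == digest(data)

original = b"\x00\x01 raw video bytes ..."
register("interview-2020-06-01", original)
print(verify("interview-2020-06-01", original))              # → True
print(verify("interview-2020-06-01", original + b"tamper"))  # → False
```

Real deployments add per-person digital signatures over the digest, so the ledger records who vouched for the file as well as what its bytes were.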
Jan 2021: Blockchain is a weapon for digital identity and against deepfakes: interview with Jun Li of Ontology
Facebook contest shows just how hard it is to detect deepfakes.
Facebook has revealed the winners of a competition for software that can detect deepfakes, doctored videos created using artificial intelligence. And the results show that researchers are still mostly groping in the dark when it comes to figuring out how to automatically identify them before they influence the outcome of an election or spark ethnic violence. The best algorithm in Facebook’s contest could accurately determine if a video was real or a deepfake just 65% of the time. https://fortune.com/2020/06/12/deepfake-detection-contest-facebook/ +https://ai.facebook.com/blog/deepfake-detection-challenge-results-an-open-initiative-to-advance-ai/
New Blockchain Marketplace Aims to Tackle Morality Issues of Deepfake Media. Cointelegraph interviewed Arif Khan, CEO of blockchain marketplace Alethea AI, about how to address the legal and moral quagmire that “deepfakes” have created. https://cointelegraph.com/news/new-blockchain-marketplace-aims-to-tackle-morality-issues-of-deepfake-media
Israeli startup Cyabra is one of the pioneers in the rapid identification of this kind of digital content. Thanks to its technology, fake videos can be removed before they go viral. Cyabra CEO Dan Brahmy said there are two ways to train a computer algorithm to analyze the authenticity of a video. http://www.aurora-israel.co.il/startup-israeli-detecta-videos-manipulados-por-expertos
Apr 2020
Fighting AI with AI? https://nocamels.com/2020/04/robot-vs-robot-can-ai-fight-fake-news/
Mar 2020
Researchers are utilising artificial intelligence (AI) to develop an early warning system that can identify manipulated images, deepfake videos and online disinformation ahead of the 2020 US election.
The project is an effort to combat the rise of coordinated social media campaigns that incite violence, sow discord and threaten the integrity of democratic elections.
According to the study, published in the journal Bulletin of the Atomic Scientists, the scalable, automated system uses content-based image retrieval and applies computer vision-based techniques to root out political memes from multiple social networks. LINK
This Deepfake Detector May Be Critical for Deciding What’s Real Online (LINK)