Friday, March 20, 2020

Fighting back with technology (universities)

nov23

Meta, parent company of Instagram and Facebook, will require political advertisers around the world to disclose any use of artificial intelligence in their ads, starting next year, the company said Wednesday, as part of a broader move to limit so-called “deepfakes” and other digitally altered misleading content.

The rule is set to take effect next year, the company added, ahead of the 2024 US election and other future elections worldwide.

https://edition.cnn.com/2023/11/08/tech/meta-political-ads-ai-deepfakes/index.html

out23

As AI photo editing apps become more accessible and pervasive, software and hardware makers are building tools to help consumers verify the authenticity of an image starting from the moment of capture.

Driving the news: Leica announced Wednesday that its new M11-P camera will be the first with the ability to apply Content Credentials from the moment an image is captured.

Why it matters: Adobe, Microsoft and others are adding metadata called Content Credentials to note when AI has been used to create or alter an image. But extending content verification all the way to the camera is seen as a critical step in the battle against deepfakes.
https://www.axios.com/2023/10/26/deepfakes-content-credentials-hardware-software

ago23

A method to restore deepfake videos to their original content has been developed by a team at the National Institute of Informatics.

The misuse of these convincingly realistic fake videos created using artificial intelligence software has become an issue affecting society. For example, a person’s face can be replaced by that of a celebrity or politician and the voice and facial expressions can seem to show something the celebrity or politician never said or did.

https://japannews.yomiuri.co.jp/society/general-news/20230830-133239/

jun23

Computer scientists at the University of Waterloo figured out how to successfully fool voice authentication systems 99% of the time using deepfake voice creation software.

Andre Kassis, a Computer Security and Privacy PhD candidate at Waterloo, who is also the lead author of this research study, explains how voice authentication works:

"When enrolling in voice authentication, you are asked to repeat a certain phrase in your own voice. The system then extracts a unique vocal signature (voiceprint) from this provided phrase and stores it on a server ... For future authentication attempts, you are asked to repeat a different phrase and the features extracted from it are compared to the voiceprint you have saved in the system to determine whether access should be granted."
https://www.pcmag.com/news/deepfake-software-fools-voice-authentication-with-99-success-rate
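The enrollment-and-matching flow Kassis describes can be sketched as below. This is a toy illustration, not the Waterloo system or any real product: the hand-made feature vectors and the cosine-similarity threshold stand in for an actual voiceprint model.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def enroll(feature_vector, store, user_id):
    """Enrollment: extract a voiceprint and store it on the 'server'."""
    store[user_id] = feature_vector

def authenticate(feature_vector, store, user_id, threshold=0.9):
    """Verification: compare features from a new phrase to the stored voiceprint."""
    voiceprint = store.get(user_id)
    if voiceprint is None:
        return False
    return cosine_similarity(feature_vector, voiceprint) >= threshold

# Toy demo: short vectors stand in for features extracted from audio.
server = {}
enroll([0.9, 0.1, 0.4, 0.8], server, "alice")
print(authenticate([0.88, 0.12, 0.41, 0.79], server, "alice"))  # similar voice -> True
print(authenticate([0.1, 0.9, 0.8, 0.1], server, "alice"))      # different voice -> False
```

The attack in the article works precisely because a good voice clone produces features that land above this kind of similarity threshold.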

mai23

Tencent Cloud, a cloud service branch of China-based technology platform Tencent, launched its new digital human production platform that will allow users to create deepfakes of anyone, stated Cointelegraph. The deepfakes are expected to be created from a three-minute video and about 100 spoken sentences as reference. 

Sources revealed that Tencent Cloud’s deepfake generator will use its own artificial intelligence (AI) methods to recreate videos of people. Deepfake videos have reportedly already been used in fraud schemes that impersonate prominent figures to mislead investors, Cointelegraph highlighted. 

https://www.financialexpress.com/business/blockchain-tencent-cloud-launches-its-deepfake-creation-tool-3070184/


ab23
Intel claims to have the solution: FakeCatcher, the world’s first real-time deepfake detector. It can — in milliseconds — detect fake videos with a reported 96% accuracy rate.
https://www.ravepubs.com/video-deepfakes-theres-a-new-sheriff-in-town/


fev23
The core of our “immunization” process is to leverage so-called adversarial attacks on these generative models. In particular, we implemented two different PhotoGuards, focused on latent diffusion models (like Stable Diffusion). For simplicity, we can think of such models as having two parts:
https://gradientscience.org/photoguard/
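The "immunization" idea, as described, perturbs an image so that a latent diffusion model's encoder maps it somewhere useless. A toy sketch under loose assumptions: a made-up linear "encoder" and finite-difference gradients stand in for real backpropagation through Stable Diffusion.

```python
def encoder(x):
    # Toy stand-in for a diffusion model's image encoder:
    # a fixed linear map from "pixels" to a 2-D latent.
    W = [[0.6, -0.2, 0.3, 0.1],
         [0.1, 0.5, -0.4, 0.2]]
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def latent_distance(z, target):
    return sum((a - b) ** 2 for a, b in zip(z, target))

def immunize(x, target_latent, eps=0.1, steps=200, lr=0.01, h=1e-5):
    """Find a small perturbation delta (|delta_i| <= eps) that pushes
    encoder(x + delta) toward a useless target latent (PGD-style)."""
    delta = [0.0] * len(x)
    for _ in range(steps):
        for i in range(len(x)):
            # Finite-difference gradient of the latent distance w.r.t. delta_i.
            plus = list(delta); plus[i] += h
            minus = list(delta); minus[i] -= h
            g = (latent_distance(encoder([a + b for a, b in zip(x, plus)]), target_latent)
                 - latent_distance(encoder([a + b for a, b in zip(x, minus)]), target_latent)) / (2 * h)
            delta[i] = max(-eps, min(eps, delta[i] - lr * g))
    return delta

image = [0.8, 0.2, 0.5, 0.7]
target = [0.0, 0.0]  # push the latent toward "nothing"
d = immunize(image, target)
before = latent_distance(encoder(image), target)
after = latent_distance(encoder([a + b for a, b in zip(image, d)]), target)
print(after < before)  # the perturbed image encodes closer to the target -> True
```

The real PhotoGuard optimizes over full images with autograd, but the structure is the same: a bounded perturbation, chosen adversarially against the encoder.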

jan23

Two years ago, Microsoft's chief scientific officer Eric Horvitz, the co-creator of the spam email filter, began trying to solve this problem. "Within five or ten years, if we don't have this technology, most of what people will be seeing, or quite a lot of it, will be synthetic. We won't be able to tell the difference."

"Is there a way out?" Horvitz wondered.

Eventually, Microsoft and Adobe joined forces and designed a new feature called Content Credentials, which they hope will someday appear on every authentic photo and video.

Here's how it works:

Imagine you're scrolling through your social feeds. Someone sends you a picture of snow-covered pyramids, with the claim that scientists found them in Antarctica – far from Egypt! A Content Credentials icon, published with the photo, will reveal its history when clicked on.

"You can see who took it, when they took it, and where they took it, and the edits that were made," said Dana Rao, Adobe's general counsel. With no verification icon, the user could conclude, "I think this person may be trying to fool me!"

https://www.cbsnews.com/news/creating-a-lie-detector-for-deepfakes-artificial-intelligence/


nov22

Intel claims it has developed an AI model that can detect in real time whether a video is using deepfake technology by looking for subtle changes in color that would be evident if the subject were a live human being.

FakeCatcher is claimed by the chipmaking giant to be capable of returning results in milliseconds and to have a 96 percent accuracy rate.
https://www.theregister.com/2022/11/15/intel_fakecatcher/
https://www.techspot.com/news/96655-intel-detection-tool-uses-blood-flow-identify-deepfakes.html

set22

Audio deepfakes potentially pose an even greater threat, because people often communicate verbally without video – for example, via phone calls, radio and voice recordings. These voice-only communications greatly expand the possibilities for attackers to use deepfakes.

To detect audio deepfakes, we and our research colleagues at the University of Florida have developed a technique that measures the acoustic and fluid dynamic differences between voice samples created organically by human speakers and those generated synthetically by computers.

https://theconversation.com/deepfake-audio-has-a-tell-researchers-use-fluid-dynamics-to-spot-artificial-imposter-voices-189104




ago22

A cybersecurity expert is puzzled by recent actions taken by a group of researchers working at the Samsung AI Centre in Moscow, saying their work might inevitably end up doing more harm than good.

In a research paper, they wrote that they have invented something called Mega Portraits, which is short for megapixel portraits, based on a concept called neural head avatars, which, they said, “offer a new fascinating way of creating virtual head models. They bypass the complexity of realistic physics-based modeling of human avatars by learning the shape and appearance directly from the videos of talking people.”

Lou Steinberg, the founder of CTM Insights, a New York City-based cybersecurity research lab and incubator, said intentionally edited images, also known as deepfakes, are a growing and troubling issue with possibilities that include editing a picture of someone to cause reputational/brand damage, often with AI tools that are becoming more capable.

https://www.itworldcanada.com/article/samsungs-handling-of-deepfakes-research-questioned/499236

jul22

Researchers at Samsung Labs revealed last week a new artificial intelligence technology that promises to raise the bar for deepfake creation. The method can generate "realistic high-definition images" of different personalities using a single source photo.

Named "MegaPortraits", the project stands out for its ability to create neural head avatars even when the person in the original photo has physical characteristics different from those of the individual whose image supplies the animated movements. This is a major challenge in applying the technology.

https://www.tecmundo.com.br/internet/242413-tecnologia-samsung-cria-deepfakes-usando-qualquer-foto.htm



jul22

The proliferation of digitally altered videos of people, such as deepfakes, has seen scientists at the DSO National Laboratories devise tools to automatically detect them when they are used.

Since last year, the team has been employing artificial intelligence (AI) technology to pick up signs that may not be perceptible by humans.

These include poorly rendered fine details such as hair or unnatural lip movements.

https://www.straitstimes.com/singapore/dso-national-laboratories-mark-golden-jubilee-with-defence-tech-showcase


mai22

Google has quietly banned deepfake projects on its Colaboratory (Colab) service, putting an end to the large-scale utilization of the platform’s resources for this purpose.

Colab is an online computing resource that allows researchers to run Python code directly through the browser while using free computing resources, including GPUs, to power their projects.

Based on archive.org historical data, the ban took place earlier this month, with Google Research quietly adding deep fakes to the list of disallowed projects.

As noted on Discord by DFL developer ‘chervonij,’ those who attempt to train deepfakes on the Colab platform right now are served with the following error:

“You may be executing code that is disallowed, and this may restrict your ability to use Colab in the future. Please note the prohibited actions specified in our FAQ.”

The impact of this new restriction is expected to be far-reaching in the deepfake world, as many users utilize pre-trained models with Colab to jump-start their high-resolution projects.

Colab was making this process very easy even for those with no coding background, which is why so many tutorials suggest Google’s “free resource” platform to launch deepfake projects.

https://www.bleepingcomputer.com/news/google/google-quietly-bans-deepfake-training-projects-on-colab/

Google has begun banning projects that train machine-learning systems to create deepfakes on the Colab platform, BleepingComputer reported on Monday (30). The changes to the service's usage policy were reportedly implemented in early May.

A free cloud service hosted by the Mountain View company, Google Colaboratory lets researchers use the Python language directly from the browser, drawing on a variety of computing resources, including GPUs. The goal is to encourage the development of projects using artificial intelligence (AI).

https://www.tecmundo.com.br/internet/239512-google-proibe-projetos-treinamento-deepfake.htm


mai22
We present a simple, yet general method to detect fake videos displaying human subjects, generated via Deep Learning techniques. The method relies on gauging the complexity of heart rate dynamics as derived from the facial video streams through remote photoplethysmography (rPPG). Features analyzed have a clear semantics as to such physiological behaviour. The approach is thus explainable both in terms of the underlying context model and the entailed computational steps. Most important, when compared to more complex state-of-the-art detection methods, results so far achieved give evidence of its capability to cope with datasets produced by different deep fake models.
https://link.springer.com/chapter/10.1007/978-3-031-06430-2_16
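The physiological cue behind this approach can be sketched as below. This is a hedged toy version: a synthetic green-channel trace and a naive DFT peak search stand in for real rPPG extraction and the paper's heart-rate-complexity features.

```python
import math

def estimate_bpm(signal, fps):
    """Estimate heart rate from an rPPG trace (e.g. mean green-channel
    intensity of the face region per frame) via a naive DFT peak search
    restricted to the plausible human heart-rate band (0.7-4 Hz)."""
    n = len(signal)
    mean = sum(signal) / n
    x = [v - mean for v in signal]  # remove the DC component
    best_freq, best_power = 0.0, -1.0
    for k in range(1, n // 2):
        freq = k * fps / n
        if not 0.7 <= freq <= 4.0:
            continue
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        if power > best_power:
            best_freq, best_power = freq, power
    return best_freq * 60.0  # Hz -> beats per minute

# Synthetic 10-second trace at 30 fps with a 1.2 Hz (72 bpm) pulse.
fps, seconds = 30, 10
trace = [0.5 + 0.01 * math.sin(2 * math.pi * 1.2 * t / fps) for t in range(fps * seconds)]
print(round(estimate_bpm(trace, fps)))  # -> 72
```

A live subject yields a clean, plausible pulse like this; deepfake generators tend to destroy or distort the signal, which is what the detector measures.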


mai22

Nine of the Top 10 Liveness Detection Systems are Vulnerable to Deepfakes: Report



maio22

This software differentiates authentic images from deepfakes

With its technology, Truepic—a winner of Fast Company’s 2022 World Changing Ideas Awards—verifies the veracity of images, restoring trust in an age of disinformation.

This worrying democratization of synthetic media is the motivation behind Truepic Lens, a software development kit (SDK) that can fully integrate with apps that rely on images for their operations, allowing them to verify media in real time, and provide assurances to customers where necessary. Truepic also envisions a deeper cultural importance: to restore trust in a world rife with disinformation. When anything could be fake, what can be trusted? “We’re now looking at this future of: How do we operate where synthetic media is available to anyone?” says Jeffrey McGregor, Truepic’s CEO.
https://www.fastcompany.com/90739310/this-software-differentiates-authentic-images-from-deepfakes


maio22

New method detects deepfake videos with up to 99% accuracy.
Two-pronged technique detects manipulated facial expressions and identity swaps. Computer scientists at UC Riverside can detect manipulated facial expressions in deepfake videos with higher accuracy than current state-of-the-art methods. The method also works as well as current methods in cases where the facial identity, but not the expression, has been swapped, leading to a generalized approach to detect any kind of facial manipulation. The achievement brings researchers a step closer to developing automated tools for detecting manipulated videos that contain propaganda or misinformation.
https://news.ucr.edu/articles/2022/05/03/new-method-detects-deepfake-videos-99-accuracy


ab22
BLOCKCHAIN

Being aware of potentially grave consequences, an alliance spanning the software, chips, cameras, and social media giants aims to create standards to ensure the authenticity of images and videos shared online. Known as the Coalition for Content Provenance and Authenticity (C2PA), the group consists of Photoshop developer Adobe, Microsoft, Intel and Twitter in cooperation with media outlet BBC and SoftBank-owned chip designer Arm, with the ultimate aim to fight deepfakes using blockchain technology.

Besides those, Japanese camera makers Sony and Nikon are a part of the coalition, to develop an open standard intended to work with any software showing evidence of tampering, as per Nikkei. Adobe’s content authenticity initiative’s senior director Andy Parsons even told Nikkei that we’ll “see many of these [features] emerging in the market this year. And I think in the next two years, we will see many sorts of end-to-end ecosystems.”

https://techhq.com/2022/04/sony-adobe-intel-among-tech-firms-taking-on-deepfakes-with-blockchain-technology/



ab22
It is necessary to find methods to automatically identify misleading videos. In this paper, three categories of features (content features, uploader features and environment features) are proposed to construct a convolutional neural network (CNN) for misleading video detection. The experiment showed that all three proposed categories of features play a vital role in detecting misleading videos.
https://www.nature.com/articles/s41598-022-10117-y



ab22
A team of researchers from the University of Naples Federico II and the Technical University of Munich has created a new deepfake detection system that they believe could turn the tides in the battle against fraud. Unlike other deepfake detection systems, the new POI-Forensics system is not trained with any deepfake videos. It instead looks only at real videos of a subject, and then uses those videos to create a biometric profile of that individual.
https://findbiometrics.com/researchers-introduce-new-approach-deepfake-detection-040806/

abr22
Current text-to-speech algorithms produce realistic fakes of human voices, making deepfake detection a much-needed area of research. While researchers have presented various techniques for detecting audio spoofs, it is often unclear exactly why these architectures are successful: preprocessing steps, hyperparameter settings, and the degree of fine-tuning are not consistent across related work. Which factors contribute to success, and which are accidental? In this work, we address this problem: we systematize audio spoofing detection by re-implementing and uniformly evaluating architectures from related work. We identify overarching features for successful audio deepfake detection, such as using cqtspec or logspec features instead of melspec features, which improves performance by 37% EER on average, all other factors constant.

Does Audio Deepfake Detection Generalize?



mar22

In late January 2022, Estonia gained its sixth tech unicorn after identity verification startup Veriff raised $100 million in Series C funding.
Veriff is an AI-assisted identity verification and know your customer (KYC) platform used by companies around the world to ensure their customers are who they claim to be.
Most of the company's biggest customers are in global fintech, where it faces competition from the likes of authentication and verification services Jumio and Onfido. Deepfake technology poses significant challenges for consumer authentication and verification. One fast-growing startup hopes that AI can also be a solution.
https://www.zdnet.com/article/in-a-world-of-deepfakes-this-billion-dollar-startup-wants-you-to-trust-ai-powered-id-checks/




fev22


Deep learning is an effective technique used in various fields, including natural language processing, computer vision, image processing and machine vision. Deepfakes use deep learning to synthesize and manipulate images of a person so convincingly that human beings cannot distinguish the fake. Deepfakes are generated using generative adversarial networks (GANs) and may threaten the public, so detecting deepfake image content plays a vital role. Much research has been done on detecting deepfake image manipulation, but the main issues with existing techniques are inaccuracy and high processing time. In this work we implement deepfake face image detection using a deep learning technique, Fisherface with Local Binary Pattern Histogram (FF-LBPH). The Fisherface algorithm recognizes the face by reducing the dimensionality of the face space using LBPH; a deep belief network (DBN) with restricted Boltzmann machines (RBM) is then applied as the deepfake detection classifier.
https://peerj.com/articles/cs-881/
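A minimal illustration of the LBPH building block the abstract relies on: compute an 8-neighbour Local Binary Pattern code for each pixel and histogram the codes. The Fisherface projection and the DBN/RBM classifier are omitted; this only shows the texture feature itself.

```python
def lbp_histogram(img):
    """Compute an 8-neighbour Local Binary Pattern code for every interior
    pixel of a grayscale image (list of lists) and return the 256-bin histogram."""
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left
    hist = [0] * 256
    for r in range(1, len(img) - 1):
        for c in range(1, len(img[0]) - 1):
            center = img[r][c]
            code = 0
            for bit, (dr, dc) in enumerate(offsets):
                # Each neighbour contributes one bit: 1 if it is >= the center.
                if img[r + dr][c + dc] >= center:
                    code |= 1 << bit
            hist[code] += 1
    return hist

face_patch = [
    [10, 20, 30, 40],
    [15, 25, 35, 45],
    [20, 30, 40, 50],
    [25, 35, 45, 55],
]
h = lbp_histogram(face_patch)
print(sum(h))  # one code per interior pixel -> 4
```

The resulting 256-bin histogram is the low-dimensional texture descriptor that downstream classifiers (here, Fisherface plus a DBN) consume.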

jan22
ADOBE


An Adobe-led coalition of tech companies set up to combat deepfakes has released the first version of its technical specification for digital provenance. The Coalition for Content Provenance and Authenticity (C2PA), which also counts Microsoft, Arm, Intel, Truepic and the BBC among its members, says the standard will allow content creators and editors to create media that can't secretly be tampered with. It allows them to selectively disclose information about who has created or changed digital content and how it has been altered. Platforms can define what information is associated with each type of asset - for example, images, videos, audio, or documents - along with how that information is presented and stored, and how evidence of tampering can be identified.
https://www.forbes.com/sites/emmawoollacott/2022/01/27/new-standard-aims-to-protect-against-deepfakes/?sh=6ea1bf7b265a + https://www.axios.com/adobes-effort-catch-deep-fakes-milestone-authentication-ffd71695-53d1-4ac1-8f1b-bca1330ac16c.html
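The core of this kind of provenance scheme can be sketched as below. This is a deliberately simplified stand-in: real C2PA manifests are cryptographically signed, embedded structures with defined assertion types, not a bare hash in a dict.

```python
import hashlib

def make_manifest(asset_bytes, creator, edits):
    """Minimal provenance manifest in the spirit of Content Credentials:
    bind the creator and edit history to a hash of the asset."""
    return {
        "creator": creator,
        "edits": edits,  # e.g. ["crop", "color-balance"]
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
    }

def verify(asset_bytes, manifest):
    """Check that the asset still matches the hash recorded in its manifest."""
    return hashlib.sha256(asset_bytes).hexdigest() == manifest["asset_sha256"]

photo = b"...raw image bytes..."
manifest = make_manifest(photo, "news-photographer", ["crop"])
print(verify(photo, manifest))                # untouched -> True
print(verify(photo + b"tampered", manifest))  # altered after signing -> False
```

Selective disclosure in the standard means the creator chooses which fields go into the manifest; tamper evidence comes from the hash binding, as above, plus digital signatures over the manifest itself.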






jan22

Researchers from the Ruhr-University Bochum in Germany have released a new report with suggestions on how to tackle voice deepfakes through the use of a novel dataset.

The research focuses mainly on the “image domain” as the researchers claimed that studies exploring generated audio signals have so far been neglected by global research. To this end, Joel Frank and Lea Schönherr researched three different aspects of the audio deepfake challenge to “narrow this gap.”

The first consists of an introduction to common signal processing techniques used for analyzing audio signals, including how to read spectrograms for audio signals, and Text-To-Speech (TTS) models. https://www.biometricupdate.com/202201/a-new-idea-to-fight-voice-deepfakes-from-ruhr-university-bochum-researchers


dez21
A new tool can distinguish a deepfake from a real video by analyzing the corneas. It has always been said that the eyes are the mirror of the soul, and the proverb holds up: thanks to the distinctive glint in our eyes, an AI can tell a real video from a deepfake, the technique that puts one person's face on another's body. Many have used it for entertainment, or to see how they would look in a superhero movie; others have used it to insert celebrities into porn, and its worst use has been to promote disinformation campaigns. To keep us from being fooled, the University at Buffalo has created an AI that can tell whether a video is real. It does so entirely through eye analysis: the tool, which was 94% successful in its tests, analyzes the corneas. https://cvbj.biz/the-glitter-of-the-eyes-is-the-trick-to-detect-deepfakes-with-this-ai-technology.html
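The corneal cue can be illustrated with a toy consistency check between the specular-highlight masks of the two eyes. The masks here are hand-made; the real system extracts and aligns them from face photos before comparing.

```python
def iou(mask_a, mask_b):
    """Intersection-over-union of two same-sized binary masks
    (here: thresholded specular highlights on the two corneas)."""
    inter = union = 0
    for row_a, row_b in zip(mask_a, mask_b):
        for a, b in zip(row_a, row_b):
            inter += a and b
            union += a or b
    return inter / union if union else 0.0

# In a real photo both corneas reflect the same light sources, so their
# highlight patterns match; GAN-generated faces often show inconsistent ones.
left_real  = [[0, 1, 1], [0, 1, 0], [0, 0, 0]]
right_real = [[0, 1, 1], [0, 1, 0], [0, 0, 0]]
right_fake = [[1, 0, 0], [0, 0, 1], [1, 0, 0]]

print(iou(left_real, right_real))        # consistent highlights -> 1.0
print(iou(left_real, right_fake) < 0.5)  # inconsistent -> True
```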

nov21

One promising approach involves tracking a video’s provenance, “a record of everything that happened from the point that the light hit the camera to when it shows up on your display,” explained James Tompkin, a visual computing researcher at Brown.
But problems persist. “You need to secure all the parts along the chain to maintain provenance, and you also need buy-in,” Tompkin said. “We’re already in a situation where this isn’t the standard, or even required, on any media distribution system.”
And beyond simply ignoring provenance standards, wily adversaries could manipulate the provenance systems, which are themselves vulnerable to cyberattacks. “If you can break the security, you can fake the provenance,” Tompkin said. “And there’s never been a security system in the world that’s never been broken into at some point.”
Given these issues, a single silver bullet for deepfakes appears unlikely. Instead, each strategy at our disposal must be just one of a “toolbelt of techniques we can apply,” Tompkin said. https://brownpoliticalreview.org/2021/11/hunters-laptop-deepfakes-and-the-arbitration-of-truth/


------- CLASS of 9/12/21 [technologies and social networks were covered in the same class; the class document was updated with the social-media material but not the technology material, except where indicated]

nov21

Germany’s federal government is expanding resources for a multi-year deepfake detection project that it is funding. Executives with BioID, a German biometric anti-spoofing vendor, say the firm has joined a consortium of organizations seeking effective methods of unmasking fraudulent, AI-based images, video and audio.

The company is joining several research organizations in the consortium, including the Fraunhofer Institute for Telecommunications’ Heinrich Hertz Institute, digital ID firm Bundesdruckerei and the Berlin Institute for Safety and Security Research. The project is funded by Germany’s Federal Ministry of Education and Research (BMBF). https://www.biometricupdate.com/202111/german-anti-deepfake-effort-takes-on-another-fighter
(included in the 9/12/21 class)

nov21
ADOBE
Adobe, which makes Photoshop, has launched a new “Verify” website that people can drag and drop online photos into to check what changes have been made. The move comes as a senior executive at the tech company told The Telegraph that editing software was now so sophisticated that people needed to view all images and videos online with the same skepticism as scam emails. (...) The company also announced at its recent Adobe Max conference that alongside the website, it was launching a new “content credentials” feature, which will let editors using their software publicly list all the changes made to photos. The company is working on a similar system for videos. https://www.telegraph.co.uk/news/2021/11/05/photoshopped-images-deepfakes-uncovered-new-website-shows-edits/ (included in the 9/12/21 class)

out21

HUNTINGTON BEACH, CA, UNITED STATES, October 21, 2021 /EINPresswire.com/ -- ImageKeeper® LLC announced today the release of its free consumer app, “ProveIt-Now!™”. The app allows the user, at the press of a button, to capture photos, video, or audio on their smartphone or tablet that cannot be altered without detection. With the proliferation of deepfakes, manipulated video, and edited photos, the threat of being victimized is real and growing, according to the FBI’s recent Private Industry Notification (PIN) 210310-001, dated 20 March 2021. https://www.einnews.com/pr_news/554437136/free-proveit-now-app-protects-against-deepfakes-media-manipulation-fraud
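Capture-time integrity of the kind advertised here can be sketched with a keyed authentication code. The device key and both functions below are hypothetical, not ImageKeeper's actual scheme; real products would keep the key in secure hardware, not in application code.

```python
import hmac, hashlib

# Hypothetical device key provisioned at manufacture (illustration only).
DEVICE_KEY = b"secret-device-key"

def sign_capture(media_bytes):
    """Tag media with an authentication code at the moment of capture."""
    return hmac.new(DEVICE_KEY, media_bytes, hashlib.sha256).hexdigest()

def is_unaltered(media_bytes, tag):
    """Any later edit to the bytes invalidates the tag."""
    return hmac.compare_digest(sign_capture(media_bytes), tag)

photo = b"jpeg bytes straight from the sensor"
tag = sign_capture(photo)
print(is_unaltered(photo, tag))               # True
print(is_unaltered(photo + b" edited", tag))  # False
```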


set21

Concept: Estonian startup Sentinel has released a solution to detect fake media content on the web. Its platform helps democratic governments, defense agencies, and enterprises stop the risk of AI-generated deepfakes with its protection technology. Nature of Disruption: Sentinel’s detection model is based on the Defense in Depth (DiD) approach. This model utilizes a multi-layer defense consisting of a vast database of deepfakes and neural network classifiers to detect deepfakes. Users need to upload digital media through the website or API. Sentinel’s system then analyzes the AI-forgery automatically to determine if the content, whether audio, video, or image is a deepfake or not. Finally, it shows a visualization of the manipulations done. https://www.medicaldevice-network.com/research-reports/sentinel-offers-ai-based-solution-to-detect-deepfakes/


set21
Microsoft's M12 fund is leading a $26 million investment round for Truepic, a San Diego-based startup trying to fight the emerging wave of digitally altered photos and videos, known colloquially as deepfakes. Why it matters: Already a problem, manipulated media is expected to become an even bigger threat in the coming years as technology makes it easier to modify video to make anyone say anything. https://www.axios.com/deepfake-foiling-startup-26m-microsoft-97d4eaf2-849b-43ca-b98b-3f3fead42660.html + https://www.morningbrew.com/emerging-tech/stories/2021/09/15/antideepfake-company-truepic-authenticates-images-rather-detecting-fakes?__cf_chl_jschl_tk__=pmd_dl84v7o8gnlzJvng9ukAFaboqzkwoA6KJlRS0kqeIHI-1631793550-0-gqNtZGzNApCjcnBszQcl (included in the 9/12/21 class)


aug21

Can Deepfake Fool Facial Recognition? A New Study Says Yes! Researchers at Sungkyunkwan University in South Korea tricked Amazon and Microsoft APIs with deepfakes. Here's why that's worrying. Researchers at Sungkyunkwan University in Suwon, South Korea, tested the quality of current deepfake technology. They tested both Amazon and Microsoft APIs using open-source and commonly used deepfake video generating software to see how well they perform.

The researchers used the faces of Hollywood celebrities. In order to create solid deepfakes, the software needs a lot of high-quality images from different angles of the same persons, which are much easier to acquire of celebrities instead of ordinary people.

The researchers also decided to use Microsoft and Amazon’s APIs as the benchmarks for their study, as both companies offer celebrity face recognition services. They used publicly available datasets and created just over 8,000 deepfakes. From each deepfake video, they extracted multiple faceshots and submitted them to the APIs in question. https://www.makeuseof.com/deepfake-fool-facial-recognition/

jul21
In this paper, we investigate the potential of image tagging in serving DeepFake provenance tracking. Specifically, we devise a deep learning-based approach, named FakeTagger, with a simple yet effective encoder and decoder design along with channel coding to embed a message into the facial image, such that the embedded message can be recovered with high confidence after various drastic GAN-based DeepFake transformations.

FakeTagger: Robust Safeguards against DeepFake Dissemination via Provenance Tracking
Run Wang, Felix Juefei-Xu, Meng Luo, Yang Liu, Lina Wang
School of Cyber Science and Engineering, Wuhan University, China; Key Laboratory of Aerospace Information Security and Trusted Computing, Ministry of Education, China; Alibaba Group, USA; Northeastern University, USA; Nanyang Technological University, Singapore
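The embed/recover round trip can be illustrated crudely. FakeTagger uses a learned encoder/decoder that survives GAN transformations; the least-significant-bit scheme with a repetition channel code below is only a miniature stand-in for that idea.

```python
def embed(pixels, message_bits, repeat=3):
    """Hide each message bit in the least-significant bit of `repeat`
    consecutive pixel values (a crude repetition channel code)."""
    out = list(pixels)
    i = 0
    for bit in message_bits:
        for _ in range(repeat):
            out[i] = (out[i] & ~1) | bit
            i += 1
    return out

def recover(pixels, n_bits, repeat=3):
    """Majority-vote decode, tolerating some pixel corruption."""
    bits = []
    i = 0
    for _ in range(n_bits):
        ones = sum(pixels[i + j] & 1 for j in range(repeat))
        bits.append(1 if ones * 2 > repeat else 0)
        i += repeat
    return bits

image = list(range(50, 70))          # 20 toy pixel values
msg = [1, 0, 1, 1]
tagged = embed(image, msg)
tagged[0] ^= 1                       # simulate mild corruption of one pixel
print(recover(tagged, len(msg)))     # -> [1, 0, 1, 1]
```

The channel coding is what buys robustness: even after the tag is partially destroyed, majority voting recovers the embedded message.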


jul21
How Blockchain Can Help Combat Disinformation
It’s not a cure-all, but it does have the potential to address many of the risks and root causes.
Blockchain has enormous potential: Blockchain systems use a decentralized, immutable ledger to record information in a way that’s constantly verified and re-verified by every party that uses it, making it nearly impossible to alter information after it’s been created. One of the most well-known applications of blockchain is to manage the transfer of cryptocurrencies like Bitcoin. But blockchain’s ability to provide decentralized validation and a clear chain of custody makes it potentially effective as a tool to track not just financial resources, but all sorts of forms of content. https://hbr.org/2021/07/how-blockchain-can-help-combat-disinformation
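The chain-of-custody property described above can be sketched as a minimal hash chain: each block commits to a content hash and to the previous block, so rewriting history invalidates every later link. This omits the decentralized consensus that makes real blockchains hard to rewrite; it only shows the tamper-evidence mechanism.

```python
import hashlib

def add_block(chain, content):
    """Append a block binding the content hash to the previous block."""
    prev = chain[-1]["block_hash"] if chain else "genesis"
    content_hash = hashlib.sha256(content).hexdigest()
    block_hash = hashlib.sha256((prev + content_hash).encode()).hexdigest()
    chain.append({"prev": prev, "content_hash": content_hash, "block_hash": block_hash})

def chain_valid(chain):
    """Re-derive every link; any retroactive edit breaks the chain."""
    prev = "genesis"
    for block in chain:
        if block["prev"] != prev:
            return False
        expected = hashlib.sha256((prev + block["content_hash"]).encode()).hexdigest()
        if block["block_hash"] != expected:
            return False
        prev = block["block_hash"]
    return True

ledger = []
add_block(ledger, b"original article text")
add_block(ledger, b"follow-up correction")
print(chain_valid(ledger))             # True
ledger[0]["content_hash"] = "f" * 64   # try to rewrite history
print(chain_valid(ledger))             # False
```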

jul21

The drive to identify deepfakes in audiovisual media from genuine content has received a boost with a $700,000 international competition organised by AI Singapore, a national artificial intelligence (AI) programme under the National Research Foundation. The five-month-long Trusted Media Challenge aims to encourage AI enthusiasts and researchers around the world to design and test models and solutions that can detect modified audio and video, AI Singapore said in a statement on Thursday (July 15). By incentivising involvement of international contributors, and sourcing innovation ideas globally, the competition will also strengthen Singapore's position as a global AI hub, it added. https://www.straitstimes.com/tech/ai-singapore-launches-700k-competition-to-combat-deepfakes 


jul21

Voice biometrics: the new defense against deepfakes

https://www.techradar.com/news/voice-biometrics-the-new-defense-against-deepfakes

Jul21

Visimo has partnered with Florida State University to help the U.S. Air Force create a technology for detecting and preventing deepfakes, which refer to synthetic or heavily altered media made to mislead the public. The company said Thursday it will develop the Aletheia software under phase one of USAF’s Small Business Technology Transfer program. The Department of Defense regards deepfakes as an issue that threatens national security, specifically the military’s reliance on open-source intelligence. Aletheia, named after the Greek goddess of truth, discovery and disclosure, is envisioned to detect a wider range of deepfake types compared to existing systems. The future software would do this by analyzing digital fingerprints within fake media. The Aletheia project marks Visimo’s third award in the past year under USAF’s STTR program, which partners small businesses with academic institutions for technology development projects. https://blog.executivebiz.com/2021/07/visimo-florida-state-university-to-develop-deepfake-detector-for-usaf/


jun21
The rising trend in deepfakes has raised concerns regarding the “destructive potential”, and the incidents have opened the global debate over fake posts and deepfakes, for which Kroop AI, a Gandhinagar-based startup, has developed a solution. Started in February 2021 by Jyoti Joshi, an AI scientist; and IIT alumni Milan Chaudhari and Sarthak Gupta, Kroop AI is a deployable AI-enabled platform for businesses and individuals to identify deepfakes across audio, video, or image data. The platform is deep learning-based, which detects and analyses deepfakes in detail across different platforms and information mediums. “There have already been several reports coming in stating the misuse of deepfake technology to commit millions of dollars of fraud,” says 33-year-old Jyoti. https://yourstory.com/2021/06/startup-bharat-gujarat-iit-ai-scientist-deepfakes/amp 

mai21
In this paper, we conduct a comprehensive review of deepfake creation and detection technologies using deep learning approaches. In addition, we give a thorough analysis of these technologies and their application in deepfake detection. Our study will be beneficial for researchers in this field as it covers the recent state-of-the-art methods that discover deepfake videos or images in social content. It will also aid comparison with existing work thanks to its detailed description of the latest methods and datasets used in this domain.
https://www.scirp.org/journal/paperinformation.aspx?paperid=109149


mai21

The University of Amsterdam (UvA) and the Netherlands Forensic Institute (NFI) started an investigation into the recognition of deepfakes, as well as hidden messages from criminals. With deepfake technology, it is possible to impersonate someone in pictures or videos to the point where the viewer does not realize they are looking at a fake. “It is almost impossible to distinguish real from deepfake videos with the naked eye”, said Professor of Forensic Data, Zeno Geradts. Criminals are increasingly making use of deepfakes. Last month, Dutch, British and Baltic MPs had a conversation with a deepfake imitation of Russian opposition leader, Alexei Navalny. The politicians only realized the deception weeks after their talk occurred. https://nltimes.nl/2021/05/22/uva-nfi-launch-study-recognition-deepfakes


mai21
A team of researchers at the University of Southern California has released a report that suggests that deepfake detection systems have many of the same biases as more traditional facial recognition systems. In their report, the researchers evaluated three different deepfake detection systems, each of which was trained with the FaceForensics++ dataset and each of which was reputed to be able to identify deepfake videos with a high accuracy rate. https://findbiometrics.com/usc-researchers-warn-about-racial-bias-deepfake-detection-systems-051208/


mai21
Thus, new mechanisms and techniques to detect and filter out such deepfakes are the need of the hour. This paper exploits two powerful deep-learning-based CNN architectures, Inception-ResNet-v2 and XceptionNet, for detecting deepfakes. The proposed approach not only outshines existing approaches in terms of efficiency and accuracy, but is also the best in terms of space and time complexity.
https://ieeexplore.ieee.org/abstract/document/9418477 

mai21

Fawkes is a free tool, developed by researchers in the US, that uses algorithms to slightly alter a photograph and prevent other algorithms from recognizing who appears in the image.  Fawkes: “Cloaking” Your Photos to Elude Facial Recognition Systems http://mastersofmedia.hum.uva.nl/blog/2020/09/27/fawkes-cloaking-your-photos-to-get-away-from-the-facial-recognition-system/

may21

Now the U.S. Army is introducing a lightweight deepfake detection method to preempt the national security concerns that will arise from the technology. “Due to the progression of generative neural networks, AI-driven deepfake advances so rapidly that there is a scarcity of reliable techniques to detect and defend against deepfakes,” explained C.-C. Jay Kuo, a professor of electrical and computer engineering at the University of Southern California. “There is an urgent need for an alternative paradigm that can understand the mechanism behind the startling performance of deepfakes and develop effective defense solutions with solid theoretical support.” https://www.datanami.com/2021/05/03/u-s-army-employs-machine-learning-for-deepfake-detection/ + https://www.helpnetsecurity.com/2021/05/07/defakehop-deepfake-detection-method/


abr21
A patent application published by Sony on April 22, 2021, reveals that the company is attempting to develop AI that will detect if a video has been deepfaked or otherwise tampered with. The patent application categorizes deepfake technology as “interesting and entertaining but potentially sinister.” However, there are methods for delving into an image or video and determining whether it has been altered, because certain artifacts or irregularities get left behind. These artifacts might include lighting or texture irregularities in a faked image that are not present in the original. Some of these might be discernible to a trained eye; others are undetectable unless processed through a neural network or AI program.
https://gamerant.com/sony-patent-anti-deepfake-videos/


mar21
DARPA:

In response to the rise in media manipulation, DARPA has taken on many technologically complex projects to identify manipulated media. Most notably, DARPA’s Media Forensics program invested in developing a quantitative integrity score for the authenticity of images and videos. “We framed our approaches in the Media Forensics program … around three levels of integrity,” Turek said, “Digital, physical and semantic integrity.” https://ndsmcobserver.com/2021/03/lecture-explores-deepfakes-media-manipulation/

Mar21
New Deepfake Spotting Tool Proves 94% Effective – Here’s the Secret of Its Success. https://scitechdaily.com/new-deepfake-spotting-tool-proves-94-effective-heres-the-secret-of-its-success/ Deepfakes can be detected by analyzing light reflections in eyes, scientists say. https://www.cnet.com/news/deepfakes-can-be-detected-by-analyzing-light-reflections-in-eyes-scientists-say/


fev21
Finally, there is now a way to identify whether the person in an image is real or not, thanks to an online tool built by researchers from Sensity, an Amsterdam-based visual threat intelligence company. https://www.digitalinformationworld.com/2021/02/this-new-tool-now-makes-it-easy-for.html


jan21
A new software programme that could make the internet a safer place has culminated in its developer Greg Tarr winning the 2021 BT Young Scientist & Technology Exhibition (BTYSTE).
The 17-year-old Leaving Certificate student at Bandon Grammar School in Co Cork used artificial intelligence to develop his system that detects “deepfake” videos, which have caused havoc on social media channels. It is quicker and more accurate than many of the state-of-the-art detection systems, the judges found.
https://www.irishtimes.com/news/science/young-scientist-cork-student-wins-with-programme-to-detect-deepfakes-1.4453484


dez20

There are two major categories of DeepFake detection tools:

  • Pattern Analysis - observing and analyzing the behavior of the people in a video, learning their patterns, from hand gestures to pauses in speech, and comparing them to real-life patterns. This approach has the advantage of potentially working even if the video quality itself is essentially perfect.
  • Video Quality Analysis - analyzing the differences between deepfakes and real videos. Most deepfake videos are created by merging individually generated frames into videos. By extracting the essential data from the faces in individual frames and tracking them through sets of consecutive frames, one can detect inconsistencies in the flow of information from one frame to the next. The same idea can also be applied to fake audio detection.

As in most technology-heavy markets, end users are debating whether to work with an external vendor or to rely on internally developed capabilities and existing OSINT monitoring tools redirected to a dedicated team. The report contains a detailed list and short profiles of the companies that provide counter-deepfake and fake-news monitoring and detection solutions. Many of these companies are startups, such as Cheq, Metafact, Cyabra, Falso Tech, Sensity, and others. Large and mature corporations are also active in the market, some via M&A activity and others developing solutions internally; such players include Axon, Microsoft, Facebook, Twitter and others. LINK
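The frame-flow analysis described above can be illustrated with a toy numpy sketch. This is purely hypothetical: the threshold and the raw-pixel delta are stand-ins, since real detectors track learned face features rather than pixel means.

```python
import numpy as np

def temporal_inconsistency(frames, threshold=3.0):
    """Flag frames whose change from the previous frame is an outlier.

    frames: array of shape (T, H, W) holding grayscale face crops.
    Returns indices of suspicious frames. The threshold is in robust
    z-scores (a hypothetical value; real systems learn it from data).
    """
    # Mean absolute pixel change between each pair of consecutive frames.
    deltas = np.abs(np.diff(frames.astype(float), axis=0)).mean(axis=(1, 2))
    # Robust statistics so one spliced frame does not mask itself.
    median = np.median(deltas)
    mad = np.median(np.abs(deltas - median)) + 1e-9
    scores = (deltas - median) / mad
    return [i + 1 for i, s in enumerate(scores) if s > threshold]
```

A smoothly varying clip yields no flags, while an abrupt splice between two frames produces a large outlier score at the splice point.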

abril20
(joint solutions)


Jan21
Physicists???
https://www.freethink.com/articles/generative-adversarial-networks


“There is definitely more in the way of resources and research going into the generation side of things, and at the moment we definitely see a lack of balance,” said Henry Ajdar, head of threat intelligence at Deeptrace, the first company to bring a deepfake detector to market. “One thing we find quite frustrating at times and we hope will change is that companies who are developing new forms of AI-generated synthetic media aren’t necessarily providing people building tools to detect that media with privileged access to data that we could use to train our models. I think for the time being that adversarial dynamic is here to stay.”

Some hope the arms race could be ended by approaching the problem from the other direction: authenticating legitimate videos. The main player in this space is Amber, which has created a blockchain-based video authentication system. The Amber system generates hashes from a video based on encoded data, which are stored on the Ethereum blockchain with associated timestamps. Comparing these hashes to those generated from another version of the video (such as a short clip from hours-long police bodycam footage) confirms whether it is identical or if it has been manipulated.
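The hash comparison Amber performs can be sketched as follows. The fixed-size chunking and the in-memory hash list are stand-ins for Amber's actual segment encoding and the Ethereum contract, which the article does not detail:

```python
import hashlib

def segment_hashes(video_bytes, chunk_size=1 << 20):
    """Hash a video in fixed-size chunks, mimicking per-segment fingerprints."""
    return [
        hashlib.sha256(video_bytes[i : i + chunk_size]).hexdigest()
        for i in range(0, len(video_bytes), chunk_size)
    ]

def verify_clip(clip_hashes, ledger_hashes):
    """A clip checks out if its hashes appear as a contiguous run in the ledger."""
    n = len(clip_hashes)
    return any(
        ledger_hashes[i : i + n] == clip_hashes
        for i in range(len(ledger_hashes) - n + 1)
    )
```

A short excerpt of the original footage reproduces a contiguous run of ledger hashes, while even a one-byte edit changes a chunk's SHA-256 digest and breaks the match.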
Experts seem to agree that a multi-pronged approach is necessary: “The solution should not be one solution but a mix of different policies from different actors involved,” said Dr Elena Abrusci, an expert on technology and human rights at the University of Essex. “There should be more transparency and efforts from the tech companies. We believe [media literacy] is important but we also don’t think it’s the solution to all problems, we don’t want to put the burden on the individual so it’s just their responsibility to detect deepfakes.” LINK

dez20

The results show that although software to detect this type of fake has been created, it is not yet freely and openly available.
Digital moral literacy for the detection of deepfakes and audiovisual fakes
Víctor Cerdán Martínez; María Luisa García Guardia; Graciela Padilla Castillo
Reviewed: 17/04/2020 / Accepted: 25/05/2020

nov20

Researchers say they have found an efficient way for an AI algorithm like those used in biometrics to judge how confident it is with its decisions. The technique, which reportedly does not impact a model’s performance, could also be used to spot deepfakes.

The software can quickly report its decision, but also the confidence it has in the underlying input data and in the decision itself.

Armed with this information, users can decide in real time if they need to rework their model to get better quality output, according to researchers from the Massachusetts Institute of Technology and Harvard University. https://www.biometricupdate.com/202011/practical-way-to-have-ai-flag-its-own-uncertainty-reported-could-be-used-to-spot-deepfakes 



nov20
STILL NOT WORKING

"It’s a good thing that the 2020 election wasn’t swarmed by deepfakes, because attempts to automatically detect AI-generated video haven’t yet been very successful. When Facebook challenged researchers to create code to spot deepfakes, the winner, announced in June, missed more than a third of deepfakes in its test collection. When WIRED tested a prototype deepfake detector from the AI Foundation, results were mixed. The foundation’s Reality Defender service presents an easy-to-use interface for deepfake detection algorithms developed by Microsoft and leading AI labs. When asked to scrutinize fake Kim Jong-un, deepfake detectors from Microsoft and Technical University of Munich saw nothing awry, and Reality Defender reported “unlikely manipulated video.” The same happened with a fake video of President Trump created by graphics researcher and deepfake maker Hao Li. A deepfake that pasted the face of Elon Musk onto a journalist was flagged—but so was the unmanipulated original video. Hayden, of the AI Foundation, says it is adding new detection algorithms and experimenting with different ways to display their output. One recent addition, from the University of California, Berkeley, saw through the fake Kim ad. Another being tested, from a company founded by professors at the University of California Santa Barbara, is good at spotting signs of manipulation in the background of clips, she says, and is being tested on videos purporting to show suspicious activity during vote counting last week. The foundation is also thinking about how detectors can be kept up-to-date as deepfakes evolve, perhaps by maintaining ever-growing collections of AI falsity to retrain the fake-flagging algorithms."

https://www.wired.com/story/what-happened-deepfake-threat-election/



nov20
A video of dinosaurs roaming the streets. Is this real or fake?
'KAICATCH', a new type of software, developed by researchers from the Korea Advanced Institute of Science and Technology, can determine exactly that.
The software was recently put into practical use for the first time in South Korea, and only the second time globally. It can find every minor change, such as a copy, paste or delete, by analyzing the pixels. The green part indicates that a section has been edited. "The previous software could not detect random images. Out of 100 pictures, the accuracy was 5 to 10 percent. However, this new software has 70 to 80 percent accuracy." http://www.arirang.co.kr/News/News_View.asp?nseq=267974

nov20

Top AI-Based Tools & Techniques For Deepfake Detection
https://analyticsindiamag.com/top-ai-based-tools-techniques-for-deepfake-detection/


out20
Increasingly easy to produce, deepfakes are the subject of a great deal of research. At the “Cybersec & AI 2020” conference, Hany Farid, a professor at UC Berkeley, detailed two interesting techniques he recently co-developed. To detect these impersonations quickly, the first idea is to identify a person to be protected through their facial movements. The second technique more specifically targets “lip-sync” deepfakes, in which the mouth area is modified by a neural network to match an alternative script. Hany Farid and his colleagues devised a technique that analyzes the consistency between phonemes and visemes, that is, between the sounds and the elementary facial expressions that appear during speech. https://www.01net.com/actualites/ces-chercheurs-debusquent-les-deepfakes-grace-aux-mouvements-incoherents-du-visage-1988579.html

out20
Although a lot of effort has been devoted to detecting deepfakes, performance drops significantly on previously unseen but related manipulations, and detection generalization capability remains a problem. Motivated by the fine-grained nature and spatial locality characteristics of deepfakes, we propose the Locality-Aware AutoEncoder (LAE) to bridge the generalization gap. In the training process, we use a pixel-wise mask to regularize the local interpretation of LAE, forcing the model to learn an intrinsic representation of the forgery region instead of capturing artifacts in the training set and learning superficial correlations to perform detection. We further propose an active learning framework to select challenging candidates for labeling, which requires human masks for less than 3% of the training data, dramatically reducing the annotation effort needed to regularize interpretations. https://dl.acm.org/doi/abs/10.1145/3340531.3411892


out20
Detecting Deepfakes uploaded to the internet will become a crucial task in the coming years as Deepfakes continue to improve. An undetected Deepfake video could have immense negative consequences for our society because of the pace and spread of information consumption. A general Deepfake detector could be applied to videos across information platforms to identify whether a video is authentic. Wanting to tackle this problem, I attempted to create a general detector using Deep Learning techniques. I was able to improve on the accuracy of today’s best detection models on this particular dataset using two different models. In addition, I implemented a recurrent network inspired by other authors to evaluate its generalizability on the multi-faker dataset I was working with. A generalized Deepfake detector was not found.
In Search of a Generic Deepfake Detector
Sigurthor Bjorgvinsson (APOIOS)


out20

Truepic has developed a technology that is embedded in a smartphone. It generates a digital signature and cryptographically-sealed provenance information for the images and videos captured with the phone. The technology authenticates information such as the 3D depth map, date and time, geolocation, and the times the image/video was captured and edited, to determine whether the image has been manipulated. The company has partnered with Qualcomm, embedding its Truepic Foresight technology in the Snapdragon 865 5G Mobile Platform, where it takes advantage of the chipset’s underlying hardware security. https://tech.hindustantimes.com/tech/news/your-smartphone-would-soon-be-able-to-identify-deepfakes-even-before-they-are-made-71603015963763.html + https://www.webpronews.com/qualcomm-fighting-misinformation-with-photo-validation-tool/


out20
Researchers at Stanford University and UC Berkeley have devised a programme that uses artificial intelligence (AI) to detect deepfake videos. The programme is said to spot 80% fakes by recognising minute mismatches in the sounds people make and the shape of their mouths, according to the study titled ’Detecting Deep-Fake Videos from Phenome-Viseme Mismatches’. https://www.thehindu.com/sci-tech/technology/artificial-intelligence-helps-detect-deepfake-videos/article32885488.ece

out20
McAfee, the device-to-cloud cybersecurity company, today announced the launch of the McAfee Deepfakes Lab to provide traditional and social media organizations advanced Artificial Intelligence (AI) analysis of suspected deepfake videos. These videos could be used to spread reputation-damaging lies about individuals and organizations during the 2020 U.S. election season and beyond. https://apnews.com/press-release/business-wire/technology-business-science-corporate-news-north-america-9fb5653a4aa844df8f9cd647c3395c48


out20

The Idiap Research Institute has posted an announcement on their site looking for qualified candidates for a Ph.D. position researching ‘Deepfake detection and attribution.’ The project will be developed in Dr. Sebastien Marcel’s Biometric Security and Privacy lab at Idiap, and will focus on one class modeling, spatio-temporal learning, few-shot learning, and adversarial training. https://www.biometricupdate.com/202010/idiap-expanding-its-deepfake-detection-and-attribution-research-team


out20

An investigative journalist receives a video from an anonymous whistleblower. It shows a candidate for president admitting to illegal activity. But is this video real? If so, it would be huge news – the scoop of a lifetime – and could completely turn around the upcoming elections. But the journalist runs the video through a specialized tool, which tells her that the video isn’t what it seems. In fact, it’s a “deepfake,” a video made using artificial intelligence with deep learning. Journalists all over the world could soon be using a tool like this. In a few years, a tool like this could even be used by everyone to root out fake content in their social media feeds. As researchers who have been studying deepfake detection and developing a tool for journalists, we see a future for these tools. They won’t solve all our problems, though, and they will be just one part of the arsenal in the broader fight against disinformation. https://theconversation.com/in-a-battle-of-ai-versus-ai-researchers-are-preparing-for-the-coming-wave-of-deepfake-propaganda-146536


out20
research for the DeFake project – a joint RIT/University of South Carolina project to develop an advanced, for-journalists-only deepfake detection software program.
https://www.wvik.org/post/deepfakes-how-one-reporter-fared-trying-outthink-misininformation#stream/0; Together, they are building software designed specifically to help journalists ferret out deepfakes – videos so believable that even those charged with vetting reality could inadvertently share them. https://www.southcarolinapublicradio.org/post/deepfakes-how-usc-fighting-stay-ahead-misinformation


set20
Researchers show how a person’s heartbeat can reveal whether a video is real.

Still, some recent research promises to give the upper hand to the fake-detecting cats, at least for the time being. This work, done by two researchers at Binghamton University (Umur Aybars Ciftci and Lijun Yin) and one at Intel (Ilke Demir), was published in IEEE Transactions on Pattern Analysis and Machine Intelligence this past July. In an article titled “FakeCatcher: Detection of Synthetic Portrait Videos using Biological Signals,” the authors describe software they created that takes advantage of the fact that real videos of people contain physiological signals that are not visible to the eye. https://spectrum.ieee.org/tech-talk/computing/software/blook-circulation-can-be-used-to-detect-deep-fakes
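The kind of biological signal FakeCatcher relies on, remote photoplethysmography, can be illustrated with a crude spectral check: average the green channel over a skin patch in each frame, then measure how much of the signal's power falls in the plausible human heart-rate band. This is a toy sketch, not the authors' pipeline:

```python
import numpy as np

def pulse_band_power(green_means, fps=30.0, band=(0.7, 4.0)):
    """Fraction of spectral power in the heart-rate band (roughly 42-240 bpm).

    green_means: per-frame mean green intensity over a skin patch.
    Real faces tend to show a clear peak in this band; many synthetic
    faces do not carry a coherent pulse signal.
    """
    signal = np.asarray(green_means, dtype=float)
    signal = signal - signal.mean()          # remove the DC offset
    power = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    total = power[1:].sum() + 1e-12          # skip the DC bin
    return power[in_band].sum() / total
```

A synthetic 72 bpm pulse concentrates nearly all its power in the band, while white noise spreads its power across the whole spectrum.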


set20
How to use Microsoft Video Authenticator on deepfakes https://tecnoblog.net/365145/como-usar-o-microsoft-video-authenticator-em-deepfakes/

set20

Estonia-based Sentinel, which is developing a detection platform for identifying synthesized media (aka deepfakes), has closed a $1.35 million seed round from some seasoned angel investors — including Jaan Tallinn (Skype), Taavet Hinrikus (TransferWise), Ragnar Sass & Martin Henk (Pipedrive) — and Baltics early-stage VC firm, United Angels VC. The challenge of building tools to detect deepfakes has been likened to an arms race — most recently by tech giant Microsoft, which earlier this month launched a detector tool in the hopes of helping pick up disinformation aimed at November’s U.S. election. “The fact that [deepfakes are] generated by AI that can continue to learn makes it inevitable that they will beat conventional detection technology,” it warned, before suggesting there’s still short-term value in trying to debunk malicious fakes with “advanced detection technologies.” https://techcrunch.com/2020/09/14/sentinel-loads-up-with-1-35m-in-the-deepfake-detection-arms-race/?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAACcMCMBeLG5vge68osBo4wTU_w8Q9lGR2-i4jJLzyUquwRkzHfOJkcWMAArUlpQTDnfeRm8_mF5PuerOnXqbf0XgLkeL3wOk98WF-Dh8VHujSfgRWVthbA1E1-rfMaoU7biJAZpYSRz6g88hv2yCTS38rmbnKsXspIqiPbLyVjC7


set20

Banks and fintech groups are entering partnerships with tech firms to tackle the use of fraudulent 'deepfake' content in biometric ID systems. The financial institutions are teaming up with identification startups like Mitek and iProov, according to The Financial Times. https://www.itpro.co.uk/security/cyber-security/357015/banks-and-fintech-firms-using-tech-firms-to-fight-deepfake-fraud 


set20

A system looks for heartbeats to identify deepfakes.  LINK


set20
Microsoft has just revealed Video Authenticator, a technology that uses a proprietary algorithm to verify the authenticity of a clip. According to the company, the tool can analyze clips and photographs, paying attention to elements that would be invisible to the human eye. Based on this analysis, it provides a “confidence score” so users can decide for themselves whether the content should be believed. Microsoft says the algorithm is highly effective at detecting deepfakes. https://canaltech.com.br/software/microsoft-cria-algoritmo-que-detecta-videos-manipulados-e-deepfakes-170916/ + https://blogs.microsoft.com/on-the-issues/2020/09/01/disinformation-deepfakes-newsguard-video-authenticator/ +
Microsoft unveils a weapon to combat ‘deepfakes’ in the US elections https://www.explica.co/microsoft-unveils-a-weapon-to-combat-deepfakes-in-the-us-elections-technology/

ago20
A team of researchers from the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) has proposed a new model designed to spot deepfakes by looking at subtle visual artifacts, such as textures in hair, backgrounds, and faces, and visualizing the image regions where it has detected manipulations. https://syncedreview.com/2020/08/26/detecting-deepfakes-mit-csail-model-identifies-manipulations-using-local-artifacts/ An MIT project reportedly has found that sophisticated deepfakes can be detected. In a finding that will be cheered by anyone concerned about a world in which nothing but face-to-face can be trusted, MIT researchers found that even the best deepfake systems leave behind telltale artifacts. That trick was pulled off in the school’s Computer Science and Artificial Intelligence Lab, where many biometrics advances, including in facial recognition, are being produced. https://www.biometricupdate.com/202009/a-good-closeup-can-detect-the-most-clever-ai-deepfakes-today

ago20
Now Adobe is hoping the tool can protect us all from faked images that are a little too real for comfort. The company is working with high-profile partners like Twitter and the New York Times on a Content Authenticity Initiative that could add new levels of metadata to an image to certify it’s the real deal, Wired reports. https://www.trustedreviews.com/news/new-adobe-photoshop-feature-could-save-the-world-from-deepfakes-4045421 + https://tecnoblog.net/359428/adobe-trabalha-em-tag-que-identifica-fotos-falsificadas-via-photoshop/ At the company’s annual Max conference, it unveiled a prototype tool named Project Morpheus that demonstrates both the potential and the problems of integrating deepfake techniques into its products.

Project Morpheus is basically a video version of the company’s Neural Filters, introduced in Photoshop last year. These filters use machine learning to adjust a subject’s appearance, tweaking things like their age, hair color, and facial expression (to change a look of surprise into one of anger, for example). Morpheus brings all those same adjustments to video content while adding a few new filters, like the ability to change facial hair and glasses. Think of it as a character creation screen for humans. https://www.theverge.com/2021/10/27/22748508/adobe-deepfake-tool-max-project-morpheus


ago20
Biometrics may be the best way to protect society against the threat of deepfakes, but new solutions are being proposed by the Content Authenticity Initiative and the AI Foundation. Deepfakes are the most serious criminal threat posed by artificial intelligence, according to a new report funded by the Dawes Centre for Future Crime at University College London (UCL), which lists the top 20 worries for criminal facilitation in the next 15 years. The study is published in the journal Crime Science, and ranks the 20 AI-enabled crimes based on the harm they could cause. https://www.biometricupdate.com/202008/deepfakes-declared-top-ai-threat-biometrics-and-content-attribution-scheme-proposed-to-detect-them


jul20

Cyabra, a data visualization software company that uncovers disinformation and empowers brands, today unveiled its latest innovation in fighting the spread of disinformation. Protecting global brands, international media outlets and the public sector, Cyabra’s solution uses advanced technology to identify fake profiles, manipulated images and deepfakes transmitted across the digital realm to root out bad actors. Using machine learning infused with NLP, Cyabra identifies behavioral patterns typical of fake profiles such as bots, trolls and sockpuppets. This powerful lens provides brands and media with the resources to understand narratives, discover trends and uncover more-in-depth insight into what drives consumers, helping companies make smarter decisions. The solution also uncovers bad actors bent on influencing public opinion and election campaigns. https://www.businesswire.com/news/home/20200728005750/en/Cyabra-Launches-New-Era-Fight-Disinformation-Deepfakes

jul20
Deepfakes are becoming more authentic owing to the interaction of two computer algorithms to create perfect 'fake' images and videos, and humans are simply unable to gauge which is real or not. Researchers now propose a new method called 'frequency analysis' that can efficiently expose fake images created by computer algorithms. "In the era of fake news, it can be a problem if users don't have the ability to distinguish computer-generated images from originals," said Professor Thorsten Holz from the Chair for Systems Security at Ruhr-Universitat Bochum in Germany. Deepfake images are generated with the help of computer models, so-called Generative Adversarial Networks (GANs). https://www.sentinelassam.com/business/scientists-put-machines-on-job-to-decide-whether-image-is-a-fake-or-not-489631

jul20
At Deeptrace, we have seen a sharp increase in the number of these images being deployed as part of malicious activities, with notable examples contributing to fraud, espionage, and coordinated disinformation operations. The potential damages these activities could cause individuals and organisations are significant, and, unlike some other forms of deepfakes, are already an established problem. As the first-to-market deepfake detection product, Deeptrace’s RESTful API provides the leading automated solution for detecting deepfakes, including the images of fake faces generated by StyleGAN. It is powered by Deeptrace’s proprietary deep learning technology to identify “unnatural fingerprints” left in pixels by the generators. If a face is present in the input image, the API indicates whether it was generated by a GAN, together with a confidence score for the analysis. Additionally, the detector can also attribute a fake image to a specific implementation of a GAN, e.g. a StyleGAN2. ??? LINK


jul20
Previous attempts to do this have focused on statistical methods. A new approach (well, new in its application to deepfakes) focuses on something called discrete cosine transform, first invented in 1972 for use in the signal processing community. This frequency analysis technology examines deepfakes, which are created using machine-learning models called generative adversarial networks (GANs), to find specific artifacts in the high-frequency range. The method can be used to determine whether an image has been created using machine learning techniques. “We chose a different approach than previous research by converting the images into the frequency domain using the discrete cosine transformation,” Joel Frank from the Horst Görtz Institute for IT Security at Ruhr-Universität Bochum told Digital Trends. “[As a result, the images are] expressed as the sum of many different cosine functions. Natural images consist mainly of low-frequency functions. In contrast, images generated by GANs exhibit artifacts in the high-frequency range — for example, a grid structure emerges in the frequency domain. Our approach detects these artifacts.” https://www.digitaltrends.com/news/researchers-new-way-spot-deepfakes/
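A minimal numpy version of this frequency check looks like the following. The hand-rolled orthonormal DCT-II, the cutoff, and the energy-ratio score are toy choices for illustration; the published method feeds the full spectrum to a trained classifier rather than thresholding a single ratio:

```python
import numpy as np

def dct2(img):
    """Orthonormal 2D DCT-II via a cosine basis matrix (square images only)."""
    n = img.shape[0]
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0] *= 1 / np.sqrt(2)   # DC row normalization
    basis *= np.sqrt(2 / n)
    return basis @ img @ basis.T

def high_freq_ratio(img, cutoff=0.5):
    """Share of spectral energy in the high-frequency region (i + j large),
    where GAN-generated images tend to exhibit grid-like artifacts."""
    energy = dct2(img.astype(float)) ** 2
    n = energy.shape[0]
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    high = (i + j) > cutoff * 2 * n
    return energy[high].sum() / energy.sum()
```

A smooth gradient keeps almost all its energy in low frequencies, while a checkerboard (standing in for a grid artifact) puts roughly half of its energy in the high-frequency corner.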



jul20
Using deep neural networks (a machine learning technique), it’s become increasingly easy to convincingly manipulate images and videos of people by doctoring their speech, movements, and appearance. In response, researchers have created an algorithm that generates an adversarial attack against facial manipulation systems in order to corrupt and render useless attempted deepfakes. The researchers’ algorithm allows users to protect media before uploading it to the internet by overlaying an image or video with an imperceptible filter. https://www.futurity.org/deepfake-videos-protective-filter-2397732/
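The underlying mechanism, a perturbation that is tiny in pixel space but large in a model's view, can be sketched with the fast gradient sign method against a stand-in linear model. This is only the core idea; the research described here attacks real facial-manipulation networks, whose gradients must be computed through the whole network:

```python
import numpy as np

def fgsm_cloak(image, weights, eps=0.01):
    """Fast-gradient-sign perturbation against a stand-in linear score w . x.

    For a linear model the gradient w.r.t. the input is just `weights`,
    so the attack adds eps * sign(w): imperceptible per pixel, yet it
    shifts the model's output by the maximum amount an eps-ball allows.
    """
    perturbation = eps * np.sign(weights)
    return np.clip(image + perturbation, 0.0, 1.0)
```

The cloaked image differs from the original by at most eps per pixel, while the linear score moves by nearly eps times the L1 norm of the weights, which is the worst case for the model.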

jul20

Fijoy Vadakkumpadan, R&D manager at SAS, adds that in the wrong hands, however, deepfakes can help create fake news at an entirely new level. Nonetheless, he points out, if you know what to look for, there are telltale signs that indicate whether something is actually a deepfake. “Remember that most often, deepfakes are used in order to engender strong emotions within the reader or viewer, such as anger or fear. Analytical tools are able to search for the signs that indicate something is amiss, and as these indicators are uncovered, they can be presented to readers as they consume the news.” https://www.itweb.co.za/content/WnpNgM2K3OjqVrGd

jul20

A pair of developments are being reported in efforts to thwart deepfake video and audio scams. Unfortunately, in the case of digitally mimicked voice attacks, the advice is old school. An open-access paper published by SPIE, an international professional association for optics and photonics, reports on a new algorithm that reportedly scored a precision rate of 99.62 percent in detecting deepfake video, and was accurate 98.21 percent of the time. https://www.biometricupdate.com/202007/deepfakes-some-progress-in-video-detection-but-its-back-to-the-basics-for-faked-audio


JUL20
The use of blockchain for deepfakes detection can help reduce instances of the spread of misinformation by helping verify the authenticity of videos. https://www.bbntimes.com/technology/can-blockchain-prevent-deepfakes
out20

A potential fix is to use the blockchain. A blockchain is a distributed ledger that lets you store data online without the need for centralized servers, and blockchains are resilient against a host of security threats that centralized data stores are vulnerable to. Distributed ledgers are not yet well suited to storing large amounts of data, but they are ideal for storing hashes and digital signatures. For example, individuals could use the blockchain to digitally sign and confirm the authenticity of a video or audio file associated with them. The more individuals add their digital signature to that video, the more likely it is to be considered a genuine record. This is not a perfect solution: it will require additional measures to weigh the credibility of the individuals who vouch for a document. https://www.analyticsinsight.net/best-ways-prevent-deepfakes/
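The scheme above can be sketched concretely: the full video stays off-chain, while only its SHA-256 hash and a list of endorsements are stored on the (here, simulated) ledger. This is a didactic sketch under stated assumptions: HMAC stands in for a real public-key signature, and the `ledger` dict stands in for an actual blockchain.

```python
import hashlib
import hmac

def fingerprint(video_bytes: bytes) -> str:
    """Compact on-chain identifier for an off-chain media file."""
    return hashlib.sha256(video_bytes).hexdigest()

ledger = {}  # fingerprint -> list of endorsements (simulated blockchain)

def endorse(video_bytes: bytes, signer_key: bytes) -> None:
    """Record one person's signature over the file's fingerprint."""
    fp = fingerprint(video_bytes)
    sig = hmac.new(signer_key, fp.encode(), hashlib.sha256).hexdigest()
    ledger.setdefault(fp, []).append(sig)

def credibility(video_bytes: bytes) -> int:
    """More distinct endorsements -> more likely a genuine record."""
    return len(ledger.get(fingerprint(video_bytes), []))

original = b"raw video bytes..."
endorse(original, b"alice-key")
endorse(original, b"bob-key")
tampered = original + b"\x00"   # any edit changes the hash entirely
print(credibility(original), credibility(tampered))  # 2 0
```

The key property is the last line: even a one-byte alteration produces a different fingerprint, so a deepfaked copy inherits none of the original's endorsements.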

Jan21
Blockchain is a weapon for digital identity and against deepfakes: interview with Jun Li of Ontology
https://beincrypto.com.br/blockchain-e-arma-para-identidade-digital-e-deepfakes-entrevista-com-jun-li-da-ontology/


Jun20

Facebook contest shows just how hard it is to detect deepfakes.
Facebook has revealed the winners of a competition for software that can detect deepfakes, doctored videos created using artificial intelligence. And the results show that researchers are still mostly groping in the dark when it comes to figuring out how to automatically identify them before they influence the outcome of an election or spark ethnic violence. The best algorithm in Facebook’s contest could accurately determine if a video was real or a deepfake just 65% of the time. https://fortune.com/2020/06/12/deepfake-detection-contest-facebook/ +https://ai.facebook.com/blog/deepfake-detection-challenge-results-an-open-initiative-to-advance-ai/



21/6
Reuters, the world’s largest multimedia news provider, today announced the expansion of its award-winning e-learning course on helping newsrooms around the world spot deepfakes and manipulated media in 12 additional languages. https://www.reuters.com/article/rpb-fbdeepfakecourselanguages/reuters-expands-deepfake-course-to-16-languages-in-partnership-with-facebook-journalism-project-idUSKBN23M1QY

mai20

New Blockchain Marketplace Aims to Tackle Morality Issues of Deepfake Media. Cointelegraph interviewed Arif Khan, CEO of blockchain marketplace Alethea AI, about how to address the legal and moral quagmire that “deepfakes” have created. https://cointelegraph.com/news/new-blockchain-marketplace-aims-to-tackle-morality-issues-of-deepfake-media


mai20
Can blockchain save us from deepfakes? https://decrypt.co/28258/can-blockchain-save-us-from-deepfakes

ab20
The Israeli startup Cyabra is one of the pioneers in the rapid identification of this kind of digital content. Thanks to its technology, fake videos can be removed before they go viral. Cyabra CEO Dan Brahmy said there are two ways to train a computer algorithm to analyze the authenticity of a video. http://www.aurora-israel.co.il/startup-israeli-detecta-videos-manipulados-por-expertos

ab20
Fighting AI with AI? https://nocamels.com/2020/04/robot-vs-robot-can-ai-fight-fake-news/

abr20
Researchers at the University of Notre Dame are working on a project to combat disinformation online, including media campaigns to incite violence, sow discord, and meddle in democratic elections.  The team of researchers relied on artificial intelligence (AI) to develop an early warning system. The system will be able to identify manipulated images, deepfake videos, and disinformation online. It is a scalable, automated system that uses content-based image retrieval. It can then apply computer-vision based techniques to identify political memes on multiple social media networks https://www.unite.ai/early-warning-system-for-disinformation-developed-with-ai/

Mar20
Researchers are using artificial intelligence (AI) to develop an early warning system that can identify manipulated images, deepfake videos and disinformation online ahead of the 2020 US election.
The project is an effort to combat the rise of coordinated social media campaigns to incite violence, sow discord and threaten the integrity of democratic elections.
According to the study, published in the journal Bulletin of the Atomic Scientists, the scalable, automated system uses content-based image retrieval and applies computer vision-based techniques to root out political memes from multiple social networks. LINK
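Content-based image retrieval of the kind described above often rests on perceptual hashing: near-duplicate images hash to nearby bit-strings, so variants of one political meme can be matched across platforms. Below is a generic "average hash" sketch; it is a standard CBIR building block, not the Notre Dame team's actual pipeline, and the synthetic images are assumptions for the demo.

```python
import numpy as np

def average_hash(image: np.ndarray, size: int = 8) -> np.ndarray:
    """Block-average down to size x size, then threshold at the mean."""
    h, w = image.shape
    small = image[:h - h % size, :w - w % size] \
        .reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Number of differing bits between two hashes."""
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(1)
meme = rng.random((64, 64))
# A re-encoded/reposted copy: same content plus slight pixel noise.
reposted = np.clip(meme + rng.normal(0, 0.02, meme.shape), 0, 1)
unrelated = rng.random((64, 64))

print(hamming(average_hash(meme), average_hash(reposted)))   # small
print(hamming(average_hash(meme), average_hash(unrelated)))  # roughly half of 64
```

Indexing these 64-bit hashes makes the system scalable: matching a new upload against millions of known memes reduces to a nearest-neighbor search in Hamming space.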
This Deepfake Detector May Be Critical for Deciding What’s Real Online (LINK)
Researchers at the University of California, Riverside have developed a computer system that has been trained to recognize altered images at the pixel level with extreme precision, combating these very real-looking deepfakes. (LINK)
29/10/19 NEW “Unsurprisingly, there is a rising number of newly formed startups claiming to be able to tackle the problem of deepfakes.” (LINK)
The Knight Foundation, a freedom of speech advocate, donated $5 million to UW and $50 million in total to similar organizations across the country for this very purpose. LINK 
Does Adobe detect them?

Mar20 Videos manipulated using artificial intelligence are increasingly found on the internet, and the technology poses risks to society. Swiss engineers are developing technologies to detect and combat these manipulations online. https://www.swissinfo.ch/por/manipula%C3%A7%C3%B5es-na-m%C3%ADdia_como-pesquisadores-su%C3%AD%C3%A7os-tentam-reconhecer--fake-news-/45598306

Fighting back with technology:
Such giveaways include low-level pixelation patterns that a machine learning algorithm could spot, says Mr Turek, as well as inconsistencies in light, shadow and geometry. Factual analysis can also be performed, such as comparing outside images or video with data about the weather or the sun angle at the time. However, researchers acknowledge that malicious actors will evolve their tactics, leading to a cat-and-mouse game. LINK

5/11/2019 Adobe, Twitter, NYT launch effort to fight deepfakes:  Adobe, Twitter and the New York Times are proposing a new industry effort designed to make clear who created a photo or video and what changes have been made (LINK)

14/11/2019 New technique for detecting deepfake videos
Jan20 CertiVid is an upcoming video-certification platform to aid the public in the fight against misinformation. Misinformation can come in many forms and typically has the intent of sowing chaos and disorder among the public. LINK

Similar to how a deepfake is created, Amber Video uses artificial intelligence and signal processing to identify whether an audio or video file has been maliciously altered. The two-year-old company has attracted a loyal client base, mostly journalists, Allibhai said. However, he’s not optimistic that the fighting-fire-with-fire approach will win out in the long run (LINK)
Mar20 Deepfake images of people look real. They pose in realistic settings and, in the case of videos, can emote almost naturally. However, everything about deepfakes is synthetic – just a series of codes that come together to form an image of a person who doesn’t exist. Freddie Witherden, an assistant professor in the Department of Ocean Engineering at Texas A&M University, wrote in a recent paper that the images and bots behind the falsified faces are not without fault. https://today.tamu.edu/2020/03/06/detecting-deepfakes/

26/11/2019 What journalism can do; journalists' fears; the impact on journalism: https://www.niemanlab.org/2019/11/what-should-newsrooms-do-about-deepfakes-these-three-things-for-starters/

REGULATING USE: Alethea will launch a decentralized network to protect the digital rights of deepfake videos. With the Synthetic Content Network, Alethea is trying to introduce a mechanism to clearly disclose AI-generated content and to allow its use only with the consent of the person being shown. The platform uses blockchain technology to keep records of ownership and usage permission, and to ensure that creators own the rights to the AI software they use. It will be powered by Alethea's native token, which aims to introduce incentives for the various interactions between actors in the ecosystem. LINK
JAN20 Researchers from Digimarc Corporation (NASDAQ: DMRC), inventor of the Digimarc Platform for digital identification and detection, will present details of a system for mitigating the problem of Deepfake news videos using digital watermarking at Electronic Imaging 2020 in Burlingame, CA, on Tuesday, January 28, 2020.  LINK
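The watermarking idea can be illustrated with a toy least-significant-bit (LSB) scheme: embed a known bit pattern at publish time, then check for it later. This is purely didactic and not Digimarc's system; their watermarks survive re-encoding, while this LSB version would not.

```python
import numpy as np

def embed(frame: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write the watermark bits into the LSBs of the first pixels."""
    flat = frame.flatten()  # flatten() copies, so the input is untouched
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(frame.shape)

def verify(frame: np.ndarray, bits: np.ndarray) -> bool:
    """Check whether the expected watermark is still present."""
    return bool(np.all(frame.flatten()[: bits.size] & 1 == bits))

mark = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
frame = np.random.default_rng(2).integers(0, 256, (16, 16), dtype=np.uint8)
signed = embed(frame, mark)
tampered = signed.copy()
tampered[0, 0] ^= 1   # flip one watermarked bit, as an edit would
print(verify(signed, mark), verify(tampered, mark))  # True False
```

The point the Digimarc work makes is the same at a higher level of robustness: any manipulation of a watermarked video disturbs the embedded signal, so a failed verification flags the clip as altered.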

As fears grow that deepfakes will be used to sow discord ahead of the 2020 U.S. presidential election, startups, government agencies, and academics are working to develop a method to combat doctored videos and photographs https://cacm.acm.org/news/238495-deepfakes-trigger-a-race-to-fight-manipulated-photos-videos/fulltext

McAfee is developing a deepfake detection framework, according to the researchers, using computer vision and deep learning techniques. However, no specifics were provided as to whether major social media platforms have begun using tools to spot false content. https://siliconangle.com/2020/02/26/never-ending-war-security-experts-battle-malware-deepfakes-nation-states/
AI: “Machine learning algorithms are great at recognising patterns in large amounts of data. ML can provide a way to distinguish fake audio from real audio by using classification techniques, which work by showing an algorithm large amounts of deepfake and real audio and teaching it to tell the difference in (for example) the frequency composition between the two. For example, by using image classification on the audio spectrograms you can teach an ML model to ‘spot the difference’. However, as far as I am aware no out-of-the-box solution exists yet.” (LINK)
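The spectrogram-classification idea above can be sketched end to end. Everything here is a toy stand-in: synthetic one-second clips instead of a real corpus, an assumed high-frequency synthesis artifact, and a nearest-centroid rule instead of a deep image classifier.

```python
import numpy as np
from scipy.signal import spectrogram

rng = np.random.default_rng(0)
fs = 8000  # sample rate, Hz

def spec_features(audio: np.ndarray) -> np.ndarray:
    """Mean log-power per frequency bin of the clip's spectrogram."""
    _, _, sxx = spectrogram(audio, fs=fs)
    return np.log1p(sxx).mean(axis=1)

def make_clip(fake: bool) -> np.ndarray:
    """One second of toy audio; 'fake' clips get extra HF energy."""
    t = np.arange(fs) / fs
    base = np.sin(2 * np.pi * 220 * t)
    if fake:  # pretend the synthesizer leaves a 3.5 kHz artifact
        base = base + 0.3 * np.sin(2 * np.pi * 3500 * t)
    return base + 0.1 * rng.normal(size=fs)

# "Training": average the features of labeled real and fake clips.
train = [(spec_features(make_clip(f)), f) for f in [False, True] * 20]
real_c = np.mean([x for x, f in train if not f], axis=0)
fake_c = np.mean([x for x, f in train if f], axis=0)

def predict(audio: np.ndarray) -> bool:
    """True if the clip's spectrum is closer to the fake centroid."""
    x = spec_features(audio)
    return bool(np.linalg.norm(x - fake_c) < np.linalg.norm(x - real_c))

print(predict(make_clip(True)), predict(make_clip(False)))
```

Replacing the centroids with a convolutional network over the full 2-D spectrograms is exactly the "image classification on audio" step the quote describes.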


JAN20 Can blockchain help fight deepfakes? LINK

MAR20 The blockchain may have a solution. According to Amy James of Alexandria Labs, one of the fundamental problems of the web is that there is no public index. Today when we search the web, we’re searching a private index. This makes detecting changes to search rankings, or the de-platforming of certain ideas and even individuals, very difficult to determine. LINK

Jan20 Microsoft: researchers at Microsoft and Peking University (China) have developed two quite different face-swapping programs: one that further improves the quality of digitally inserting someone's face into a video, and another that helps detect whether a video is real or has been digitally manipulated (...) In contrast to this technology stands Face X-Ray, an application developed to detect whether a photo is real or has been digitally manipulated, a technology that is proving ever more necessary, since it is increasingly common to create fake social media accounts whose profile photos use AI-synthesized faces. (LINK)

Platforms can't keep up: As the volume of videos and the sophistication of the technology increase, the social and digital platforms will no longer be able to delay or waver in their response. Beyond simple falsehoods, the implications of deepfakes go much further because of what they can represent, and especially misrepresent. (LINK)
The role of the platforms: As of now, Facebook has decided to remain consistent with the application of its own disinformation policy, leaving the fake Zuckerberg video live on Instagram. Companies looking to defend against deepfakes cannot wait for regulators to catch up. It is important to start monitoring your brand's digital presence for any form of impersonation, from early warning of account takeover attempts to detection of spoofed sites that abuse your trademarks and brand. Computer vision, including object detection, leveraging machine learning technologies to achieve processing scale and accuracy, is crucial. (LINK)
12/12/19 Facebook:  The competition launched today, with an announcement at the AI conference NeurIPS, and will accept entries through March 2020. Facebook has dedicated more than US $10 million for awards and grants. (LINK)
7/1/20 Facebook said on Monday that it would ban videos that are heavily manipulated by artificial intelligence, known as deepfakes, from its platform. In a blog post, a company executive said Monday evening that the social network would remove videos altered by artificial intelligence in ways that “would likely mislead someone into thinking that a subject of the video said words that they did not actually say.” The policy will not extend to parody or satire, the executive, Monika Bickert, said, nor will it apply to videos edited to omit or change the order of words. Ms. Bickert said all videos posted would still be subject to Facebook's system for fact-checking potentially deceptive content, and content that is found to be factually incorrect appears less prominently in the site's news feed and is labeled false. LINK +https://about.fb.com/news/2020/01/enforcing-against-manipulated-media/
Jan20 Elogios e receios da iniciativa do FB: House Permanent Select Committee on Intelligence Chairman Rep. Adam Schiff (D-CA) said Facebook’s announcement this past week of its “new policy which will ban intentionally misleading deepfakes from its platforms is a sensible and responsible step, and I hope that others like YouTube and Twitter will follow suit.” LINK
JAN20 Hany Farid, a Berkeley professor of electrical engineering and computer sciences, was one of the researchers Facebook approached last year. The company ultimately invested $7.5 million with Berkeley, Cornell University and the University of Maryland to develop technology to spot the deepfakes.  In a brief interview, Farid, who has a joint appointment at the School of Information, said manipulated videos, which often portray politicians and celebrities saying or doing things they didn’t do, pose a serious threat to society. “The videos are clearly designed to be misleading and harmful to the individuals/political parties,” Farid wrote in an email. “I believe that these types of fraudulent videos should be banned because the harm outweighs any possible benefit.” LINK


