jul31
After a staggering increase in the number of fake pornographic videos and images uploaded online in the last several years, Google on Wednesday announced new measures to assist victims and reduce the prominence of deepfakes in top search results.
The search engine also committed to taking steps to derank websites that frequently host the nonconsensual sexually explicit fake videos — also known as deepfakes — meaning that they may appear lower in search results. Deepfakes refer to misleading fake media, which has increasingly been created using artificial-intelligence tools.
Nonconsensual sexually explicit deepfakes often “swap” a victim’s face onto the body of a person in a pre-existing pornographic video. Generative AI tools have also been used to create fake but realistic sexually explicit images that depict real people, or “undress” real photos to make victims appear nude. The practice overwhelmingly affects women and girls, both public figures and, increasingly, girls in middle and high schools around the world.
https://www.nbcnews.com/tech/tech-news/google-announces-news-steps-combat-sexually-explicit-deepfakes-rcna164560
jun24
FSBI: Deepfakes Detection with Frequency Enhanced Self-Blended Images
Abstract—Advances in deepfake research have led to the creation of almost perfect manipulations that are undetectable by human eyes and by some deepfakes detection tools. Recently, several techniques have been proposed to differentiate deepfakes from realistic images and videos. This paper introduces a Frequency Enhanced Self-Blended Images (FSBI) approach for deepfakes detection. The proposed approach uses the Discrete Wavelet Transform (DWT) to extract discriminative features from self-blended images (SBIs), which are then used to train a convolutional network model. An SBI blends an image with itself after introducing several forgery artifacts into a copy of the image; this prevents the classifier from overfitting to specific artifacts and pushes it toward more generic representations. The blended images are fed into the frequency feature extractor to detect artifacts that cannot be detected easily in the time domain. The approach was evaluated on the FF++ and Celeb-DF datasets, and the results outperform state-of-the-art techniques under the cross-dataset evaluation protocol. The code is available at https://github.com/gufranSabri/FSBI.
(full text in the file)
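For reference, below is a minimal sketch of the pipeline the abstract describes: self-blend an image, take a 2D discrete wavelet transform, and classify the stacked sub-bands with a CNN. This is not the authors' implementation (see the linked repository for that); it assumes PyWavelets, PyTorch, and torchvision, the blending step and mask are deliberately simplified stand-ins, and all function and class names are illustrative.

# Minimal, illustrative sketch of the FSBI idea: self-blend -> DWT sub-bands -> CNN.
# NOT the authors' code; simplified blending, hypothetical names.
import numpy as np
import pywt                      # PyWavelets
import torch
import torch.nn as nn
from torchvision.models import resnet18


def self_blend(img: np.ndarray) -> np.ndarray:
    """Blend an image with a mildly perturbed copy of itself (crude stand-in for SBI)."""
    forged = np.clip(img * 1.05 + 2.0, 0.0, 255.0)          # mild color/brightness "forgery" artifact
    h, w = img.shape[:2]
    mask = np.zeros((h, w, 1), dtype=np.float32)
    mask[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = 1.0       # simple rectangular blending mask
    return mask * forged + (1.0 - mask) * img


def dwt_features(img: np.ndarray, wavelet: str = "haar") -> np.ndarray:
    """Single-level 2D DWT per channel; stack approximation and detail sub-bands."""
    bands = []
    for c in range(img.shape[2]):
        cA, (cH, cV, cD) = pywt.dwt2(img[..., c], wavelet)
        bands.extend([cA, cH, cV, cD])
    return np.stack(bands).astype(np.float32)                # (12, H/2, W/2) for an RGB input


class FrequencyClassifier(nn.Module):
    """Generic CNN over DWT sub-bands with a binary real-vs-fake head."""
    def __init__(self, in_channels: int = 12):
        super().__init__()
        self.backbone = resnet18(weights=None)
        self.backbone.conv1 = nn.Conv2d(in_channels, 64, kernel_size=7, stride=2,
                                        padding=3, bias=False)
        self.backbone.fc = nn.Linear(self.backbone.fc.in_features, 2)

    def forward(self, x):
        return self.backbone(x)


if __name__ == "__main__":
    face = np.random.rand(256, 256, 3).astype(np.float32) * 255.0   # stand-in for a face crop
    pseudo_fake = self_blend(face)                                   # SBI-style training sample
    feats = torch.from_numpy(dwt_features(pseudo_fake / 255.0)).unsqueeze(0)
    logits = FrequencyClassifier()(feats)
    print(logits.shape)                                              # torch.Size([1, 2])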
mar24
South Korea’s police forces are developing a new deepfake detection tool that they can use during criminal investigations.
The Korean National Police Agency (KNPA) told South Korean press agency Yonhap on March 5, 2024, that its National Office of Investigation (NOI) will deploy new software designed to detect whether video clips or image files have been manipulated using deepfake techniques.
Unlike most existing AI detection tools, which are typically trained on Western data, the model behind the new software was trained on 5.2 million pieces of data from 5,400 Koreans and related figures. It adopts “the newest AI model to respond to new types of hoax videos that were not pretrained,” KNPA said.
https://www.infosecurity-magazine.com/news/south-korea-police-deepfake/
feb24
Major technology companies signed a pact Friday to voluntarily adopt “reasonable precautions” to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.
Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok gathered at the Munich Security Conference to announce a new framework for how they respond to AI-generated deepfakes that deliberately trick voters. Twelve other companies — including Elon Musk’s X — are also signing on to the accord.
“Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own,” said Nick Clegg, president of global affairs for Meta, the parent company of Facebook and Instagram, in an interview ahead of the summit.
https://apnews.com/article/ai-generated-election-deepfakes-munich-accord-meta-google-microsoft-tiktok-x-c40924ffc68c94fac74fa994c520fc06
feb24
Technology giants are planning a new industry “accord” to fight back against “deceptive artificial intelligence election content” that is threatening the integrity of major democratic elections across the world this year.
A draft Tech Accord, seen by POLITICO, showed technology companies want to work together to create tools like watermarks and detection techniques to spot, label and debunk “deepfake” AI-manipulated images and audio of public figures. The pledge also includes commitments to open up more about how the firms are fighting AI-generated disinformation on their platforms.
“We affirm that the protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders,” the draft reads.
https://www.politico.eu/article/tech-accord-industry-munich-security-conference-deepfake-ai-election-content/