If doubts about an image's provenance persist after naked-eye inspection, a few websites can prove useful. These tools analyze images and immediately estimate the probability that they were generated by AI.
The AI or Not site accepts images uploaded from disk or pasted as a URL and identifies creations from the Stable Diffusion, Midjourney, and DALL-E generators. Its creator, Andrey Doronichev, a Russian based in San Francisco, has described it as "an airport X-ray machine for digital content." AI or Not analyzes 20 images in JPG or PNG format free of charge. Besides images, the site also analyzes audio files.
The Is It AI? site analyzes the provenance of both images and text. The site "examines image characteristics such as color patterns, shapes, and textures, comparing them with real photographs and AI-generated images" to determine whether an image was generated by AI. It is free and unlimited to use.
Illuminarty analyzes an unlimited number of images and texts free of charge, estimating the probability that they originated from artificial intelligence. Paid plans add classification and localization of the source of images, text, and deepfake videos.
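For batch checks, services like these are easier to drive from a script than from the browser. The sketch below shows the general shape of such a client in Python; the endpoint URL, field names, and response format are invented placeholders, not the documented API of AI or Not, Is It AI?, or Illuminarty.

```python
import requests

# Hypothetical detection endpoint; each real service documents its own
# URL, authentication scheme, and response fields.
API_URL = "https://api.example-detector.com/v1/reports/image"
API_KEY = "YOUR_API_KEY"

def check_image(path: str) -> float:
    """Upload an image and return the reported probability it is AI-generated."""
    with open(path, "rb") as f:
        resp = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {API_KEY}"},
            files={"image": f},
            timeout=30,
        )
    resp.raise_for_status()
    # Assumed response shape: {"ai_probability": 0.93}
    return resp.json()["ai_probability"]

if __name__ == "__main__":
    print(f"P(AI-generated) = {check_image('photo.jpg'):.2f}")
```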
It’s Nothing but a Deepfake! The Effects of Misinformation and Deepfake Labels Delegitimizing an Authentic Political Speech
Fight Fire With Fire: Why Recognizing And Mimicking Deepfakes' DNA Is The Way To Win
As generative AI tools such as ChatGPT, DALL-E 2, and AlphaCode barrel ahead at a breakneck pace, keeping the technology from hallucinating and spewing erroneous or offensive responses is nearly impossible.
Especially as AI tools get better by the day at mimicking natural language, it will soon be impossible to discern fake results from real ones, prompting companies to set up “guardrails” against the worst outcomes, whether they be accidental or intentional efforts by bad actors.
Some viral TikTok videos may soon show a new type of label: that it’s made by AI.
The ByteDance-owned app is developing a tool for content creators to disclose they used generative artificial intelligence in making their videos, according to a person with direct knowledge of the efforts. The move comes as people increasingly turn to AI-generated videos for creative expression, which has sparked copyright battles as well as concerns about misinformation.
https://www.theinformation.com/articles/tiktok-is-developing-ai-generated-video-disclosures-as-deepfakes-rise
jan23
Start-up DuckDuckGoose can spot deepfakes using artificial intelligence
dec22
A small academic and corporate team of researchers say they have created a way to preserve the biometric privacy of people whose faces are posted on social media.
And while that innovation is worthy of examination, so are a couple of phrases that the team has developed for their facial anonymization: "a responsible use for deepfakes by design" and "My Face, My Choice."
For most people, deepfakes exist because humans like to be fooled. For the rest, they exist to dominate a future when objective proof or truth no longer exist.
Two scientists from State University of New York, Binghamton, and another from Intel Labs say in a non-peer-reviewed paper that they recognize the identity and privacy dangers posed by face image scrapers like Clearview AI that harvest billions of faces for their own purposes and without permission.
The answer, they say, is qualitatively dissimilar deepfakes. That is, using deepfake algorithms to alter faces just enough that the faces cannot be facially recognized by software. The result is a facial image in a group photo that is true enough to the original (and free of AI weirdness) that anyone familiar with a person would quickly accept it as representative.
The researchers also have proposed metrics for doing this under which a deepfake (though, again, still recognizable by many humans) is randomly generated with a guaranteed dissimilarity.
https://www.biometricupdate.com/202212/a-proposal-for-responsible-deepfakes
nov22
Intel claims it has developed an AI model that can detect in real time whether a video is using deepfake technology by looking for subtle changes in color that would be evident if the subject were a live human being.
FakeCatcher is claimed by the chipmaking giant to be capable of returning results in milliseconds and to have a 96 percent accuracy rate.
https://www.theregister.com/2022/11/15/intel_fakecatcher/
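The cue FakeCatcher reportedly relies on is photoplethysmography (PPG): blood flow causes tiny periodic color changes in facial skin that current generators do not reproduce. Intel has not published its pipeline, so the following is only a minimal sketch of the underlying idea: average the green channel over a skin patch per frame, then look for a dominant frequency in the human heart-rate band.

```python
import cv2
import numpy as np

def ppg_signal(video_path: str, box=(100, 100, 200, 200)) -> np.ndarray:
    """Mean green-channel intensity per frame over a skin patch.

    `box` is a hand-picked (x, y, w, h) region; a real pipeline would
    track the face and select skin pixels automatically."""
    x, y, w, h = box
    cap = cv2.VideoCapture(video_path)
    samples = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        samples.append(frame[y:y + h, x:x + w, 1].mean())  # channel 1 = green in BGR
    cap.release()
    return np.asarray(samples)

def has_pulse(signal: np.ndarray, fps: float = 30.0) -> bool:
    """True if a dominant frequency sits in the heart-rate band (0.7-4 Hz).

    Needs at least a few seconds of video for a usable spectrum; the 2x
    margin over the median is an arbitrary illustrative threshold."""
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return spectrum[band].max() > 2.0 * np.median(spectrum[1:])

print("live-subject pulse detected:", has_pulse(ppg_signal("clip.mp4")))
```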
nov22
"The researchers proposed that anybody planning to upload an image to the internet could run their photo through their program, basically immunizing it to AI image generators. (...) he system he helped develop only takes a few seconds to introduce noise into a photo. Higher resolution images work even better, he said, since they include more pixels that can be minutely disturbed. (..) Salman said he could imagine a future where companies, even the ones who generate the AI models, could certify that uploaded images are immunized against AI models. (...) The researchers’ program proves that there are ways to defeat deepfakes before they happen. (,.,,) More so, creating these data poisoning systems will create an “arms race” between commercial AI image generators and those trying to prevent deepfakes. “It’s possible, if not likely, that in the future we’ll be able to evade whatever defenses you put on that one particular image,” Kamath said. “And once it’s out there, you can’t take it back.” Of course, there are some AI systems that can detect deepfake videos, and there are ways to train people to detect the small inconsistencies that show a video is being faked. The question is: will there come a time when neither human nor machine can discern if a photo or video has been manipulated? https://gizmodo.com/deepfakes-ai-dall-e-ai-art-generator-1849764276
oct22
Algorithm detects images and videos altered with artificial intelligence
https://revistapesquisa.fapesp.br/deepfakes-o-novo-estagio-tecnologico-das-noticias-falsas/
oct22
According to the professor, the problem lies in the algorithms these technologies use, because they privilege audiences and clicks.
"Journalism may be standing at a professional precipice. False content tends to spread further and gets more visibility, so we have to be very careful when using this kind of technology," he said.
It is in this context that artificial intelligence (AI) can help fight disinformation, by finding patterns and then applying them to different situations.
"AI is a tireless observer; it learns and applies the knowledge it acquires to future situations," explained Juan Gomez Romero, an artificial intelligence specialist at the University of Granada, during the debate.
Its use can be decisive in identifying and combating hoaxes and rumors, given the constant search for new formats and audiences through which to spread them, particularly among the young.
For journalist Pablo Martinez of the Spanish fact-checking site Maldita.es, also on the panel, the current challenge is precisely so-called entertainment, because "disinformers" increasingly use these formats to create hidden narratives to which young people are receptive.
https://www.rtp.pt/noticias/economia/jornalismo-pode-estar-perante-um-precipicio-profissional-alerta-especialista_n1440973
oct22
A recent study found that ordinary human observers and leading computer vision deepfake detection AI models are similarly accurate but make different types of mistakes. People who had access to machine model predictions were more accurate, suggesting that AI-assisted collaborative decision-making could be useful but will be unlikely to be foolproof.
Researchers found that when AI makes wrong predictions and humans have access to those models' predictions, humans end up revising their answers incorrectly. This suggests that machine predictions can affect human decision-making, an important factor when designing systems of human-AI collaboration.
The problem of falsified media existed long before these AI tools. Like any technological advance, people find both positive and negative applications. AI has created exciting new possibilities with applications in creative and filmmaking industries and, at the same time, raises the need for reliable detection, protection of privacy rights, and risk management against harmful use cases.
Current research suggests that humans and machine models are imperfect at detecting AI-altered videos. One answer may be a collaborative approach between AI and human detection in order to address the shortcomings of each. Since it is unlikely for any detection model to be foolproof, education about deepfake technology can help us become more aware that seeing is not always believing—a reality that was true long before the arrival of deepfake AI tools.
https://www.psychologytoday.com/us/blog/urban-survival/202210/are-humans-or-ai-better-detecting-deepfakes-videos
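The collaboration finding suggests a simple fusion baseline worth stating explicitly: combine the human's and the model's probability estimates, with the model's weight tied to its validated accuracy. The function below is an illustration of that idea, not the protocol used in the study.

```python
def fused_fake_probability(human_p: float, model_p: float,
                           model_weight: float = 0.5) -> float:
    """Blend a human's and a model's estimated probability that a video is fake.

    `model_weight` should reflect the model's validated accuracy on data
    like the input; the 0.5 default just treats both judges equally."""
    w = min(max(model_weight, 0.0), 1.0)
    return w * model_p + (1.0 - w) * human_p

# A human leaning "real" (0.3) plus a model leaning "fake" (0.8):
print(fused_fake_probability(0.3, 0.8))  # 0.55 -> effectively undecided
```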
sep22
Deepfakes detected via reverse modeling of the vocal tract are ‘comically’ non-human
aug22
Using Deep Learning to Detect Deepfakes
(in the folder)
jul22
Researchers from Samsung Labs have developed a way to create high-resolution avatars, or deepfakes, from a single still frame photo or even a painting.
https://petapixel.com/2022/07/22/megaportraits-high-res-deepfakes-created-from-a-single-photo/
Advanced Machine Learning Techniques to Detect Various Types of Deepfakes
New method detects deepfake videos with up to 99% accuracy.
Two-pronged technique detects manipulated facial expressions and identity swaps. Computer scientists at UC Riverside can detect manipulated facial expressions in deepfake videos with higher accuracy than current state-of-the-art methods. The method also works as well as current methods in cases where the facial identity, but not the expression, has been swapped, leading to a generalized approach to detect any kind of facial manipulation. The achievement brings researchers a step closer to developing automated tools for detecting manipulated videos that contain propaganda or misinformation.
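The two-pronged structure is straightforward to picture as one shared backbone with two heads, one scoring expression manipulation and one scoring identity swaps, flagging a frame if either fires. The sketch below is an architectural illustration only, not the UC Riverside model (whose backbone, training, and fusion rule differ).

```python
import torch
import torch.nn as nn

class TwoProngedDetector(nn.Module):
    """Shared backbone with separate heads for expression and identity manipulation."""
    def __init__(self, feat_dim: int = 512):
        super().__init__()
        self.backbone = nn.Sequential(            # stand-in feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 16, feat_dim), nn.ReLU(),
        )
        self.expression_head = nn.Linear(feat_dim, 1)  # manipulated expression?
        self.identity_head = nn.Linear(feat_dim, 1)    # swapped identity?

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)
        p_expr = torch.sigmoid(self.expression_head(feats))
        p_id = torch.sigmoid(self.identity_head(feats))
        # Flag the frame if either kind of manipulation looks likely.
        return torch.maximum(p_expr, p_id)

detector = TwoProngedDetector()
frame = torch.rand(1, 3, 224, 224)      # one face crop, values in [0, 1]
print(detector(frame).item())
```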
Last Friday (the 29th), Singapore-based researcher and scientist Wang Weimin took first place in a deepfake-recognition challenge with a powerful artificial intelligence (AI) model. Weimin's model, which beat 469 other teams from around the world over the five-month event, reached an accuracy of 98.53%.
The Trusted Media Challenge, organized by AI Singapore (an office of the National Research Foundation's AI programme), consisted of detecting deepfakes, that is, digitally altered video clips, including content with manipulated faces, voices, or both.
https://olhardigital.com.br/2022/04/30/seguranca/modelo-de-inteligencia-artifical-premiado-em-singapura-reconhece-deepfakes-com-985-de-precisao/
An Efficient Deepfake Video Detection Approach with Combination of EfficientNet and Xception Models Using Deep Learning
https://ieeexplore.ieee.org/abstract/document/9743542
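The combination the paper describes can be approximated with off-the-shelf backbones: run the same face crop through both networks and average their fake probabilities. This sketch uses the timm model zoo as a stand-in ("efficientnet_b0" and "xception" are real timm model names, though availability depends on the timm version); the weights are ImageNet-pretrained and the one-unit heads are untrained, so real use would require fine-tuning on a deepfake dataset.

```python
import timm
import torch

# Stand-ins for the paper's trained detectors; num_classes=1 gives each
# backbone a fresh (untrained) real-vs-fake head that you would fine-tune.
efficientnet = timm.create_model("efficientnet_b0", pretrained=True, num_classes=1)
xception = timm.create_model("xception", pretrained=True, num_classes=1)

@torch.no_grad()
def ensemble_fake_probability(face: torch.Tensor) -> float:
    """Average the two backbones' sigmoid outputs for one face crop."""
    efficientnet.eval(); xception.eval()
    p1 = torch.sigmoid(efficientnet(face))
    p2 = torch.sigmoid(xception(face))
    return ((p1 + p2) / 2).item()

face = torch.rand(1, 3, 224, 224)   # a preprocessed face crop
print(ensemble_fake_probability(face))
```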
One promising approach involves tracking a video’s provenance, “a record of everything that happened from the point that the light hit the camera to when it shows up on your display,” explained James Tompkin, a visual computing researcher at Brown.
But problems persist. “You need to secure all the parts along the chain to maintain provenance, and you also need buy-in,” Tompkin said. “We’re already in a situation where this isn’t the standard, or even required, on any media distribution system.”
And beyond simply ignoring provenance standards, wily adversaries could manipulate the provenance systems, which are themselves vulnerable to cyberattacks. “If you can break the security, you can fake the provenance,” Tompkin said. “And there’s never been a security system in the world that’s never been broken into at some point.”
Given these issues, a single silver bullet for deepfakes appears unlikely. Instead, each strategy at our disposal must be just one of a "toolbelt of techniques we can apply," Tompkin said.
https://brownpoliticalreview.org/2021/11/hunters-laptop-deepfakes-and-the-arbitration-of-truth/
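Provenance schemes of the kind Tompkin describes (C2PA is the main standardization effort) reduce to a signed, hash-linked chain of records: each editing step stores the hash of the media and of the previous record, so tampering anywhere breaks every later link. Below is a toy hash chain, with a placeholder hash standing in for the real digital signatures a production system would use.

```python
import hashlib
import json

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def append_record(chain: list, media_bytes: bytes, action: str, actor: str) -> list:
    """Append a provenance record linking this media state to the previous one.

    A real system (e.g. C2PA) would sign each record with the actor's
    private key; here the 'signature' is just a hash over the record."""
    prev_hash = chain[-1]["record_hash"] if chain else "genesis"
    record = {
        "action": action,          # e.g. "captured", "cropped", "color-graded"
        "actor": actor,
        "media_hash": sha256(media_bytes),
        "prev_hash": prev_hash,
    }
    record["record_hash"] = sha256(json.dumps(record, sort_keys=True).encode())
    return chain + [record]

def verify(chain: list) -> bool:
    """Recompute every link; any tampering upstream breaks all later hashes."""
    prev = "genesis"
    for r in chain:
        body = {k: v for k, v in r.items() if k != "record_hash"}
        if (r["prev_hash"] != prev or
                sha256(json.dumps(body, sort_keys=True).encode()) != r["record_hash"]):
            return False
        prev = r["record_hash"]
    return True

chain = append_record([], b"raw sensor data", "captured", "camera-01")
chain = append_record(chain, b"cropped pixels", "cropped", "editor-app")
print(verify(chain))  # True until any record or media hash is altered
```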
sep21
DeepFakes: Detecting Forged and Synthetic Media Content Using Machine Learning
Fake News and AI: Fighting Fire with Fire?
sep21
AI can detect a deepfake face because its pupils have jagged edges.
Creating a fake persona online with a computer-generated face is easier than ever before, but there is a simple way to catch these phony pictures: look at the eyes. The inability of artificial intelligence to draw circular pupils gives away whether or not a face comes from a real photograph. Generative adversarial networks (GANs), a type of AI that can generate images from a simple prompt, can produce realistic-looking faces. Because they are made through a process of continual …
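The cue is purely geometric: real pupils are close to elliptical, while GAN-rendered pupils tend to have irregular boundaries. Given a segmented pupil contour, the check reduces to fitting an ellipse and measuring how badly the contour matches it. In the OpenCV sketch below, the segmentation step and the decision threshold are assumptions, not values from the research.

```python
import cv2
import numpy as np

def pupil_irregularity(contour: np.ndarray) -> float:
    """Mismatch between a pupil contour and its best-fit ellipse (1 - IoU).

    `contour` is an OpenCV contour of the segmented pupil (at least 5
    points), e.g. from cv2.findContours on a thresholded eye crop."""
    x, y, w, h = cv2.boundingRect(contour)
    mask = np.zeros((h, w), np.uint8)
    cv2.drawContours(mask, [contour - (x, y)], -1, 255, cv2.FILLED)
    (cx, cy), (ma, MA), ang = cv2.fitEllipse(contour)
    emask = np.zeros_like(mask)
    cv2.ellipse(emask, ((cx - x, cy - y), (ma, MA), ang), 255, cv2.FILLED)
    inter = np.logical_and(mask, emask).sum()
    union = np.logical_or(mask, emask).sum()
    return 1.0 - inter / union   # 0 = perfect ellipse, higher = more jagged

# Heuristic flag; the 0.1 threshold is illustrative only:
# is_gan_like = pupil_irregularity(pupil_contour) > 0.1
```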
may21
From QAnon conspiracy theories to Russian government sponsored election interference, social media disinformation campaigns are a part of online life, and identifying these threats amid the posts that billions of social media users upload each day is a challenge. To help sort through massive amounts of data, social media platforms are developing AI systems to automatically remove harmful content primarily through text-based analysis. But these techniques won't identify all the disinformation on social media. After all, much of what people post are photos, videos, audio recordings, and memes. Developing the entirely new AI systems necessary to detect such multimedia disinformation will be difficult.
Michael Yankoski, Walter Scheirer, and Tim Weninger, "Meme warfare: AI countermeasures to disinformation should focus on popular, not perfect, fakes," Bulletin of the Atomic Scientists, 2021, Vol. 77, No. 3, 119–123.
https://doi.org/10.1080/00963402.2021.1912093