Sunday, May 16, 2021

Can AI fight disinformation (not) caused by AI? (SEE TECHNOLOGY)

dec23

If, after inspection with the naked eye, doubts remain about an image's provenance, a few websites can prove useful in detecting it. These tools analyze images and immediately report how likely they are to have been generated by AI.

The AI or Not site lets you submit images from disk or by pasting a URL, and identifies creations made with the Stable Diffusion, MidJourney, or DALL-E generators. Its creator, Andrey Doronichev, a Russian living in San Francisco, described it as "an airport X-ray machine for digital content". AI or Not offers free analysis of 20 images in JPG or PNG format. Beyond images, the site also analyzes audio files.

The Is It AI? site checks the provenance of both images and text. The site "examines image characteristics, such as color patterns, shapes, and textures, comparing them with real photographs and AI-generated images" to determine whether they were generated by AI. It is free and unlimited to use.

Illuminarty analyzes an unlimited number of images and texts free of charge, assessing the probability that they originated from artificial intelligence. Paid plans include classification and localization of the origin of images, text, and deepfake videos.

These are just three examples among the many options available online. Other sites, such as Content at Scale or Detecting-AI, can be used for the same purpose.

https://www.publico.pt/2023/12/12/p3/noticia/fotografia-imagem-gerada-ia-aprende-distinguir-2072804
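These services also expose HTTP APIs that let such checks be automated. A rough sketch of what a client might look like; the endpoint URL, authentication header, and response field are hypothetical placeholders, since each provider documents its own interface:

```python
# Minimal sketch of automating an AI-image check against a detection service.
# The endpoint, header, and response field below are hypothetical placeholders;
# consult each provider's API documentation for the real interface.
import requests

def check_image(path: str, api_key: str) -> float:
    """Upload an image and return the reported probability it is AI-generated."""
    with open(path, "rb") as f:
        resp = requests.post(
            "https://api.example-detector.com/v1/reports/image",  # hypothetical URL
            headers={"Authorization": f"Bearer {api_key}"},
            files={"image": f},
            timeout=30,
        )
    resp.raise_for_status()
    return resp.json()["ai_probability"]  # hypothetical response field

if __name__ == "__main__":
    print(f"P(AI-generated) = {check_image('photo.jpg', 'YOUR_API_KEY'):.2f}")
```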

oct23

It’s Nothing but a Deepfake! The Effects of Misinformation and Deepfake Labels Delegitimizing an Authentic Political Speech

Michael Hameleers, Franziska Marquart

Abstract


Mis- and disinformation labels are increasingly weaponized and used as delegitimizing accusations targeted at mainstream media and political opponents. To better understand how such accusations can affect the credibility of real information and policy preferences, we conducted a two-wave panel experiment (N_wave2 = 788) to assess the longer-term effect of delegitimizing labels targeting an authentic video message. We find that exposure to an accusation of misinformation or disinformation lowered the perceived credibility of the video but did not affect policy preferences related to the content of the video. Furthermore, more extreme disinformation accusations were perceived as less credible than milder misinformation labels. The effects lasted over a period of three days and still occurred when there was a delay in the label attribution. These findings indicate that while mis- and disinformation labels might make authentic content less credible, they are themselves not always deemed credible and are less likely to change substantive policy preferences.

https://ijoc.org/index.php/ijoc/article/view/20777

sep23

Fight Fire With Fire: Why Recognizing And Mimicking Deepfakes' DNA Is The Way To Win

https://www.forbes.com/sites/forbestechcouncil/2023/09/11/fight-fire-with-fire-why-recognizing-and-mimicking-deepfakes-dna-is-the-way-to-win/?sh=47030fd567c7


may23

As generative AI developers such as ChatGPT, Dall-E2, and AlphaCode barrel ahead at a breakneck pace, keeping the technology from hallucinating and spewing erroneous or offensive responses is nearly impossible.

Especially as AI tools get better by the day at mimicking natural language, it will soon be impossible to discern fake results from real ones, prompting companies to set up “guardrails” against the worst outcomes, whether they be accidental or intentional efforts by bad actors.

AI industry experts speaking at the MIT Technology Review's EmTech Digital conference this week weighed in on how generative AI companies are dealing with a variety of ethical and practical hurdles even as they push ahead on developing the next generation of the technology.
https://www.computerworld.com/article/3695508/ai-deep-fakes-mistakes-and-biases-may-be-unavoidable-but-controllable.html

may23

Some viral TikTok videos may soon show a new type of label: that they were made by AI.

The ByteDance-owned app is developing a tool for content creators to disclose they used generative artificial intelligence in making their videos, according to a person with direct knowledge of the efforts. The move comes as people increasingly turn to AI-generated videos for creative expression, which has sparked copyright battles as well as concerns about misinformation.
https://www.theinformation.com/articles/tiktok-is-developing-ai-generated-video-disclosures-as-deepfakes-rise



jan23

Start-up DuckDuckGoose can spot deepfakes using artificial intelligence


https://innovationorigins.com/en/start-up-duckduckgoose-can-spot-deepfakes-using-artificial-intelligence/

dec22

A small academic and corporate team of researchers say they have created a way to preserve the biometric privacy of people whose faces are posted on social media.

And while that innovation is worthy of examination, so are a couple of phrases that the team has developed for their facial anonymization: "a responsible use for deepfakes by design" and "My Face, My Choice."

For most people, deepfakes exist because humans like to be fooled. For the rest, they exist to dominate a future in which objective proof or truth no longer exists.

Two scientists from State University of New York, Binghamton, and another from Intel Labs say in a non-peer-reviewed paper that they recognize the identity and privacy dangers posed by face image scrapers like Clearview AI that harvest billions of faces for their own purposes and without permission.

The answer, they say, is qualitatively dissimilar deepfakes. That is, using deepfake algorithms to alter faces just enough that the faces cannot be facially recognized by software. The result is a facial image in a group photo that is true enough to the original (and free of AI weirdness) that anyone familiar with a person would quickly accept it as representative.

The researchers also have proposed metrics for doing this under which a deepfake (though, again, still recognizable by many humans) is randomly generated with a guaranteed dissimilarity.

https://www.biometricupdate.com/202212/a-proposal-for-responsible-deepfakes
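The core of the proposal, as described, is a generate-until-dissimilar loop. A minimal sketch, assuming a hypothetical deepfake generator and face-embedding model (both passed in as callables) and an assumed recognizer matching threshold:

```python
# Illustrative sketch of "guaranteed dissimilarity": keep sampling deepfake
# variants of a face until a recognition embedding no longer matches the
# original. `generate_deepfake` and `face_embedding` stand in for a deepfake
# generator and a face-recognition model; both are hypothetical placeholders.
import numpy as np

MATCH_THRESHOLD = 0.4  # assumed cosine-distance threshold of the recognizer

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    return 1.0 - float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def anonymize(face_img, generate_deepfake, face_embedding, max_tries=100):
    """Return a deepfaked face whose embedding no longer matches the original."""
    original = face_embedding(face_img)
    for seed in range(max_tries):
        candidate = generate_deepfake(face_img, seed=seed)  # random identity tweak
        if cosine_distance(face_embedding(candidate), original) > MATCH_THRESHOLD:
            return candidate  # still looks like the person, but software can't match it
    raise RuntimeError("no sufficiently dissimilar candidate found")
```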

nov22
Machine learning's recent developments have given rise to the phenomenon of "deepfakes," which is a cause for serious concern. Deepfakes are fake digital items that look like real images, and a wave of such videos has spread across social media in recent years. Deepfakes can be generated and distributed on social media by almost anyone, owing to the low level of technical expertise required. This paper reviews various techniques used for the detection of deepfakes; many of the authors used image and video data sets such as FaceForensics++ and thispersondoesnotexist.com, while in several studies the authors used their own data for analysis. Accurate deepfake detection requires smart technology, and the study surveys existing approaches such as convolutional neural networks, recurrent neural networks and support vector machines. In recent months, machine learning has gained popularity for detecting plausible face swaps in video that leave minimal indications of tampering. Thus, to combat deepfakes, efficient algorithms are needed that detect them at an early stage, to prevent blackmail, political unrest, etc.
https://link.springer.com/chapter/10.1007/978-981-19-5037-7_46 
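For a concrete picture of the convolutional approach the survey covers, here is a minimal PyTorch sketch of a binary real-vs-fake face classifier; the architecture and layer sizes are illustrative and not taken from any specific paper in the review:

```python
# A minimal PyTorch sketch of a CNN-style binary classifier for
# real-vs-fake face images. Architecture and sizes are illustrative.
import torch
import torch.nn as nn

class DeepfakeCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)  # one logit: P(fake) after sigmoid

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

model = DeepfakeCNN()
logit = model(torch.randn(1, 3, 224, 224))  # one RGB face crop
print(torch.sigmoid(logit).item())          # probability the crop is fake
```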


nov22

Intel claims it has developed an AI model that can detect in real time whether a video is using deepfake technology by looking for subtle changes in color that would be evident if the subject were a live human being.

FakeCatcher is claimed by the chipmaking giant to be capable of returning results in milliseconds and to have a 96 percent accuracy rate.
https://www.theregister.com/2022/11/15/intel_fakecatcher/
https://www.techspot.com/news/96655-intel-detection-tool-uses-blood-flow-identify-deepfakes.html
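The principle behind FakeCatcher, as described, is that live skin shows a periodic color change driven by blood flow. A toy sketch of that idea, tracking the mean green-channel intensity of face crops over time and measuring how much signal energy falls in the human heart-rate band; Intel's actual pipeline is far more sophisticated:

```python
# Sketch of the photoplethysmography (PPG) idea behind FakeCatcher: live skin
# shows a periodic color change driven by the pulse. This toy version tracks
# the mean green-channel intensity of fixed face crops across frames and
# checks for a dominant frequency in the human heart-rate band.
import numpy as np

def pulse_band_energy(frames: np.ndarray, fps: float) -> float:
    """frames: (T, H, W, 3) uint8 face crops. Returns the fraction of signal
    energy in the 0.7-4 Hz band (~42-240 bpm)."""
    signal = frames[..., 1].mean(axis=(1, 2))  # mean green value per frame
    signal = signal - signal.mean()            # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)
    return spectrum[band].sum() / max(spectrum.sum(), 1e-9)

# A real face should concentrate energy in the pulse band; many deepfakes don't.
frames = np.random.randint(0, 256, (300, 64, 64, 3), dtype=np.uint8)  # stand-in clip
print(f"pulse-band energy fraction: {pulse_band_energy(frames, fps=30.0):.2f}")
```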



nov22

"The researchers proposed that anybody planning to upload an image to the internet could run their photo through their program, basically immunizing it to AI image generators. (...) he system he helped develop only takes a few seconds to introduce noise into a photo. Higher resolution images work even better, he said, since they include more pixels that can be minutely disturbed. (..) Salman said he could imagine a future where companies, even the ones who generate the AI models, could certify that uploaded images are immunized against AI models.  (...) The researchers’ program proves that there are ways to defeat deepfakes before they happen. (,.,,) More so, creating these data poisoning systems will create an “arms race” between commercial AI image generators and those trying to prevent deepfakes. “It’s possible, if not likely, that in the future we’ll be able to evade whatever defenses you put on that one particular image,” Kamath said. “And once it’s out there, you can’t take it back.” Of course, there are some AI systems that can detect deepfake videos, and there are ways to train people to detect the small inconsistencies that show a video is being faked. The question is: will there come a time when neither human nor machine can discern if a photo or video has been manipulated? https://gizmodo.com/deepfakes-ai-dall-e-ai-art-generator-1849764276


oct22

Algorithm detects images and videos altered with artificial intelligence

https://revistapesquisa.fapesp.br/deepfakes-o-novo-estagio-tecnologico-das-noticias-falsas/


oct22

According to the professor, the problem lies in the algorithms these technologies use, because they privilege audiences and clicks.

"Journalism may be standing at a professional precipice. False content tends to spread more and gets more visibility, so we have to be very careful when using this kind of technology," he said.

It is in this context that artificial intelligence (AI) can help in the fight against disinformation, by trying to find patterns and then applying them in different situations.

"AI is a tireless observer; it learns and applies the acquired knowledge to future situations," explained Juan Gomez Romero, an artificial intelligence specialist at the University of Granada, during the debate.

Its use can be decisive in identifying and combating hoaxes and rumors, given the constant search for new formats and audiences through which to spread them, particularly among young people.

For journalist Pablo Martinez of the Spanish site Maldita.es [a fact-checking site], who was also on the panel, the current challenge is precisely so-called entertainment, because "disinformers" increasingly use these formats to create hidden narratives to which young people are susceptible.

https://www.rtp.pt/noticias/economia/jornalismo-pode-estar-perante-um-precipicio-profissional-alerta-especialista_n1440973


oct22

A recent study found that ordinary human observers and leading computer vision deepfake detection AI models are similarly accurate but make different types of mistakes. People who had access to machine model predictions were more accurate, suggesting that AI-assisted collaborative decision-making could be useful but will be unlikely to be foolproof.

Researchers found that when AI makes wrong predictions and humans have access to those models' predictions, humans end up revising their answers incorrectly. This suggests that machine predictions can affect human decision-making–an important factor when designing systems of human-AI collaboration.

The problem of falsified media existed long before these AI tools. Like any technological advance, people find both positive and negative applications. AI has created exciting new possibilities with applications in creative and filmmaking industries and, at the same time, raises the need for reliable detection, protection of privacy rights, and risk management against harmful use cases.

Current research suggests that humans and machine models are imperfect at detecting AI-altered videos. One answer may be a collaborative approach between AI and human detection in order to address the shortcomings of each. Since it is unlikely for any detection model to be foolproof, education about deepfake technology can help us become more aware that seeing is not always believing—a reality that was true long before the arrival of deepfake AI tools.

https://www.psychologytoday.com/us/blog/urban-survival/202210/are-humans-or-ai-better-detecting-deepfakes-videos
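A toy illustration of the second finding: if a human's judgment is fused with a confident but wrong model prediction, the combined decision can flip away from the human's correct answer. The weights and probabilities here are invented for the example:

```python
# Toy illustration of the human-AI collaboration finding: when a confident
# model is wrong, fusing its prediction with a human's judgment can flip a
# correct human answer. Weights and probabilities are made up for the example.
import math

def fuse(p_human: float, p_model: float, model_weight: float = 0.7) -> float:
    """Combine two probabilities that a video is fake via weighted log-odds."""
    logit = lambda p: math.log(p / (1 - p))
    combined = (1 - model_weight) * logit(p_human) + model_weight * logit(p_model)
    return 1 / (1 + math.exp(-combined))

p_human = 0.70   # human leans (correctly) toward "fake"
p_model = 0.20   # model is confidently wrong
print(f"fused P(fake) = {fuse(p_human, p_model):.2f}")  # below 0.5: the human flips
```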


sep22

Deepfakes detected via reverse modeling of the vocal tract are ‘comically’ non-human


https://www.biometricupdate.com/202209/deepfakes-detected-via-reverse-modeling-of-the-vocal-tract-are-comically-non-human

aug22

Using Deep Learning to Detect Deepfakes

(in the folder)

jul22

Researchers from Samsung Labs have developed a way to create high-resolution avatars, or deepfakes, from a single still frame photo or even a painting.

https://petapixel.com/2022/07/22/megaportraits-high-res-deepfakes-created-from-a-single-photo/




jun22

Deepfake Detection through Deep Learning

http://www.dspace.dtu.ac.in:8080/jspui/handle/repository/19172


may22

Advanced Machine Learning Techniques to Detect Various Types of Deepfakes

https://dl.acm.org/doi/abs/10.1145/3494109.3527196


may22
Limits and Possibilities for "Ethical AI" in Open Source: A Study of Deepfakes

Abstract

Open source software communities are a significant site of AI development, but "Ethical AI" discourses largely focus on the problems that arise in software produced by private companies. Design, policy and tooling interventions to encourage "Ethical AI" based on studies in private companies risk being ill-suited for an open source context, which operates under radically different organizational structures, cultural norms, and incentives.

In this paper, we show that significant and understudied harms and possibilities originate from differing practices of transparency and accountability in the open source community. We conducted an interview study of an AI-enabled open source Deepfake project to understand how members of that community reason about the ethics of their work. We found that notions of the "Freedom 0" to use code without any restriction, alongside beliefs about technology neutrality and technological inevitability, were central to how community members framed their responsibilities, and the actions they believed were and were not available to them.

We propose a continuum between harms resulting from how a system is implemented versus how it is used, and show how commitments to radical transparency in open source allow great ethical scrutiny for harms wrought by implementation bugs, but allow harms through (mis)use to proliferate, requiring a deeper toolbox for disincentivizing harmful use. We discuss how an assumption of control over downstream uses is often implicit in discourses of "Ethical AI", but outline alternative possibilities for action in cases such as open source where this assumption may not hold.

may22
There are calls for three types of defensive response: regulation, technical controls, and improved digital or media literacy. Each is problematic by itself. This article asks what kind of literacy can address deepfake harms, proposing an artificial intelligence (AI) and data literacy framework to explore the potential for social learning with deepfakes and identify sites and methods for intervening in their cultures of production. 
https://journals.sagepub.com/doi/abs/10.1177/14614448221093943

may22

New method detects deepfake videos with up to 99% accuracy.
Two-pronged technique detects manipulated facial expressions and identity swaps.

Computer scientists at UC Riverside can detect manipulated facial expressions in deepfake videos with higher accuracy than current state-of-the-art methods. The method also works as well as current methods in cases where the facial identity, but not the expression, has been swapped, leading to a generalized approach to detecting any kind of facial manipulation. The achievement brings researchers a step closer to developing automated tools for detecting manipulated videos that contain propaganda or misinformation.
https://news.ucr.edu/articles/2022/05/03/new-method-detects-deepfake-videos-99-accuracy
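The article doesn't detail the method, but the two-pronged structure suggests a fusion like the sketch below, where separate (stand-in) detectors score expression manipulation and identity swaps, and a frame is flagged if either fires:

```python
# Illustrative fusion for a two-pronged detector: one branch scores expression
# manipulation, the other scores identity swaps; a frame is flagged if either
# branch fires. The branch models are stand-ins, not UC Riverside's networks.

def detect_manipulation(frame, expression_model, identity_model, threshold=0.5):
    """Return (is_fake, reason) by taking the max over two specialized scores."""
    p_expr = expression_model(frame)  # P(expression manipulated), stand-in model
    p_id = identity_model(frame)      # P(identity swapped), stand-in model
    if max(p_expr, p_id) >= threshold:
        return True, ("expression" if p_expr >= p_id else "identity swap")
    return False, "none"

# Demo with dummy models: flags the frame as an expression manipulation.
print(detect_manipulation("frame.png", lambda f: 0.9, lambda f: 0.1))
```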

apr22

Last Friday (the 29th), researcher and scientist Wang Weimin, of Singapore, took first place in a deepfake-recognition challenge after developing a powerful artificial intelligence (AI) model. Weimin's model, which beat 469 other teams from around the world over the five-month event, reached an accuracy of 98.53%.

The Trusted Media Challenge, organized by AI Singapore (an office of the National Research Foundation's AI programme), consisted of detecting deepfakes, that is, digitally altered video clips, including content in which faces, voices, or both had been manipulated.

https://olhardigital.com.br/2022/04/30/seguranca/modelo-de-inteligencia-artifical-premiado-em-singapura-reconhece-deepfakes-com-985-de-precisao/


apr22
An Efficient Deepfake Video Detection Approach with Combination of EfficientNet and Xception Models Using Deep Learning
https://ieeexplore.ieee.org/abstract/document/9743542



feb22

Deep learning is an effective technique used in various fields, including natural language processing, computer vision, image processing and machine vision. Deepfakes use deep learning to synthesize and manipulate images of a person such that human beings cannot distinguish the fake from the real. Deepfakes are generated using generative adversarial networks (GANs) and may threaten the public, so detecting deepfake image content plays a vital role. Much research has been done on detecting deepfakes in manipulated images, but existing techniques are often inaccurate and slow. In this work we implement deepfake face image detection using the Fisherface with Local Binary Pattern Histogram (FF-LBPH) deep learning technique. The Fisherface algorithm recognizes the face by reducing the dimensionality of the face space using LBPH; a deep belief network (DBN) with restricted Boltzmann machines (RBMs) is then applied as the deepfake detection classifier.
https://peerj.com/articles/cs-881/
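A small sketch of the LBP-histogram half of this pipeline: extract a Local Binary Pattern histogram from a grayscale face crop as the feature vector. The paper pairs such features with Fisherface reduction and a DBN/RBM classifier; an sklearn SVM stands in for that classifier here, and the data is synthetic:

```python
# Sketch of the LBP feature-extraction step: a Local Binary Pattern histogram
# from a grayscale face crop becomes the feature vector. An SVM stands in for
# the paper's DBN/RBM classifier; faces and labels below are toy data.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(gray_face: np.ndarray, points=8, radius=1) -> np.ndarray:
    lbp = local_binary_pattern(gray_face, points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=points + 2, range=(0, points + 2), density=True)
    return hist

# faces: 2-D uint8 arrays; labels: 1 = deepfake, 0 = real (random toy data here)
rng = np.random.default_rng(0)
faces = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(20)]
labels = rng.integers(0, 2, 20)
clf = SVC(probability=True).fit([lbp_histogram(f) for f in faces], labels)
print(clf.predict_proba(lbp_histogram(faces[0]).reshape(1, -1)))
```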

nov21


One promising approach involves tracking a video’s provenance, “a record of everything that happened from the point that the light hit the camera to when it shows up on your display,” explained James Tompkin, a visual computing researcher at Brown.
But problems persist. “You need to secure all the parts along the chain to maintain provenance, and you also need buy-in,” Tompkin said. “We’re already in a situation where this isn’t the standard, or even required, on any media distribution system.”
And beyond simply ignoring provenance standards, wily adversaries could manipulate the provenance systems, which are themselves vulnerable to cyberattacks. “If you can break the security, you can fake the provenance,” Tompkin said. “And there’s never been a security system in the world that’s never been broken into at some point.”

Given these issues, a single silver bullet for deepfakes appears unlikely. Instead, each strategy at our disposal must be just one of a "toolbelt of techniques we can apply," Tompkin said.
https://brownpoliticalreview.org/2021/11/hunters-laptop-deepfakes-and-the-arbitration-of-truth/
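A toy sketch of the provenance idea Tompkin describes: each processing step appends a signed record that links back to the previous one, so altering any step breaks the chain. Real provenance standards are far richer, and the key handling and record format here are illustrative only:

```python
# Toy provenance chain: each processing step appends a signed record linking
# back to the previous one, so any tampering breaks verification. The key and
# record format are illustrative; real systems use hardware-protected keys.
import hashlib
import hmac
import json

SIGNING_KEY = b"device-secret-key"  # stand-in for a hardware-protected key

def append_record(chain: list, step: str, media_bytes: bytes) -> None:
    record = {
        "step": step,                                 # e.g. "capture", "resize"
        "media_sha256": hashlib.sha256(media_bytes).hexdigest(),
        "prev": chain[-1]["sig"] if chain else None,  # link to the previous record
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    chain.append(record)

def verify(chain: list) -> bool:
    prev_sig = None
    for record in chain:
        body = {k: v for k, v in record.items() if k != "sig"}
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        if record["sig"] != expected or record["prev"] != prev_sig:
            return False
        prev_sig = record["sig"]
    return True

chain: list = []
append_record(chain, "capture", b"raw sensor bytes")
append_record(chain, "resize", b"resized bytes")
print(verify(chain))  # True until any record is altered
```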

sep21

DeepFakes: Detecting Forged and Synthetic Media Content Using Machine Learning

S Zobaed, MF Rabby, MI Hossain, E Hossain, S Hasan… - arXiv preprint arXiv …, 2021

sep21

AI can detect a deepfake face because its pupils have jagged edges. Creating a fake persona online with a computer-generated face is easier than ever before, but there is a simple way to catch these phony pictures – look at the eyes. The inability of artificial intelligence to draw circular pupils gives away whether or not a face comes from a real photograph. Generative adversarial networks (GANs) – a type of AI that can generate images from a simple prompt – can produce realistic-looking faces. Because they are made through a process of continual …

Read more: https://www.newscientist.com/article/2289815-ai-can-detect-a-deepfake-face-because-its-pupils-have-jagged-edges/
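The cue is measurable: segment the pupil, then score how circular its outline is. A sketch using a crude threshold segmentation and an assumed tight eye crop; a real pipeline would first locate the eyes with a landmark detector:

```python
# Sketch of the jagged-pupil cue: segment the darkest blob in an eye crop and
# measure how circular its outline is. Real photos tend toward near-circular
# pupils; GAN faces often show irregular boundaries. The threshold value and
# the assumption of a tight eye crop are illustrative simplifications.
import cv2
import numpy as np

def pupil_circularity(eye_crop_gray: np.ndarray) -> float:
    """Return 4*pi*area/perimeter^2 for the darkest blob (1.0 = perfect circle)."""
    _, mask = cv2.threshold(eye_crop_gray, 50, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return 0.0
    pupil = max(contours, key=cv2.contourArea)  # assume the largest dark blob
    area = cv2.contourArea(pupil)
    perimeter = cv2.arcLength(pupil, closed=True)
    return 4 * np.pi * area / (perimeter ** 2 + 1e-9)

# Low circularity (a jagged outline) hints the face may be GAN-generated.
eye = cv2.imread("eye_crop.png", cv2.IMREAD_GRAYSCALE)
if eye is not None:
    print(f"pupil circularity: {pupil_circularity(eye):.2f}")
```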

may21

From QAnon conspiracy theories to Russian government sponsored election interference, social media disinformation campaigns are a part of online life, and identifying these threats amid the posts that billions of social media users upload each day is a challenge. To help sort through massive amounts of data, social media platforms are developing AI systems to automatically remove harmful content, primarily through text-based analysis. But these techniques won't identify all the disinformation on social media. After all, much of what people post are photos, videos, audio recordings, and memes. Developing the entirely new AI systems necessary to detect such multimedia disinformation will be difficult.

Meme warfare: AI countermeasures to disinformation should focus on popular, not perfect, fakes
Michael Yankoski, Walter Scheirer and Tim Weninger
Bulletin of the Atomic Scientists, 2021, Vol. 77, No. 3, 119-123
https://doi.org/10.1080/00963402.2021.1912093

