Monday, May 17, 2021

Making money (investment; advertising)



Apr23

(ARTIST in favor, as long as they get paid...)

While some music artists and their labels grow increasingly wary of those using AI models to clone their voices, others are choosing to ride the wave and embrace an AI-filled future.

Experimental artist and futurist Claire "Grimes" Boucher, for one, is looking at the new trend as an opportunity.

"I'll split 50 percent royalties on any successful AI-generated song that uses my voice," the artist tweeted in response to a story about Aubrey "Drake" Graham's voice being cloned for a viral song. "Same deal as I would with any artist I collab with."

"Feel free to use my voice without penalty," she added. "I have no label and no legal bindings."

In fact, Grimes admitted she was already "making a program that should simulate my voice well" in a follow-up, "but we could also upload stems and samples for people to train their own."
https://futurism.com/the-byte/grimes-split-royalties-deepfake
Mar23

Dec22

Trey Parker and Matt Stone, creators of South Park and various other media over the years, have raised $20 million to continue work on their professional deepfake studio for creators, Deep Voodoo.

The company got its start during the media shutdown of 2020, when the pandemic prevented most travel and on-set productions. Parker and Stone had already begun assembling an AI artist team for a film they were developing, and when COVID intervened they focused on creating the tools for use later.

“We stumbled upon this amazing technology and ended up recruiting the best deepfake artists in the world,” Stone said in an announcement on Deep Voodoo’s site.

https://techcrunch.com/2022/12/21/south-park-creators-deepfake-video-startup-deep-voodoo-conjures-20m-in-new-funding/


Oct22

‘Deepfakes’ of Celebrities Have Begun Appearing in Ads, With or Without Their Permission

Digital simulations of Elon Musk, Tom Cruise, Leo DiCaprio and others have shown up in ads, as the image-melding technology grows more popular and presents the marketing industry with new legal and ethical questions


https://www.wsj.com/articles/deepfakes-of-celebrities-have-begun-appearing-in-ads-with-or-without-their-permission-11666692003

Deepfakes in advertising could change the industry, creating new legal and ethical issues

https://mezha.media/en/2022/10/26/deepfakes-in-advertising-could-change-the-industry-creating-new-legal-and-ethical-issues/


Mar22


Early last year, MyHeritage made use of deepfake technology to bring old photos of deceased loved ones to life. Deep Nostalgia went viral and more than 100 million animations have since been created. Now the genealogy company is back with LiveStory, which adds vocal storytelling to the mix.
"LiveStory takes storytelling to the next level," said MyHeritage founder and CEO, Gilad Japhet. "With this latest viral feature, MyHeritage continues to lead the world of online family history in both vision and innovation. Our use of AI to breathe new life into historical photos is unique and is helping millions of people cultivate a renewed emotional connection with their ancestors and deceased loved ones. Genealogy is all about telling and preserving our family stories. We keep showing the world how fun and compelling genealogy can be." https://newatlas.com/computers/myheritage-livestory/


Feb22

Deepdub, an AI-based entertainment localisation startup based in Tel Aviv, Israel, has raised USD 20 million in Series A funding led by New York-based global venture capital and private equity firm Insight Partners, with participation from existing investors Booster Ventures and Stardom Ventures and new investor Swift VC.

Angel investors joining this round include Emiliano Calemzuk (former President of Fox Television Studios), Kevin Reilly (former CCO of HBO Max), Danny Grander (co-founder of Snyk), Roi Tiger (VP, Engineering at Meta), Gideon Marks and Daniel Chadash.

The fresh funds will be used to expand the global reach of the company’s sales and delivery teams. Deepdub plans to strengthen its R&D team with excellent researchers and developers and will improve its deep-learning-based localisation platform.

https://analyticsindiamag.com/startup-that-uses-deepfakes-for-movie-dubbing-raises-usd-20-mn-in-series-a/






Jan22
AS AN INVESTMENT

Metaphysic, the company behind the Tom Cruise deepfakes, has raised $7.5 million. The London-based company develops artificial intelligence for hyperreal virtual experiences in the metaverse, the universe of virtual worlds that are all interconnected, like in novels such as Snow Crash and Ready Player One. https://venturebeat.com/2022/01/25/metaphysic-ai-startup-behind-tom-cruise-deepfakes-raises-7-5m/


Jan22
ADVERTISING
With Shah Rukh Khan, who endorsed small local businesses in the ad: "This Diwali, shop at the nearby store Choice of Fashion." "... shop at the nearby store Ajkal Fashion." "... Royal Fashion..." "... MK Cloths..." The campaign was meant to help India's small businesses recover from COVID, because they don't have access to what big companies do: superstar brand ambassadors. "... Making India's biggest brand ambassador their brand ambassador!" But how did Cadbury do this? They used a machine learning technique to recreate Shah Rukh Khan's face and voice. Using the machine learning algorithm and the viewer's PIN code, different versions of the same ad were created with the names of local stores: "This Diwali, shop at the nearby store Choice of Fashion."
https://interte.com/2022/01/24/how-deepfakes-could-change-the-internet/

Sep21

Bruce Willis advertises a Russian cellular network operator without appearing in front of the camera. The Hollywood star has licensed his likeness for deepfake videos and images. A series of at least 15 commercials is planned, which together tell a story of two agents. One is played by the Russian comedian Asamat Musaghaliev, the other by his compatriot Konstantin Solowjow. His face cannot be seen – it is replaced by Willis’ face (so-called face-swapping). https://marketresearchtelecast.com/bruce-willis-licenses-himself-for-deepfake-commercials/163276/

Sep21

The synthetic voice industry sees how nefarious deepfake creators have been able to define the market for realistic video animations of people, and everyone wants to avoid the same fate. For many people familiar with deepfakes, the key market comprises real porn videos digitally altered to show celebrities acting out lessons of the birds and the bees. It might have meant the hypothetical face-scanning of deceased mafia-busting U.S. Sen. Robert F. Kennedy into the movie The Godfather, but the legitimate industry was outflanked. https://www.biometricupdate.com/202109/synthetic-voice-industry-wants-as-much-distance-from-deepfakes-as-possible



Aug21
Liri can juggle so many jobs, in multiple countries, because she has hired out her face to Hour One, a startup that uses people’s likenesses to create AI-voiced characters that then appear in marketing and educational videos for organizations around the world. It is part of a wave of companies overhauling the way digital content is produced. And it has big implications for the human workforce. Liri does her waitressing and bar work in person, but she has little idea what her digital clones are up to. “It is definitely a bit strange to think that my face can appear in videos or ads for different companies,” she says. Hour One is not the only company taking deepfake tech mainstream, using it to produce mash-ups of real footage and AI-generated video. Some have used professional actors to add life to deepfaked personas. But Hour One doesn’t ask for any particular skills. You just need to be willing to hand over the rights to your face. https://www.technologyreview.com/2021/08/27/1033879/people-hiring-faces-work-deepfake-ai-marketing-clones/

Oct21
If you have been on the internet for some time, I’m sure you have come across the term “deepfake” at least once. The AI-based technology, initially used for academic purposes, has gained a fair amount of traction over the past few years and is continuously being developed for various use cases, be it making Elon Musk sing or inserting your face into a famous movie clip. Here comes the weird part, though: some companies are buying the faces of real individuals to create an army of deepfake clones for the marketing and education industries. So yeah, you can now sell your face to a company! https://beebom.com/this-company-buys-face-to-create-deepfake-clone/ + 
 An army of deepfake talking heads is coming for your feed, and it wants you. Hour One lets people rent out their faces to help businesses make highly realistic videos. https://www.fastcompany.com/90694393/hour-one-is-building-an-army-of-deepfake-like-talking-heads-maybe-including-you + https://br.financas.yahoo.com/noticias/um-ex%C3%A9rcito-deepfakes-est%C3%A1-chegando-180325657.html 


Aug21

American musician and composer Holly Herndon appears to be capitalizing on deepfake technology early, letting fans use a digital version of her to create original works of art.

According to a Thursday announcement from Herndon on Twitter, users who want to make their own deepfakes using the composer's distinctive voice and image will have the opportunity to sell their creations, minted as non-fungible tokens (NFTs), on the Zora marketplace. Herndon said fans can submit their digital copies for approval by the project's DAO and will receive 50% of the auction profits.

The project said it would initially launch three "genesis" Holly+ NFTs, along with proposals from the public, to be minted using a smart contract and auctioned on Zora next month. Users will receive half of any profit, with 40% donated to the DAO and the rest going to Herndon herself. The reserve price for two of the genesis NFTs is 15 Ether (ETH), approximately $48,150 at the time of publication. https://cointelegraph.com.br/news/musician-sells-rights-to-deepfake-her-voice-using-nfts



May21
Now There's a Deepfake Audio Platform Where Celebrities Can License AI-Generated Voice Clips
https://gizmodo.com/veritone-launches-deepfake-audio-platform-for-celebriti-1846905864

Sunday, May 16, 2021

Can AI fight disinformation (whether or not it is AI-generated)? (SEE TECHNOLOGY)

Dec23

If, after naked-eye inspection, doubts remain about an image's provenance, some websites can prove useful in detection. These tools analyze images and immediately determine the probability that they were generated by AI.

The site AI or Not accepts images uploaded from disk or pasted as URLs and identifies creations from the Stable Diffusion, MidJourney or DALL-E generators. Its creator, Andrey Doronichev, a Russian based in San Francisco, has described it as "an airport X-ray machine for digital content." AI or Not analyzes 20 images in JPG or PNG format for free. Besides images, the site also analyzes audio files.

The site Is It AI? analyzes the provenance of images and text. It "examines image characteristics, such as color patterns, shapes and textures, comparing them with real photographic images and AI-generated images" to determine whether they were generated by AI. Its use is free and unlimited.

Illuminarty analyzes an unlimited number of images and texts for free, gauging the probability that they originated from artificial intelligence. Paid plans include classifying and localizing the origin of deepfake images, text and video.

These are just three examples among the many options available online. Other sites, such as Content at Scale or Detecting-AI, can be used for the same purpose.

https://www.publico.pt/2023/12/12/p3/noticia/fotografia-imagem-gerada-ia-aprende-distinguir-2072804?utm_source=notifications&utm_medium=web&utm_campaign=2072804

Oct23

It’s Nothing but a Deepfake! The Effects of Misinformation and Deepfake Labels Delegitimizing an Authentic Political Speech

Michael Hameleers, Franziska Marquart

Abstract


Mis- and disinformation labels are increasingly weaponized and used as delegitimizing accusations targeted at mainstream media and political opponents. To better understand how such accusations can affect the credibility of real information and policy preferences, we conducted a two-wave panel experiment (N = 788 in wave 2) to assess the longer-term effect of delegitimizing labels targeting an authentic video message. We find that exposure to an accusation of misinformation or disinformation lowered the perceived credibility of the video but did not affect policy preferences related to the content of the video. Furthermore, more extreme disinformation accusations were perceived as less credible than milder misinformation labels. The effects lasted over a period of three days and still occurred when there was a delay in the label attribution. These findings indicate that while mis- and disinformation labels might make authentic content less credible, they are themselves not always deemed credible and are less likely to change substantive policy preferences.

https://ijoc.org/index.php/ijoc/article/view/20777

Sep23

Fight Fire With Fire: Why Recognizing And Mimicking Deepfakes' DNA Is The Way To Win

https://www.forbes.com/sites/forbestechcouncil/2023/09/11/fight-fire-with-fire-why-recognizing-and-mimicking-deepfakes-dna-is-the-way-to-win/?sh=47030fd567c7


May23

As generative AI developers such as ChatGPT, DALL-E 2, and AlphaCode barrel ahead at a breakneck pace, keeping the technology from hallucinating and spewing erroneous or offensive responses is nearly impossible.

Especially as AI tools get better by the day at mimicking natural language, it will soon be impossible to discern fake results from real ones, prompting companies to set up “guardrails” against the worst outcomes, whether they be accidental or intentional efforts by bad actors.

AI industry experts speaking at the MIT Technology Review's EmTech Digital conference this week weighed in on how generative AI companies are dealing with a variety of ethical and practical hurdles even as they push ahead on developing the next generation of the technology.
https://www.computerworld.com/article/3695508/ai-deep-fakes-mistakes-and-biases-may-be-unavoidable-but-controllable.html

May23

Some viral TikTok videos may soon show a new type of label: that they were made by AI.

The ByteDance-owned app is developing a tool for content creators to disclose they used generative artificial intelligence in making their videos, according to a person with direct knowledge of the efforts. The move comes as people increasingly turn to AI-generated videos for creative expression, which has sparked copyright battles as well as concerns about misinformation.
https://www.theinformation.com/articles/tiktok-is-developing-ai-generated-video-disclosures-as-deepfakes-rise



Jan23

Start-up DuckDuckGoose can spot deepfakes using artificial intelligence


https://innovationorigins.com/en/start-up-duckduckgoose-can-spot-deepfakes-using-artificial-intelligence/

Dec22

A small academic and corporate team of researchers say they have created a way to preserve the biometric privacy of people whose faces are posted on social media.

And while that innovation is worthy of examination, so are a couple of phrases the team has developed for their facial anonymization: “a responsible use for deepfakes by design” and “My Face, My Choice.”

For most people, deepfakes exist because humans like to be fooled. For the rest, they exist to dominate a future when objective proof or truth no longer exist.

Two scientists from the State University of New York at Binghamton and a third from Intel Labs say in a non-peer-reviewed paper that they recognize the identity and privacy dangers posed by face image scrapers like Clearview AI, which harvest billions of faces for their own purposes and without permission.

The answer, they say, is qualitatively dissimilar deepfakes. That is, using deepfake algorithms to alter faces just enough that the faces cannot be facially recognized by software. The result is a facial image in a group photo that is true enough to the original (and free of AI weirdness) that anyone familiar with a person would quickly accept it as representative.

The researchers also have proposed metrics for doing this under which a deepfake (though, again, still recognizable by many humans) is randomly generated with a guaranteed dissimilarity.

https://www.biometricupdate.com/202212/a-proposal-for-responsible-deepfakes

Nov22
Machine learning’s recent developments have given rise to the phenomenon of “deepfakes,” which is a cause for serious concern. Deepfakes are fake digital items that look like real images, and a string of such videos has sprung up on social media in recent years. Deepfakes can be generated by anyone and easily distributed on social media, owing to the low level of technical expertise required. This paper reviews various techniques used for the detection of deepfakes; many authors have used data sets of images and videos such as FaceForensics++ and thispersondoesnotexist.com, while in various studies authors have used their own data for analysis. To accurately detect deepfakes, there is a need for smart technology. The study surveys existing techniques such as convolutional neural networks, recurrent neural networks and support vector machines for the detection of deepfakes. In recent months, machine learning has gained popularity for detecting plausible face swaps in videos that leave minimal indications of tampering. To combat deepfakes, efficient algorithms for early-stage detection are needed to stop blackmail, political unrest, etc.
https://link.springer.com/chapter/10.1007/978-981-19-5037-7_46 
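As a concrete picture of the CNN-based detectors such surveys cover, here is a minimal sketch of a binary real/fake frame classifier in PyTorch. The architecture and training details are illustrative assumptions, not taken from any particular paper in the review.

import torch
import torch.nn as nn

class FrameClassifier(nn.Module):
    """Tiny CNN that scores a face crop as real (0) or fake (1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)  # single logit: evidence of "fake"

    def forward(self, x):  # x: (batch, 3, H, W) face crops
        return self.head(self.features(x).flatten(1))

# Training would use BCEWithLogitsLoss on labeled face crops from a corpus
# such as FaceForensics++; a video-level score is then typically the
# average of its per-frame logits.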


Nov22

Intel claims it has developed an AI model that can detect in real time whether a video is using deepfake technology by looking for subtle changes in color that would be evident if the subject were a live human being.

FakeCatcher is claimed by the chipmaking giant to be capable of returning results in milliseconds and to have a 96 percent accuracy rate.
https://www.theregister.com/2022/11/15/intel_fakecatcher/
https://www.techspot.com/news/96655-intel-detection-tool-uses-blood-flow-identify-deepfakes.html
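The articles describe the cue, blood flow read off the skin (remote photoplethysmography, rPPG), without publishing FakeCatcher's internals. A minimal sketch of that idea, assuming stabilized face crops are already available, follows; the production system uses far richer chrominance features and a trained classifier, and the 0.5 threshold here is an arbitrary illustration.

import numpy as np

def has_plausible_pulse(face_frames: np.ndarray, fps: float) -> bool:
    """face_frames: (T, H, W, 3) uint8 video of a stabilized face crop."""
    # Mean green-channel intensity per frame is a crude rPPG signal:
    # blood flow subtly modulates how much green light skin absorbs.
    signal = face_frames[..., 1].mean(axis=(1, 2)).astype(np.float64)
    signal -= signal.mean()
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 4.0)  # plausible pulse, ~42-240 bpm
    # A live face should concentrate spectral energy in the heart-rate band.
    return spectrum[band].sum() / max(spectrum.sum(), 1e-9) > 0.5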



Nov22

"The researchers proposed that anybody planning to upload an image to the internet could run their photo through their program, basically immunizing it to AI image generators. (...) he system he helped develop only takes a few seconds to introduce noise into a photo. Higher resolution images work even better, he said, since they include more pixels that can be minutely disturbed. (..) Salman said he could imagine a future where companies, even the ones who generate the AI models, could certify that uploaded images are immunized against AI models.  (...) The researchers’ program proves that there are ways to defeat deepfakes before they happen. (,.,,) More so, creating these data poisoning systems will create an “arms race” between commercial AI image generators and those trying to prevent deepfakes. “It’s possible, if not likely, that in the future we’ll be able to evade whatever defenses you put on that one particular image,” Kamath said. “And once it’s out there, you can’t take it back.” Of course, there are some AI systems that can detect deepfake videos, and there are ways to train people to detect the small inconsistencies that show a video is being faked. The question is: will there come a time when neither human nor machine can discern if a photo or video has been manipulated? https://gizmodo.com/deepfakes-ai-dall-e-ai-art-generator-1849764276


Oct22

Algorithm detects images and videos altered with artificial intelligence

https://revistapesquisa.fapesp.br/deepfakes-o-novo-estagio-tecnologico-das-noticias-falsas/


Oct22

According to the professor, the problem lies in the algorithms these technologies use, because they privilege audiences and clicks.

"Journalism may be facing a professional precipice. False content tends to spread more and gets more visibility, so we have to be very careful when we use this type of technology," he said.

It is in this context that artificial intelligence (AI) can help in the fight against disinformation, by trying to find patterns and then applying them in different situations.

"AI is a tireless observer; it learns and applies the acquired knowledge to future situations," explained Juan Gomez Romero, an artificial intelligence specialist at the University of Granada, during the debate.

Its use can be decisive in identifying and combating hoaxes and rumors, given the constant search for new formats and audiences through which to spread them, particularly among the young.

For journalist Pablo Martinez of the Spanish site Maldita.es [a fact-checking site], who was also on the panel, the current challenge is precisely so-called entertainment, because "disinformers" increasingly use these formats to create hidden narratives to which young people are susceptible.

https://www.rtp.pt/noticias/economia/jornalismo-pode-estar-perante-um-precipicio-profissional-alerta-especialista_n1440973


Oct22

A recent study found that ordinary human observers and leading computer vision deepfake detection AI models are similarly accurate but make different types of mistakes. People who had access to machine model predictions were more accurate, suggesting that AI-assisted collaborative decision-making could be useful but will be unlikely to be foolproof.

Researchers found that when AI makes wrong predictions and humans have access to those models' predictions, humans end up revising their answers incorrectly. This suggests that machine predictions can affect human decision-making, an important factor when designing systems of human-AI collaboration.

The problem of falsified media existed long before these AI tools. Like any technological advance, people find both positive and negative applications. AI has created exciting new possibilities with applications in creative and filmmaking industries and, at the same time, raises the need for reliable detection, protection of privacy rights, and risk management against harmful use cases.

Current research suggests that humans and machine models are imperfect at detecting AI-altered videos. One answer may be a collaborative approach between AI and human detection in order to address the shortcomings of each. Since it is unlikely for any detection model to be foolproof, education about deepfake technology can help us become more aware that seeing is not always believing—a reality that was true long before the arrival of deepfake AI tools.

https://www.psychologytoday.com/us/blog/urban-survival/202210/are-humans-or-ai-better-detecting-deepfakes-videos


Sep22

Deepfakes detected via reverse modeling of the vocal tract are ‘comically’ non-human


https://www.biometricupdate.com/202209/deepfakes-detected-via-reverse-modeling-of-the-vocal-tract-are-comically-non-human
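The headline's technique reconstructs vocal-tract shapes, which the article does not detail. As a loose illustration of the same intuition, a sketch can estimate formant frequencies with linear predictive coding (LPC) and flag audio whose formants sit far outside human ranges; the use of librosa and the frequency ranges below are assumptions for illustration, not the paper's method.

import numpy as np
import librosa

def first_formants(frame: np.ndarray, sr: int, order: int = 12) -> np.ndarray:
    """Rough formant estimates (Hz) for one short speech frame."""
    a = librosa.lpc(frame, order=order)
    roots = [r for r in np.roots(a) if np.imag(r) > 0]
    freqs = np.sort(np.angle(roots) * sr / (2 * np.pi))
    return freqs[:3]  # roughly F1-F3

# For human vowels F1 falls near 250-900 Hz and F2 near 600-2500 Hz;
# synthetic speech whose frame-by-frame formants imply an anatomically
# impossible tract is what "comically non-human" looks like in numbers.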

Aug22

Using Deep Learning to Detect Deepfakes

(in the folder)

Jul22

Researchers from Samsung Labs have developed a way to create high-resolution avatars, or deepfakes, from a single still frame photo or even a painting.

https://petapixel.com/2022/07/22/megaportraits-high-res-deepfakes-created-from-a-single-photo/




Jun22

Deepfake Detection through Deep Learning

http://www.dspace.dtu.ac.in:8080/jspui/handle/repository/19172


May22

Advanced Machine Learning Techniques to Detect Various Types of Deepfakes

https://dl.acm.org/doi/abs/10.1145/3494109.3527196


May22
Limits and Possibilities for “Ethical AI” in Open Source: A Study of Deepfakes

ABSTRACT

Open source software communities are a significant site of AI development, but “Ethical AI” discourses largely focus on the problems that arise in software produced by private companies. Design, policy and tooling interventions to encourage “Ethical AI” based on studies in private companies risk being ill-suited for an open source context, which operates under radically different organizational structures, cultural norms, and incentives.

In this paper, we show that significant and understudied harms and possibilities originate from differing practices of transparency and accountability in the open source community. We conducted an interview study of an AI-enabled open source Deepfake project to understand how members of that community reason about the ethics of their work. We found that notions of the “Freedom 0” to use code without any restriction, alongside beliefs about technology neutrality and technological inevitability, were central to how community members framed their responsibilities, and the actions they believed were and were not available to them. We propose a continuum between harms resulting from how a system is implemented versus how it is used, and show how commitments to radical transparency in open source allow great ethical scrutiny for harms wrought by implementation bugs, but allow harms through (mis)use to proliferate, requiring a deeper toolbox for disincentivizing harmful use. We discuss how an assumption of control over downstream uses is often implicit in discourses of “Ethical AI”, but outline alternative possibilities for action in cases such as open source where this assumption may not hold.

May22
There are calls for three types of defensive response: regulation, technical controls, and improved digital or media literacy. Each is problematic by itself. This article asks what kind of literacy can address deepfake harms, proposing an artificial intelligence (AI) and data literacy framework to explore the potential for social learning with deepfakes and identify sites and methods for intervening in their cultures of production. 
https://journals.sagepub.com/doi/abs/10.1177/14614448221093943

May22

New method detects deepfake videos with up to 99% accuracy.
Two-pronged technique detects manipulated facial expressions and identity swaps. Computer scientists at UC Riverside can detect manipulated facial expressions in deepfake videos with higher accuracy than current state-of-the-art methods. The method also works as well as current methods in cases where the facial identity, but not the expression, has been swapped, leading to a generalized approach to detect any kind of facial manipulation. The achievement brings researchers a step closer to developing automated tools for detecting manipulated videos that contain propaganda or misinformation.
https://news.ucr.edu/articles/2022/05/03/new-method-detects-deepfake-videos-99-accuracy

Apr22

Last Friday (the 29th), researcher and scientist Wang Weimin of Singapore won first place in a deepfake-recognition challenge after developing a powerful artificial intelligence (AI) model. Weimin's model, which beat 469 other teams from around the world over the five-month event, reached an accuracy of 98.53%.

The Trusted Media Challenge, organized by AI Singapore (a programme office of the National Research Foundation), consisted of detecting deepfakes, that is, digitally altered video clips, including content with manipulated faces, voices, or both.

https://olhardigital.com.br/2022/04/30/seguranca/modelo-de-inteligencia-artifical-premiado-em-singapura-reconhece-deepfakes-com-985-de-precisao/


Apr22
An Efficient Deepfake Video Detection Approach with Combination of EfficientNet and Xception Models Using Deep Learning
https://ieeexplore.ieee.org/abstract/document/9743542



Feb22

Deep learning is an effective technique used in various fields, including natural language processing, computer vision, image processing and machine vision. Deepfakes use deep learning to synthesize and manipulate images of a person such that human beings cannot distinguish them from the real one. Deepfakes are generated using generative adversarial networks (GANs) and may threaten the public, so detecting deepfake image content plays a vital role. Many research works have addressed the detection of deepfake image manipulation. The main issues with existing techniques are inaccuracy and high processing time. In this work we implement deepfake face image detection using the deep learning technique of Fisherface with Local Binary Pattern Histograms (FF-LBPH). The Fisherface algorithm is used to recognize the face by reducing the dimensionality of the face space using LBPH. A deep belief network (DBN) with restricted Boltzmann machines (RBM) is then applied as the deepfake detection classifier.
https://peerj.com/articles/cs-881/
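For readers unfamiliar with LBPH, the sketch below stands in for the feature step, pairing uniform local-binary-pattern histograms with a plain scikit-learn classifier. The paper's actual pipeline (Fisherface dimensionality reduction plus a DBN/RBM classifier) is more elaborate; this substitution is for illustration only.

import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.linear_model import LogisticRegression

def lbp_histogram(face_gray: np.ndarray, P: int = 8, R: int = 1) -> np.ndarray:
    """Uniform-LBP texture histogram for one grayscale face crop."""
    lbp = local_binary_pattern(face_gray, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# real_faces / fake_faces: lists of grayscale face crops from a dataset.
# X = [lbp_histogram(f) for f in real_faces + fake_faces]
# y = [0] * len(real_faces) + [1] * len(fake_faces)
# clf = LogisticRegression().fit(X, y)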

Nov21


One promising approach involves tracking a video’s provenance, “a record of everything that happened from the point that the light hit the camera to when it shows up on your display,” explained James Tompkin, a visual computing researcher at Brown.
But problems persist. “You need to secure all the parts along the chain to maintain provenance, and you also need buy-in,” Tompkin said. “We’re already in a situation where this isn’t the standard, or even required, on any media distribution system.”
And beyond simply ignoring provenance standards, wily adversaries could manipulate the provenance systems, which are themselves vulnerable to cyberattacks. “If you can break the security, you can fake the provenance,” Tompkin said. “And there’s never been a security system in the world that’s never been broken into at some point.”

Given these issues, a single silver bullet for deepfakes appears unlikely. Instead, each strategy at our disposal must be just one of a “toolbelt of techniques we can apply,” Tompkin said. https://brownpoliticalreview.org/2021/11/hunters-laptop-deepfakes-and-the-arbitration-of-truth/
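As a toy version of the provenance chain Tompkin describes, each step from capture to display can hash the media together with the previous record and sign it. Real standards such as C2PA are far richer, and, as he warns, a stolen key breaks the scheme; the Ed25519 chain below is only a sketch of the concept.

import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def sign_step(prev_digest: bytes, media: bytes, key: Ed25519PrivateKey):
    """Chain one capture/edit step to the record that precedes it."""
    digest = hashlib.sha256(prev_digest + media).digest()
    return digest, key.sign(digest)

camera_key = Ed25519PrivateKey.generate()
d0, sig0 = sign_step(b"", b"raw sensor bytes", camera_key)   # at capture
editor_key = Ed25519PrivateKey.generate()
d1, sig1 = sign_step(d0, b"edited bytes", editor_key)        # each later edit

# Verification walks the chain; any unsigned gap or compromised key is
# exactly the weak link the article warns about.
camera_key.public_key().verify(sig0, d0)  # raises InvalidSignature if forged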

Sep21

DeepFakes: Detecting Forged and Synthetic Media Content Using Machine Learning

S Zobaed, MF Rabby, MI Hossain, E Hossain, S Hasan… - arXiv preprint arXiv …, 2021

Sep21

AI can detect a deepfake face because its pupils have jagged edges. Creating a fake persona online with a computer-generated face is easier than ever before, but there is a simple way to catch these phony pictures – look at the eyes. The inability of artificial intelligence to draw circular pupils gives away whether or not a face comes from a real photograph. Generative adversarial networks (GANs) – a type of AI that can generate images from a simple prompt – can produce realistic-looking faces. Because they are made through a process of continual …

Read more: https://www.newscientist.com/article/2289815-ai-can-detect-a-deepfake-face-because-its-pupils-have-jagged-edges/#ixzz764WXBLuc
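Because the cue is geometric, it can be sketched without any neural network: segment the pupil, fit an ellipse, and measure how far the real contour deviates from it. The crude segmentation and the IoU-style score below are illustrative assumptions, not the published method.

import cv2
import numpy as np

def pupil_irregularity(eye_gray: np.ndarray) -> float:
    """0..1 score; higher means a more jagged, less elliptical pupil."""
    # Crude segmentation: the pupil is the darkest blob in the eye crop.
    _, mask = cv2.threshold(eye_gray, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return 0.0
    pupil = max(contours, key=cv2.contourArea)
    if len(pupil) < 5:  # cv2.fitEllipse needs at least five points
        return 0.0
    ellipse = cv2.fitEllipse(pupil)
    # Rasterize the contour and the fitted ellipse, then take 1 - IoU.
    filled = np.zeros_like(eye_gray)
    fitted = np.zeros_like(eye_gray)
    cv2.drawContours(filled, [pupil], -1, 255, -1)
    cv2.ellipse(fitted, ellipse, 255, -1)
    inter = np.logical_and(filled > 0, fitted > 0).sum()
    union = np.logical_or(filled > 0, fitted > 0).sum()
    return 1.0 - inter / max(union, 1)

# GAN-generated faces tend to score noticeably higher than photographs.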

May21

From QAnon conspiracy theories to Russian government sponsored election interference, social media disinformation campaigns are a part of online life, and identifying these threats amid the posts that billions of social media users upload each day is a challenge. To help sort through massive amounts of data, social media platforms are developing AI systems to automatically remove harmful content primarily through text-based analysis. But these techniques won’t identify all the disinformation on social media. After all, much of what people post are photos, videos, audio recordings, and memes. Developing the entirely new AI systems necessary to detect such multimedia disinformation will be difficult.

Meme warfare: AI countermeasures to disinformation should focus on popular, not perfect, fakes

Michael Yankoski, Walter Scheirer and Tim Weninger

Bulletin of the Atomic Scientists, 2021, Vol. 77, No. 3, 119–123

https://doi.org/10.1080/00963402.2021.1912093