Friday, January 27, 2023

TEXT

(look for other references from before Jan 23)

jan23

Thanks to a free web app called calligrapher.ai, anyone can simulate handwriting with a neural network that runs in a browser via JavaScript. After typing a sentence, the site renders it as handwriting in nine different styles, each of which is adjustable with properties such as speed, legibility, and stroke width. It also allows downloading the resulting faux handwriting sample in an SVG vector file.

The demo is particularly interesting because it doesn't use a font. Typefaces that look like handwriting have been around for over 80 years, but each letter comes out as a duplicate no matter how many times you use it.

During the past decade, computer scientists have relaxed those restrictions by discovering new ways to simulate the dynamic variety of human handwriting using neural networks.

Created by machine-learning researcher Sean Vasquez, the Calligrapher.ai website utilizes research from a 2013 paper by DeepMind's Alex Graves. Vasquez originally created the Calligrapher site years ago, but it recently gained more attention with a rediscovery on Hacker News.

https://arstechnica.com/information-technology/2023/01/computer-generated-handwriting-demo-offers-deepfakes-for-scrawl/
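The Graves-style model behind demos like this one treats handwriting as a sequence of pen offsets: at each step, an RNN emits the parameters of a mixture of bivariate Gaussians and the next (dx, dy) movement is sampled from it. A minimal numpy sketch of just that sampling step; the mixture parameters below are made-up stand-ins for what the network would actually output:

```python
import numpy as np

def sample_pen_offset(pi, mu, sigma, rho, rng):
    """Sample one (dx, dy) pen offset from a bivariate Gaussian mixture,
    the output distribution used in Graves-style handwriting models."""
    k = rng.choice(len(pi), p=pi)  # pick a mixture component
    cov = np.array([
        [sigma[k, 0] ** 2, rho[k] * sigma[k, 0] * sigma[k, 1]],
        [rho[k] * sigma[k, 0] * sigma[k, 1], sigma[k, 1] ** 2],
    ])
    return rng.multivariate_normal(mu[k], cov)

rng = np.random.default_rng(0)
pi = np.array([0.7, 0.3])                    # mixture weights
mu = np.array([[1.0, 0.0], [0.0, 1.0]])      # per-component means
sigma = np.array([[0.1, 0.1], [0.2, 0.2]])   # per-component std devs
rho = np.array([0.0, 0.5])                   # per-component correlations
dx, dy = sample_pen_offset(pi, mu, sigma, rho, rng)
```

Sampling a fresh offset at every step is what gives each rendered letter the small variations a fixed handwriting font cannot produce.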

In defense of deepfakes

dec23

Not all digital alteration is harmful, though. Part of my work involves identifying how emerging technologies can foster positive change. For instance, with appropriate disclosure, synthetic media could be used to enhance voter education and engagement. Generative AI could help create informative content about candidates and their platforms, or of wider election processes, in different languages and formats, improving inclusivity or reducing barriers for underdog or outsider candidates. For voters with disabilities, synthetic media could provide accessible formats of election materials, such as sign language avatars or audio descriptions of written content. Satirical deepfakes could engage people who might otherwise be disinterested in politics, bringing attention to issues that might not be covered in mainstream media. We need to celebrate and protect these uses.

https://www.cfr.org/blog/protect-democracy-deepfake-era-we-need-bring-voices-those-defending-it-frontlines


jul23

Los Angeles-based video editor and political satirist Justin T. Brown has found himself at the center of a contentious debate thanks to his AI-generated images that portray prominent politicians such as Donald Trump, Barack Obama, and Joe Biden engaged in fictional infidelities.

His provocative series, christened “AI will revolutionize the blackmail industry,” quickly came under fire, leading to Brown’s banishment from the Midjourney AI art platform, which he used to generate the pictures.

Brown said the images were envisioned as a stark warning about the potential misuse of artificial intelligence.

https://finance.yahoo.com/news/political-satirist-slammed-creating-deepfakes-120103536.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAHFVDjMw8auK-p7Cz-o9pfgphkACHGyxpSL2oH3LZFHknvARRstLWDkJ2cSZ4rXvelzrYQuSakE-3z7iQQjzkMmienY7LfR8AvCQkt5p_XMUd1MayU1yDy-1ukpkNuZZaeoKyMvoJkjhtG2AckFPousHDLjvLcBYdBoFvtmd82wZ

 jan23

If I’m right, then the perhaps unsurprising moral of this story is that, just like forged paintings, or cosmetic surgery, or Andy Warhol’s wig, deepfakes only really “work” where their status as fake is at least somewhat hidden — whether because it was mentioned only once to viewers and then half-forgotten about, or because it was never mentioned at all in the first place. What’s perhaps more surprising is that this seems true even where the intent is mainly to get viewers to imagine something. If the viewer is fully conscious that an image is faked, she will be less likely to believe it; but she will also be unlikely even just to suspend her disbelief in the way that imaginative immersion in a dramatic re-enactment requires. When it comes to deepfakes in documentaries, then, unless you can find a way to use them cleverly, it seems to me you should possibly save your money altogether. For some creative purposes, it’s pointless to keep reminding people they are in Fake Barn Country.
https://unherd.com/2023/01/in-defence-of-deepfakes/

Wednesday, January 25, 2023

Deepfakes in fiction

Jul23 De-aging

The most recent Indiana Jones movie shows actor Harrison Ford de-aged by 40-plus years. The movie makers used artificial intelligence to comb through all of the decades-old footage of the actor and create a younger Ford.

https://www.thomsonreuters.com/en-us/posts/technology/practice-innovations-deepfakes/

Feb 23

Fauda (Netflix, season 4, episode 4)

Omar's sister Maya (Lucy Ayoub) is a decorated police officer in Israel, married to a fellow officer who is Jewish. As the situation with Omar begins to unfold, Raphael, the deputy director of the Mossad, takes over from the Shin Bet and recruits Doron for a special mission. They send Maya deepfake videos of Omar asking for her help, and she eventually decides to cross the border into Lebanon to deliver two passports to Omar and his wife. Doron meets her undercover as a smuggler named Salim.

https://mixdeseries.com.br/fauda-4a-temporada-final-explicado-quem-morre-e-tudo-sobre/

jan23

Deepfake technology, which uses artificial intelligence to swap people's faces, is officially entering TV programming in England. It is the central feature of the ITV comedy show Deep Fake Neighbour Wars, described in a press release as "the world's first long-form narrative program using deepfake technology".

The project was produced by Tiger Aspect Productions along with StudioNeural, which specialises in deepfake production. The finale of the first episode, which will be screened next Thursday (26), features spoof versions of artists such as Tom Holland, Beyoncé, Idris Elba, Ariana Grande and Nicki Minaj.

https://www.mediarunsearch.co.uk/uk-hosts-worlds-first-official-deepfake-tv-show-technology/

The show’s script will feature impersonations of artists such as Tom Holland, Beyoncé, Ariana Grande and Nicki Minaj. Apart from them, the team was completed by Rihanna, Adele, Kim Kardashian, Jay Z, Olivia Colman, Stormzy and footballer Harry Kane.

According to Spencer Jones, the program's creator, the use of deepfakes is still quite limited and somewhat risky. For example, actors cannot cover their faces or make extreme facial expressions, because those movements are not captured well by the technology. Barney Francis, president of StudioNeural, also commented on the matter.

https://www.mediarunsearch.co.uk/uk-airs-first-tv-show-featuring-deepfakes/


Sunday, January 22, 2023

Victims (men)

 jan23

Robert Pattinson has had enough of seeing “bizarre” deepfakes of himself on social media.

Speaking with ES Magazine, the “Batman” actor opened up about feeling uneasy over the digitally altered videos of his face that have been popping up across the internet.

“It’s terrifying,” the actor told the U.K. outlet in a story published Thursday. “The amount of people who know me quite well and will still be like, ‘Why are you doing these weird dancing videos on TikTok?’ It’s really bizarre. You just realise that we’re two years away from it being indistinguishable from reality — and what on Earth am I going to do as a job then?”

https://www.huffpost.com/entry/robert-pattinson-deepfakes-social-media_n_63c9e9c1e4b04d4d18ddc4b7


Friday, January 13, 2023

Audio

apr24

Don't play it by ear: Audio deepfakes in a year of global elections

From robocalls to voice clones, generative AI is allowing malicious actors to spread misinformation with ease.

Artificial intelligence company OpenAI recently introduced Voice Engine, a natural-sounding speech generator that uses text and a 15-second audio sample to create an “emotive and realistic” imitation of the original speaker.

OpenAI has not yet released Voice Engine to the public, citing concerns over the potential abuse of its generative artificial intelligence (AI) – specifically to produce audio deepfakes – which could contribute to misinformation, especially during elections.

Audio deepfakes and their uses

Audio deepfakes are generated using deep learning techniques in which large datasets of audio samples are used for AI models to learn the characteristics of human speech to produce realistic audio. Audio deepfakes can be generated in two ways: text-to-speech (text is converted to audio) and speech-to-speech (an uploaded voice recording is synthesised as the targeted voice).
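As a purely illustrative sketch of those two modes (nothing here is a real model; the functions only label the data flow, and all names are hypothetical), both take some representation of the target voice, but differ in whether the input is text or an existing recording:

```python
from dataclasses import dataclass

@dataclass
class TargetVoice:
    """Stand-in for a speaker representation learned from voice samples."""
    name: str

def text_to_speech(text: str, voice: TargetVoice) -> str:
    # Text-to-speech mode: written text is converted directly into
    # audio in the target voice.
    return f"audio[{voice.name} says: {text}]"

def speech_to_speech(source_audio: str, voice: TargetVoice) -> str:
    # Speech-to-speech mode: an uploaded recording keeps its words and
    # timing, but the vocal identity is swapped for the target's.
    return f"audio[{voice.name} re-voicing: {source_audio}]"

voice = TargetVoice("politician")
a = text_to_speech("Vote for me", voice)
b = speech_to_speech("recording.wav", voice)
```

The practical difference is that speech-to-speech conversion preserves the performance (pacing, emphasis) of the impersonator, which is partly why it is favored for convincing scam calls.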

Anyone can generate an audio deepfake. They are easier and cheaper to make than video deepfakes and simpler to disseminate on social media and messaging platforms.

Audio deepfakes have been used in cyber-enabled financial scams where fraudsters impersonate bank customers to authorise transactions. The same technology is increasingly being used to propagate disinformation. Several audio deepfakes attempting to mimic the voices of politicians have circulated on social media. In 2023, artificially generated audio clips of UK Labour leader Keir Starmer allegedly featured him berating party staffers. While fact-checkers determined the audio was fake, it surpassed 1.5 million hits on X (formerly Twitter).

In India, voice cloning of children has been used to deceive parents into transferring money. In Singapore, deepfake videos containing voice clones of politicians such as the prime minister and deputy prime minister have been used in cyber-scams.

https://www.lowyinstitute.org/the-interpreter/don-t-play-it-ear-audio-deepfakes-year-global-elections


mar24

How the music industry is battling AI deepfakes one state at a time with the ELVIS Act

In an in-depth interview, Recording Academy advocacy and public policy chief officer Todd Dupler explains how the ELVIS Act could combat the misuse of a person’s voice, image and likeness using AI.
https://cointelegraph.com/news/how-music-industry-battling-ai-deepfakes


nov23
Recent advances in generative artificial intelligence have spurred developments in realistic speech synthesis. While this technology has the potential to improve lives through personalized voice assistants and accessibility-enhancing communication tools, it also has led to the emergence of deepfakes, in which synthesized speech can be misused to deceive humans and machines for nefarious purposes.
https://source.wustl.edu/2023/11/defending-your-voice-against-deepfakes/

sep23

Unraveling the Deepfake Deception

In a startling revelation, Dr. Marco Vinicio Boza, a leading specialist in Intensive Care, disclosed that deceitful individuals employed artificial intelligence to craft videos imitating his likeness and voice. Their objective? To spread misleading messages and engage in fraudulent activities endangering public health.

The Deception Deepens

Recent weeks have seen the circulation of these counterfeit videos, expertly edited using artificial intelligence, purporting Dr. Boza’s endorsement of a product touted to dissolve blood clots. But the doctor isn’t staying silent. He denounced the video, explaining its misuse by swindlers selling units of the bogus medicine for ¢50,000 in the country’s northern region.
https://www.costaricantimes.com/beware-the-deepfakes-renowned-doctors-voice-mimicked-in-costa-rica-to-peddle-fake-medicines/74804


sep23

The spread of content produced with machine-learning techniques has reached alarming proportions, and such content is extremely convincing. The moment calls for technologies capable of identifying it, such as voice biometrics.

Voice and face deepfakes are made possible by algorithms that synthesize or alter elements in existing images and videos, replacing faces and voices and even creating entirely fictional scenes.
https://startupi.com.br/deepfake-biometria-de-voz/


jun23

Meta has another new AI model on the docket, and this one seems perfectly engineered for the land of tomorrow, if that utopian future is filled with nothing but deepfakes and modified audio. Like AI image generators, Voicebox generates synthetic voices from a simple text prompt; it doesn't do so from scratch, but by drawing on sound from thousands of audiobooks.

On Friday, Meta announced its new Voicebox AI that can create voice clips using simple text prompts. In a video CEO Mark Zuckerberg shared on his Facebook and Instagram, he said the Voicebox AI model can take a text prompt and read it in a variety of human, though somewhat digital-sounding, voices. Voicebox can also modify audio to remove unwanted noises from voice clips, like a dog barking in the background. Unlike many other AI voice synthesis models, Meta's AI can create audio in languages other than English, including French, Spanish, German, Polish, and Portuguese, and the company said the AI can effectively translate any passage from one language to another while keeping the same voice style.
https://gizmodo.com/meta-help-people-craft-more-deepfakes-with-voicebox-a-1850548158


may23

Music Deepfakes: Are AI imitations a creative art or a new kind of appropriation?

Drake, Grimes, The Weeknd and Holly Herndon are all part of the AI vocalist boom – and they all seem to have different perspectives.

https://musictech.com/features/music-deepfakes-ai-drake-grimes-weeknd/


apr23

What Is Deepfake Music? And How Is It Created?

Deepfake music mimics the style of a particular artist, including their voice. How is it possible for it to sound so real?

https://www.makeuseof.com/what-is-deepfake-ai-music/


apr23
Voice deepfakes use AI algorithms to create audio clips that sound like a specific person, even if that person never spoke the words. This technology can be used to create fake audio recordings of public figures or even to manipulate audio evidence in legal proceedings. However, the question remains: are these AI-generated voice deepfakes any good?
https://www.techcityng.com/ai-generated-voice-deepfakes-are-comical-but-are-they-any-good/

apr23
  • Songs made with generative AI are infiltrating streaming services.
  • One Spotify user was recommended the same song under 49 different names and suspects AI is behind it.
  • While the songs sound exactly the same, each has a different title, artist, and art.
https://www.businessinsider.com/ai-mystery-spotify-song-49-different-titles-artists-art-music-2023-4

apr23

Even worse, chatbots like ChatGPT are starting to generate realistic scripts with adaptive real-time responses. By combining these technologies with voice generation, a deepfake goes from being a static recording to a live, lifelike avatar that can convincingly have a phone conversation.

CLONING A VOICE

Crafting a compelling high-quality deepfake, whether video or audio, is not the easiest thing to do. It requires a wealth of artistic and technical skills, powerful hardware and a fairly hefty sample of the target voice.

There are a growing number of services offering to produce moderate- to high-quality voice clones for a fee, and some voice deepfake tools need a sample of only a minute long, or even just a few seconds, to produce a voice clone that could be convincing enough to fool someone. However, to convince a loved one – for example, to use in an impersonation scam – it would likely take a significantly larger sample.
https://businessmirror.com.ph/2023/04/12/voice-deepfakes-are-calling-heres-what-they-are-and-how-to-avoid-getting-scammed/



Apr23

Lil Durk has made it clear that AI deepfakes attempting to use his voice will never replace him as a warm-blooded, able-bodied superstar artist.

Speaking to HipHopDX about future technologies and his new NFT-centered phygital sneaker collection, NXTG3NZ, the OTF honcho said that although AI is going to change how people make music, it won’t replace humans.

“I heard them AI deep fakes usin’ my voice, it’s wild what tech be doin’,” Lil Durk told DX. “I think AI gon’ change how we make music, but ain’t nothin’ gonna replace the real deal, them raw vibes and emotions we bring. Just gotta make sure we use it right, ya know? Keep our essence alive.”

https://hiphopdx.com/news/lil-durk-ai-deepfake-wont-replace-him


feb23

The audio streaming service said Wednesday in a press release that U.S. and Canadian users with premium subscriptions would first start getting access to the DJ that day. In the beginning, it will be "in beta" and in English, according to Spotify.

https://www.foxbusiness.com/technology/spotify-releasing-artificial-intelligence-dj-two-countries

feb23

Now music streaming giant Spotify is throwing its hat in the ring, with a new feature powered by its own personalization tech, as well as by voice and generative AI.

The company is launching a ‘DJ’ feature, which it says is like an “AI DJ in your pocket” and adds that it serves as “a personalized AI guide that knows you and your music taste so well that it can choose what to play for you”.

This feature is first rolling out in beta, and Spotify says it will deliver a curated playlist of music alongside commentary around the tracks and artists it thinks you will like.
https://www.musicbusinessworldwide.com/spotify-just-launched-a-personalized-dj-powered-by-generative-and-voice-ai/

Jan23

The emergence in the last week of a particularly effective voice synthesis machine learning model called VALL-E has prompted a new wave of concern over the possibility of deepfake voices made quick and easy — quickfakes, if you will. But VALL-E is more iterative than breakthrough, and the capabilities aren’t so new as you might think. Whether that means you should be more or less worried is up to you.

Voice replication has been a subject of intense research for years, and the results have been good enough to power plenty of startups, like WellSaid, Papercup and Respeecher. The latter is even being used to create authorized voice reproductions of actors like James Earl Jones. Yes: from now on, Darth Vader will be AI-generated.

VALL-E, posted on GitHub by its creators at Microsoft last week, is a “neural codec language model” that uses a different approach to rendering voices than many before it. Its larger training corpus and some new methods allow it to create “high-quality personalized speech” using just three seconds of audio from a target speaker.
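The "neural codec language model" framing can be caricatured in a few lines: a codec turns audio into a sequence of discrete tokens, and synthesis becomes next-token prediction conditioned on the phonemes of the new text plus the codec tokens of the short enrollment clip. A toy sketch, with a deterministic stub standing in for the neural network (all token values here are meaningless):

```python
def fake_next_token(context: list[int]) -> int:
    # Stand-in for the codec language model's next-token prediction.
    return (sum(context) * 31 + 7) % 1024  # 1024-entry codec vocabulary

def synthesize(phonemes: list[int], prompt_tokens: list[int], n: int) -> list[int]:
    """Autoregressively extend the prompt, conditioned on text phonemes
    and the enrollment clip's codec tokens."""
    context = phonemes + prompt_tokens  # text + ~3s enrollment audio
    out: list[int] = []
    for _ in range(n):
        out.append(fake_next_token(context + out))
    return out  # a codec decoder would turn these tokens back into a waveform

tokens = synthesize(phonemes=[3, 1, 4], prompt_tokens=[10, 20, 30], n=5)
```

Casting synthesis as sequence continuation is what lets such models clone a voice from seconds of audio: the enrollment tokens simply become part of the prompt, rather than requiring per-speaker retraining.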

That is to say, all you need is an extremely short clip (the example clips are in Microsoft's paper).

https://techcrunch.com/2023/01/12/vall-es-quickie-voice-deepfakes-should-worry-you-if-you-werent-worried-already/?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAMtFk2rYqaHs0Jt-QKqu9XKuO02KMGK86YJyhYeIRpIWscAlg4iWtv4hyjwevR6K99qmkuFQOUQJ79ti-dVhWHsFTdNPlVimRqTni1a8uGJKxFT4mXKhZgrlfL2IZVpqycXa-J_mWEF7VDpRhJBJgwy-D4x-MMaBFlvsLvFxJSQn