Friday, May 26, 2023

(AI) Deepfakes are biggest AI concern, says Microsoft president (OTHER CONCERNS)

Feb24

AI deepfakes are cheap, easy, and coming for the 2024 election 

Generative AI makes fake audio, images, and video easier to create than ever before. Are policymakers and platforms ready?

https://www.theverge.com/2024/2/29/24085663/ai-deepfakes-misinformation-policy-free-speech-first-amendment-decoder-podcast


Feb24

AI makes deepfake pornography more accessible, as Canadian laws play catch-up

B.C. recently became the latest province to pass laws allowing people to take down explicit content of themselves online

https://www.cbc.ca/news/canada/british-columbia/deepfake-pornography-to-the-masses-1.7104326


Dec23


Over 24 million people visit websites that let them use AI to undress women in photos: Study

Graphika, a social network analysis company, revealed that a whopping 24 million people visited these undressing websites in September alone, highlighting a troubling surge in non-consensual pornography driven by advancements in artificial intelligence. Here are the details.

https://www.indiatoday.in/technology/news/story/over-24-million-people-visit-websites-that-let-them-use-ai-to-undress-women-in-photos-study-2473662-2023-12-08


Aug23

During the recent AI boom, the creation of nonconsensual pornographic deepfakes has surged, with the number of videos increasing ninefold since 2019, according to research from independent analyst Genevieve Oh. Nearly 150,000 videos, which have received 3.8 billion views in total, appeared across 30 sites in May 2023,  according to Oh’s analysis. Some of the sites offer libraries of deepfake programming, featuring the faces of celebrities like Emma Watson or Taylor Swift grafted onto the bodies of porn performers.  Others offer paying clients the opportunity to “nudify” women they know, such as classmates or colleagues. 

https://webcache.googleusercontent.com/search?q=cache:Sx4YDmgSB1IJ:https://www.bnnbloomberg.ca/google-and-microsoft-are-supercharging-ai-deepfake-porn-1.1962912&sca_esv=559711199&hl=pt-PT&gl=pt&strip=1&vwsrc=0


Jul23

Microsoft founder Bill Gates thinks powerful new artificial intelligence could program deepfakes and misinformation so severe that it disrupts political processes worldwide.

"Deepfakes and misinformation generated by AI could undermine elections and democracy," Gates said in a new post on his blog Tuesday. "On a bigger scale, AI-generated deepfakes could be used to try to tilt an election. Of course, it doesn’t take sophisticated technology to sow doubt about the legitimate winner of an election, but AI will make it easier."

The post marks an intensification of Gates's tone around AI compared to his more hopeful view of it in a March blog entry.

https://finance.yahoo.com/news/bill-gates-ai-could-undermine-elections-and-democracy-170438696.html


Jun23

The United Nations has called artificial intelligence-generated media a “serious and urgent” threat to information integrity, particularly on social media.

In a June 12 report, the U.N. claimed the risk of disinformation online has “intensified” due to “rapid advancements in technology, such as generative artificial intelligence” and singled out deepfakes in particular.

The U.N. said false information and hate speech generated by AI is “convincingly presented to users as fact.” Last month, the S&P 500 briefly dipped due to an AI-generated image and faked news report of an explosion near the Pentagon.

https://cointelegraph.com/news/un-serious-concerns-about-ai-deepfakes


May23

In Washington speech, Brad Smith calls for steps to ensure people know when a photo or video is generated by AI

Brad Smith, the president of Microsoft, has said that his biggest concern around artificial intelligence is deepfakes: realistic-looking but false content.

In a speech in Washington aimed at addressing the issue of how best to regulate AI, which went from wonky to widespread with the arrival of OpenAI’s ChatGPT, Smith called for steps to ensure that people know when a photo or video is real and when it is generated by AI, potentially for nefarious purposes.

https://www.theguardian.com/technology/2023/may/25/deepfakes-ai-concern-microsoft-brad-smith

Sunday, May 14, 2023

Deepfake defense

Nov23 (manipulation has always existed and nothing bad came of it; alarmists)

And yet the guest has not arrived. Sensity conceded in 2021 that deepfakes had had no “tangible impact” on the 2020 Presidential election. It found no instance of “bad actors” spreading disinformation with deepfakes anywhere. Two years later, it’s easy to find videos that demonstrate the terrifying possibilities of A.I. It’s just hard to point to a convincing deepfake that has misled people in any consequential way.

The computer scientist Walter J. Scheirer has worked in media forensics for years. He understands more than most how these new technologies could set off a society-wide epistemic meltdown, yet he sees no signs that they are doing so. Doctored videos online delight, taunt, jolt, menace, arouse, and amuse, but they rarely deceive. As Scheirer argues in his new book, “A History of Fake Things on the Internet” (Stanford), the situation just isn’t as bad as it looks.

There is something bold, perhaps reckless, in preaching serenity from the volcano’s edge. But, as Scheirer points out, the doctored-evidence problem isn’t new. Our oldest forms of recording—storytelling, writing, and painting—are laughably easy to hack. We’ve had to find ways to trust them nonetheless.

It’s possible to take comfort from the long history of photographic manipulation, in an “It was ever thus” way. Today’s alarm pullers, however, insist that things are about to get worse. With A.I., a twenty-first-century Hoxha would not stop at awkwardly scrubbing individuals from the records; he could order up a documented reality à la carte. We haven’t yet seen a truly effective deployment of a malicious deepfake deception, but that bomb could go off at any moment, perhaps detonated by Israel’s war with Hamas. When it does, we’ll be thrown through the looking glass and lose the ability to trust our own eyes—and maybe to trust anything at all.

The alarmists warn that we’re at a technological tipping point, where the artificial is no longer distinguishable from the authentic. They’re surely right in a narrow sense—some deepfakes really are that good. But are they right in a broader one? Are we actually deceived? Even if that Gal Gadot video had been seamless, few would have concluded that the star of the “Wonder Woman” franchise had paused her lucrative career to appear in low-budget incest porn. Online, such videos typically don’t hide what they are; you find them by searching specifically for deepfakes.

And it is why doctored evidence rarely sways elections. We are, collectively, good at sussing out fakes, and politicians who deal in them often face repercussions. Likely the most successful photo manipulation in U.S. political history occurred in 1950, when, the weekend before an election, the Red-baiter Joseph McCarthy distributed hundreds of thousands of mailers featuring an image of his fellow-senator Millard Tydings talking with the Communist Earl Browder. (It was actually two photographs pasted together.) Tydings lost his reëlection bid, yet the photograph helped prompt an investigation that ultimately led, in 1954, to McCarthy becoming one of only nine U.S. senators ever to be formally censured.

(...) may not matter. Hundreds of hours of highly explicit footage have done little to change our opinions of the celebrities targeted by deepfakes. Yet a mere string of words, a libel that satisfies an urge to sexually humiliate politically ambitious women, has stuck in people’s heads through time: Catherine the Great had sex with a horse.

If by “deepfakes” we mean realistic videos produced using artificial intelligence that actually deceive people, then they barely exist. The fakes aren’t deep, and the deeps aren’t fake. In worrying about deepfakes’ potential to supercharge political lies and to unleash the infocalypse, moreover, we appear to be miscategorizing them. A.I.-generated videos are not, in general, operating in our media as counterfeited evidence. Their role better resembles that of cartoons, especially smutty ones.

https://www.newyorker.com/magazine/2023/11/20/a-history-of-fake-things-on-the-internet-walter-j-scheirer-book-review


May23


Musk's lawyers contested the plaintiffs' allegations with a new strategy that is worrying not only judges but the entire U.S. legal community: the so-called "deepfake defense."

This is a new type of defense that consists of claiming that a real video (or audio recording) is fake, on the grounds that it was doctored with deepfake technology, made possible by artificial intelligence (AI).

"Musk, like any public figure, is subject to many deepfakes, of both videos and audio recordings, intended to show him saying or doing things he never said or did," reads the petition filed by Musk's lawyers.

They argued that, thanks to advances in artificial intelligence, it is easier than ever to create images and videos of things that do not exist or of events that never happened. Digital forgery, they said, is being used to spread disinformation and propaganda, impersonate celebrities and politicians, manipulate elections, and run scams.

These arguments did not convince the judge. On the contrary, she wrote, they are deeply troubling.

"The position is that, because Musk is famous and a target of deepfakes, his public statements are immune. In other words, Musk and others in his position can simply say whatever they like in the public domain, then avoid liability by claiming they were victims of a deepfake. This court is unwilling to set a precedent by condoning that approach from Tesla."

This is not the first such case. In the trial of two people who stormed the Capitol on January 6, 2021, including the defendant deemed the "insurrectionist leader," Guy Reffitt, defense lawyers argued that the videos used in the investigation may have been created or manipulated by artificial intelligence.

Professional ethics
Hany Farid, a digital forensics expert and professor at the University of California, agrees that a worrying new phenomenon is emerging: "As this type of technology becomes more prevalent, it will become easy to claim that anything is fake."


https://www.conjur.com.br/2023-mai-13/ia-cria-problema-justica-eua-defesa-deepfake