nov23 (manipulation has always existed and no harm has come to the world, alarmists)
And yet the guest has not arrived. Sensity conceded in 2021 that deepfakes had had no “tangible impact” on the 2020 Presidential election. It found no instance of “bad actors” spreading disinformation with deepfakes anywhere. Two years later, it’s easy to find videos that demonstrate the terrifying possibilities of A.I. It’s just hard to point to a convincing deepfake that has misled people in any consequential way.
The computer scientist Walter J. Scheirer has worked in media forensics for years. He understands more than most how these new technologies could set off a society-wide epistemic meltdown, yet he sees no signs that they are doing so. Doctored videos online delight, taunt, jolt, menace, arouse, and amuse, but they rarely deceive. As Scheirer argues in his new book, “A History of Fake Things on the Internet” (Stanford), the situation just isn’t as bad as it looks.
There is something bold, perhaps reckless, in preaching serenity from the volcano’s edge. But, as Scheirer points out, the doctored-evidence problem isn’t new. Our oldest forms of recording—storytelling, writing, and painting—are laughably easy to hack. We’ve had to find ways to trust them nonetheless.
It’s possible to take comfort from the long history of photographic manipulation, in an “It was ever thus” way. Today’s alarm pullers, however, insist that things are about to get worse. With A.I., a twenty-first-century Hoxha would not stop at awkwardly scrubbing individuals from the records; he could order up a documented reality à la carte. We haven’t yet seen a truly effective deployment of a malicious deepfake deception, but that bomb could go off at any moment, perhaps detonated by Israel’s war with Hamas. When it does, we’ll be thrown through the looking glass and lose the ability to trust our own eyes—and maybe to trust anything at all.
The alarmists warn that we’re at a technological tipping point, where the artificial is no longer distinguishable from the authentic. They’re surely right in a narrow sense—some deepfakes really are that good. But are they right in a broader one? Are we actually deceived? Even if that Gal Gadot video had been seamless, few would have concluded that the star of the “Wonder Woman” franchise had paused her lucrative career to appear in low-budget incest porn. Online, such videos typically don’t hide what they are; you find them by searching specifically for deepfakes.
(...) And it is why doctored evidence rarely sways elections. We are, collectively, good at sussing out fakes, and politicians who deal in them often face repercussions. Likely the most successful photo manipulation in U.S. political history occurred in 1950, when, the weekend before an election, the Red-baiter Joseph McCarthy distributed hundreds of thousands of mailers featuring an image of his fellow-senator Millard Tydings talking with the Communist Earl Browder. (It was actually two photographs pasted together.) Tydings lost his reëlection bid, yet the photograph helped prompt an investigation that ultimately led, in 1954, to McCarthy becoming one of only nine U.S. senators ever to be formally censured.
(...) may not matter. Hundreds of hours of highly explicit footage have done little to change our opinions of the celebrities targeted by deepfakes. Yet a mere string of words, a libel that satisfies an urge to sexually humiliate politically ambitious women, has stuck in people’s heads through time: Catherine the Great had sex with a horse.
If by “deepfakes” we mean realistic videos produced using artificial intelligence that actually deceive people, then they barely exist. The fakes aren’t deep, and the deeps aren’t fake. In worrying about deepfakes’ potential to supercharge political lies and to unleash the infocalypse, moreover, we appear to be miscategorizing them. A.I.-generated videos are not, in general, operating in our media as counterfeited evidence. Their role better resembles that of cartoons, especially smutty ones.
https://www.newyorker.com/magazine/2023/11/20/a-history-of-fake-things-on-the-internet-walter-j-scheirer-book-review
may23
Musk's lawyers contested the plaintiffs' allegations with a new strategy, one that worries not only judges but the entire U.S. legal community: what is already being called the "deepfake defense."
This new type of defense consists of claiming that a genuine video (or audio recording) is fake, on the grounds that it was supposedly doctored with deepfake technology, which is made possible by artificial intelligence (AI).
"Musk, like any public figure, is the subject of many deepfakes, both videos and audio recordings, purporting to show him saying or doing things he never said or did," states the petition filed by Musk's lawyers.
They argued that, thanks to advances in artificial intelligence, it is easier than ever to create images and videos of things that do not exist or events that never happened. Digital forgery, they said, is being used to spread disinformation and propaganda, impersonate celebrities and politicians, manipulate elections, and run scams.
These arguments did not convince the judge. On the contrary, she wrote, they are deeply troubling.
"The position is that because Musk is famous and a target of deepfakes, his public statements are immune. In other words, Musk, and others in his position, can simply say whatever they like in the public domain, then avoid accountability by claiming they were the victim of a deepfake. This court is unwilling to set a precedent by condoning Tesla's approach."
This is not the first such case. In the trial of two of the January 6, 2021, Capitol invaders, including that of the defendant regarded as the "insurrectionist leader," Guy Reffitt, his lawyers claimed that the videos used in the investigation may have been created or manipulated by artificial intelligence.
Professional ethics
Hany Farid, a digital-forensics expert and professor at the University of California, agrees that a worrying new phenomenon is emerging: "As this type of technology becomes more prevalent, it will become easy to claim that anything is fake."
https://www.conjur.com.br/2023-mai-13/ia-cria-problema-justica-eua-defesa-deepfake