Friday, January 27, 2023

In defense of deepfakes

Sep 2024

Policymakers and societies are grappling with the question of how to respond to deepfakes, i.e., synthetic audio-visual media that is proliferating in all areas of digital life, from politics to pornography. However, debates and research on deepfakes' impact and governance largely neglect the technology's sources, namely the developers of the underlying artificial intelligence (AI) and those who provide code or deepfake creation services to others, making the technology widely accessible. These actors include open-source developers, professionals working in large technology companies and specialized start-ups, and providers of deepfake apps. They can profoundly impact which underlying AI technologies are developed, whether and how they are made public, and what kind of deepfakes can be created. Therefore, this paper explores which values guide professional deepfake development, how economic and academic pressures and incentives influence developers' (perception of) agency and ethical views, and how these views do and could impact deepfake design, creation, and dissemination. The paper focuses on values derived from debates on AI ethics and on deepfakes' impact. It is based on ten qualitative in-depth expert interviews with academic and commercial deepfake developers and ethics representatives of synthetic media companies. The paper contributes to a more nuanced understanding of AI ethics in relation to audio-visual generative AI. In addition, it empirically informs and enriches the deepfake governance debate by incorporating developers' voices, highlighting governance measures that directly address deepfake developers and providers, and emphasizing the potential of ethics to curb the dangers of deepfakes.

https://link.springer.com/article/10.1007/s43681-024-00542-2


Dec 2023

Not all digital alteration is harmful, though. Part of my work involves identifying how emerging technologies can foster positive change. For instance, with appropriate disclosure, synthetic media could be used to enhance voter education and engagement. Generative AI could help create informative content about candidates and their platforms, or about wider election processes, in different languages and formats, improving inclusivity and reducing barriers for underdog or outsider candidates. For voters with disabilities, synthetic media could provide accessible formats of election materials, such as sign language avatars or audio descriptions of written content. Satirical deepfakes could engage people who might otherwise be disinterested in politics, bringing attention to issues that might not be covered in mainstream media. We need to celebrate and protect these uses.

https://www.cfr.org/blog/protect-democracy-deepfake-era-we-need-bring-voices-those-defending-it-frontlines


Jul 2023

Los Angeles-based video editor and political satirist Justin T. Brown has found himself at the center of a contentious debate thanks to his AI-generated images that portray prominent politicians such as Donald Trump, Barack Obama, and Joe Biden engaged in fictional infidelities.

His provocative series, christened “AI will revolutionize the blackmail industry,” quickly came under fire, leading to Brown’s banishment from the Midjourney AI art platform, which he used to generate the pictures.

Brown said the images were envisioned as a stark warning about the potential misuse of artificial intelligence.

https://finance.yahoo.com/news/political-satirist-slammed-creating-deepfakes-120103536.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAHFVDjMw8auK-p7Cz-o9pfgphkACHGyxpSL2oH3LZFHknvARRstLWDkJ2cSZ4rXvelzrYQuSakE-3z7iQQjzkMmienY7LfR8AvCQkt5p_XMUd1MayU1yDy-1ukpkNuZZaeoKyMvoJkjhtG2AckFPousHDLjvLcBYdBoFvtmd82wZ

Jan 2023

If I’m right, then the perhaps unsurprising moral of this story is that, just like forged paintings, or cosmetic surgery, or Andy Warhol’s wig, deepfakes only really “work” where their status as fake is at least somewhat hidden — whether because it was mentioned only once to viewers and then half-forgotten about, or because it was never mentioned at all in the first place. What’s perhaps more surprising is that this seems true even where the intent is mainly to get viewers to imagine something. If the viewer is fully conscious that an image is faked, she will be less likely to believe it; but she will also be unlikely even just to suspend her disbelief in the way that imaginative immersion in a dramatic re-enactment requires. When it comes to deepfakes in documentaries, then, unless you can find a way to use them cleverly, it seems to me you should possibly save your money altogether. For some creative purposes, it’s pointless to keep reminding people they are in Fake Barn Country.

https://unherd.com/2023/01/in-defence-of-deepfakes/
