sep24
Policymakers and societies are grappling with how to respond to deepfakes, i.e., synthetic audio-visual media, which are proliferating in all areas of digital life, from politics to pornography. However, debates and research on deepfakes’ impact and governance largely neglect the technology’s sources, namely the developers of the underlying artificial intelligence (AI) and those who provide code or deepfake creation services to others, making the technology widely accessible. These actors include open-source developers, professionals working in large technology companies and specialized start-ups, and providers of deepfake apps. They can profoundly influence which underlying AI technologies are developed, whether and how they are made public, and what kinds of deepfakes can be created. This paper therefore explores which values guide professional deepfake development, how economic and academic pressures and incentives shape developers’ (perception of) agency and ethical views, and how these views do and could affect deepfake design, creation, and dissemination. In doing so, the paper focuses on values derived from debates on AI ethics and on deepfakes’ impact. It is based on ten qualitative in-depth expert interviews with academic and commercial deepfake developers and with ethics representatives of synthetic media companies. The paper contributes to a more nuanced understanding of AI ethics in relation to audio-visual generative AI. It also empirically informs and enriches the deepfake governance debate by incorporating developers’ voices, highlighting governance measures that directly address deepfake developers and providers, and emphasizing the potential of ethics to curb the dangers of deepfakes.
https://link.springer.com/article/10.1007/s43681-024-00542-2
dec23
Not all digital alteration is harmful, though. Part of my work involves identifying how emerging technologies can foster positive change. For instance, with appropriate disclosure, synthetic media could be used to enhance voter education and engagement. Generative AI could help create informative content about candidates and their platforms, or about wider election processes, in different languages and formats, improving inclusivity and reducing barriers for underdog or outsider candidates. For voters with disabilities, synthetic media could provide accessible formats of election materials, such as sign-language avatars or audio descriptions of written content. Satirical deepfakes could engage people who might otherwise be disinterested in politics, bringing attention to issues that might not be covered in mainstream media. We need to celebrate and protect these uses.
https://www.cfr.org/blog/protect-democracy-deepfake-era-we-need-bring-voices-those-defending-it-frontlines
jul23
Los Angeles-based video editor and political satirist Justin T. Brown has found himself at the center of a contentious debate thanks to his AI-generated images that portray prominent politicians such as Donald Trump, Barack Obama, and Joe Biden engaged in fictional infidelities.
His provocative series, christened “AI will revolutionize the blackmail industry,” quickly came under fire, leading to Brown’s banishment from the Midjourney AI art platform, which he used to generate the pictures.
Brown said the images were envisioned as a stark warning about the potential misuse of artificial intelligence.
https://finance.yahoo.com/news/political-satirist-slammed-creating-deepfakes-120103536.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAHFVDjMw8auK-p7Cz-o9pfgphkACHGyxpSL2oH3LZFHknvARRstLWDkJ2cSZ4rXvelzrYQuSakE-3z7iQQjzkMmienY7LfR8AvCQkt5p_XMUd1MayU1yDy-1ukpkNuZZaeoKyMvoJkjhtG2AckFPousHDLjvLcBYdBoFvtmd82wZ
jan23