https://journals.sagepub.com/doi/abs/10.1177/14614448221093943
Apr 2022
Dove deepfakes moms to illustrate the impact of toxic influencers on teens
The brand’s newest short film explores the influence and impact of beauty advice on social media.
Feb 2022
Now, the MIT Center for Advanced Virtuality (MIT Virtuality for short) has created a course that addresses misinformation both in terms of specific contemporary technological phenomena and a broader media perspective.
“We are currently experiencing an information crisis,” says Joshua Glick, education producer for this MIT Virtuality project and an assistant professor of media studies at Hendrix College. “A combination of political, technological, and economic forces has propelled the spread of misinformation and disinformation throughout our media environment — and the crisis has only been amplified by the pandemic.”
The MIT Center for Advanced Virtuality, part of MIT Open Learning and directed by Computer Science and Artificial Intelligence Laboratory professor D. Fox Harrell, has created a free online course, Media Literacy in the Age of Deepfakes, with the goal of giving educators and independent learners the resources and critical skills to understand the threat of misinformation. In addition to teaching participants how to decipher fact-based assertions from lies and credible sources from hoaxes, the course aims to place deepfakes within a larger history of media manipulation and to show how activists, artists, technologists, and filmmakers are using AI-enabled media for a wide range of civic projects. https://news.mit.edu/2022/fostering-media-literacy-age-deepfakes-0217
Jan 2022
What are the societal implications if people can no longer reliably distinguish between fake and real content on the internet? Research has shown that deepfakes can actively change people’s emotions, attitudes, and behavior (Vaccari & Chadwick, 2020). As a new tool of misinformation, deepfakes can thereby undermine democracy (Aral, 2020). This raises the question: how can we increase people’s deepfake detection accuracy so they avoid falling for such fakes? A previous blog post on Psychology Today suggested that raising awareness of the importance of detecting deepfakes and spurring more critical thinking might help. The same study showing overconfidence in detection abilities also put this intervention to the test. The results suggest that raising people’s awareness of the negative consequences of deepfakes does not increase their detection accuracy compared to a control group whose awareness was not raised. This suggests that being fooled by deepfakes may be a matter not of motivation but of ability. https://www.psychologytoday.com/gb/blog/decisions-in-context/202112/the-psychology-deepfakes
The UAE National Programme for Artificial Intelligence has launched a deepfake guide to help the public understand both the harmful and the helpful uses of this emerging technology. Deepfakes use techniques such as artificial intelligence to create fake audio and video clips that look genuine and convincing, with the main aim of deceiving. The technology, now increasingly widespread, has featured in some incidents of cyberbullying. The guide was launched as part of the initiatives of the UAE Council for Digital Wellbeing, chaired by Lt-Gen Sheikh Saif bin Zayed Al Nahyan, Deputy Prime Minister and Minister of Interior. The new deepfake guide categorises fake content into two main types: shallow and deep. https://www.khaleejtimes.com/technology/uae-launches-deepfake-guide + https://ai.gov.ae/
Letter: Finland shows that education is best tool to fight ‘deepfakes’. From Nick Nigam: As Carly Minsky states in “‘Deepfake’ videos: to believe or not believe?” (Special Report, FT.com, January 26), it doesn’t matter that the majority of deepfake videos can be easily debunked. The “liar’s dividend” means that those setting out to spread disinformation stand to benefit even when their efforts are disproved, resulting in a gradual erosion of trust in traditional sources of authority.

To mitigate the potential of deepfakes to cause harm to society, simply developing detection tools is not enough. In fact, education may well be our most effective and readily available tool for countering the destructive effects of deepfakes and other disinformation. The greater the public’s awareness of the technology and its uses, the better people will be able to think critically about the media they consume and apply caution where needed. Finland offers an excellent illustration of this. In 2016, after seeing the damage done by fake news in Russia, the country made information literacy and strong critical thinking a core component of its national curriculum. Consequently, Finland topped the Open Society Institute’s Media Literacy Index last year, a ranking of the European countries deemed most resilient to disinformation.

In addition to education, governments have a key role to play in developing new regulatory frameworks that address the threat of content manipulated by artificial intelligence. While the UK still has some way to go, in the US the National Defense Authorization Act of 2020 was the country’s first law with provisions on “machine-manipulated media”; it requires the director of national intelligence to submit an unclassified report to Congress on the foreign weaponisation of deepfakes.

It’s important to remember that the malicious use of emerging technologies is nothing new. The history of the printing press, the camera, and digital tools is evidence of that. However, if we are to harness the positive power of artificial intelligence, governments and the tech sector will need to join forces to battle the immediate threat that AI-generated media poses to society. Nick Nigam, Principal, Samsung Next, Berlin, Germany
Finally, all the experts agree that the public needs greater media literacy. “There is a difference between proving that a real thing is real and actually having the general public believe that the real thing is real,” Ovadya says. He says people need to be aware that falsifying content and casting doubt on the veracity of content are both tactics that can be used to intentionally sow confusion.
https://www.technologyreview.com/2019/10/10/132667/the-biggest-threat-of-deepfakes-isnt-the-deepfakes-themselves/