Friday, March 20, 2020

Fighting back with literacy

mai22
There are calls for three types of defensive response: regulation, technical controls, and improved digital or media literacy. Each is problematic by itself. This article asks what kind of literacy can address deepfake harms, proposing an artificial intelligence (AI) and data literacy framework to explore the potential for social learning with deepfakes and identify sites and methods for intervening in their cultures of production. 

https://journals.sagepub.com/doi/abs/10.1177/14614448221093943


ab22

Dove deepfakes moms to illustrate the impact of toxic influencers on teens

The brand’s newest short film explores the influence and impact of beauty advice on social media.

https://www.fastcompany.com/90746539/dove-deepfakes-moms-to-illustrate-the-impact-of-toxic-influencers-on-teens



fev22

https://politica.estadao.com.br/noticias/geral,deepfake-lula-moro-dilma-doria-fakenews-eleicoes-2022,70003991259


fev22

Now, the MIT Center for Advanced Virtuality (MIT Virtuality for short) has created a course that addresses misinformation both in terms of specific contemporary technological phenomena and a broader media perspective.

“We are currently experiencing an information crisis,” says Joshua Glick, education producer for this MIT Virtuality project and an assistant professor of media studies at Hendrix College. “A combination of political, technological, and economic forces has propelled the spread of misinformation and disinformation throughout our media environment — and the crisis has only been amplified by the pandemic.”

The MIT Center for Advanced Virtuality, part of MIT Open Learning and directed by Computer Science and Artificial Intelligence Laboratory professor D. Fox Harrell, has created a free online course, Media Literacy in the Age of Deepfakes, with the goal of giving educators and independent learners the resources and critical skills to understand the threat of misinformation. In addition to teaching participants how to distinguish fact-based assertions from lies and credible sources from hoaxes, the course aims to place deepfakes within a larger history of media manipulation and to show how activists, artists, technologists, and filmmakers are using AI-enabled media for a wide range of civic projects. https://news.mit.edu/2022/fostering-media-literacy-age-deepfakes-0217


jan22

Movie director James Cameron says he hopes critical thinking will help people identify deepfake videos. So-called deepfakes use machine learning to modify video footage, usually replacing one person’s face with another, with realistic results. https://timesnewsexpress.com/news/tech/director-james-cameron-on-the-dangers-of-deepfakes/
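For readers curious how that face replacement typically works under the hood, here is a minimal, illustrative PyTorch sketch of the classic arrangement behind consumer face-swap deepfakes: one shared encoder and one decoder per identity, so a frame of person A can be decoded with person B's decoder. This is not any specific tool's implementation, and all layer sizes are made up for the example.

```python
# Minimal sketch of the shared-encoder / two-decoder autoencoder idea behind
# classic face-swap deepfakes. Layer sizes are arbitrary; real systems also do
# face detection, alignment, and blending that are omitted here.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 64, 4, stride=2, padding=1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 32x32 -> 16x16
            nn.Flatten(),
            nn.Linear(128 * 16 * 16, 512),                           # shared face code
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(512, 128 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),   # 16 -> 32
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, z):
        return self.net(self.fc(z).view(-1, 128, 16, 16))

encoder = Encoder()
decoder_a, decoder_b = Decoder(), Decoder()   # one decoder per identity

# Training (sketched): each person's faces are reconstructed through the shared
# encoder and their own decoder, e.g. mse(decoder_a(encoder(faces_a)), faces_a).

# The swap: encode a frame of person A, decode with person B's decoder, so the
# output keeps A's pose and expression but B's appearance.
face_a = torch.rand(1, 3, 64, 64)             # stand-in for a cropped, aligned face
swapped_frame = decoder_b(encoder(face_a))
```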

dez21

What are the societal implications if people can no longer reliably distinguish between fake and real content on the internet? Research has shown that deepfakes can actively change people’s emotions, attitudes, and behavior (Vaccari & Chadwick, 2020). As a new tool of misinformation, deepfakes can thereby undermine democracy (Aral, 2020). This raises the question: how can we increase people’s deepfake detection accuracy to avoid falling for such fakes? A previous blog post on Psychology Today suggested that raising awareness of the importance of detecting deepfakes and spurring more critical thinking might help. The same study showing overconfidence in detection abilities also put this intervention to the test. The results suggest that raising people’s awareness about the negative consequences of deepfakes does not increase participants’ detection accuracy compared to a control group whose awareness was not raised. This suggests that being fooled by deepfakes may not be a matter of motivation so much as ability. https://www.psychologytoday.com/gb/blog/decisions-in-context/202112/the-psychology-deepfakes


jul21

The UAE National Programme for Artificial Intelligence has launched a deepfake guide that will help the public understand the harmful as well as helpful uses of this emerging technology. Deepfake is the use of various techniques, such as artificial intelligence, to create fake audio and video clips that look genuine and convincing, with the main aim of deceiving. This technology, now increasingly widespread, has already been used in some incidents of cyberbullying. The guide was launched as part of the initiatives of the UAE Council for Digital Wellbeing, chaired by Lt-Gen Sheikh Saif bin Zayed Al Nahyan, Deputy Prime Minister and Minister of Interior. The new deepfake guide has categorised fake content into two main categories: shallow and deep. https://www.khaleejtimes.com/technology/uae-launches-deepfake-guide + https://ai.gov.ae/


Fev21

Letter: Finland shows that education is best tool to fight ‘deepfakes’. From Nick Nigam.

As Carly Minsky states in “‘Deepfake’ videos: to believe or not believe?” (Special Report, FT.com, January 26), it doesn’t matter that the majority of deepfake videos can be easily debunked. The “liar’s dividend” means that those setting out to spread disinformation stand to benefit even when their efforts are disproved, resulting in a gradual erosion of trust in traditional sources of authority.

To mitigate the potential that deepfakes have to cause harm to society, simply developing detection tools is not enough. In fact, education may well be our most effective and readily available tool when it comes to countering the destructive effects of deepfakes and other disinformation. The greater the public’s awareness of the technology and its uses, the better they will be able to think critically about the media they consume and apply caution where needed.

Finland offers an excellent illustration of this. In 2016, after seeing the damage done by fake news in Russia, the country instituted information literacy and strong critical thinking as a core component of its national curriculum. Consequently, Finland topped the Open Society Institute’s Media Literacy Index last year — a list of European countries deemed the most resilient to disinformation.

In addition to education, governments have a key role to play in the development of new regulatory frameworks that address the threat of content manipulated by artificial intelligence. While the UK still has some way to go, in the US the National Defense Authorization Act of 2020 was the country’s first law with provisions related to “machine manipulated media”; it requires the director of national intelligence to submit an unclassified report to Congress on the foreign weaponisation of deepfakes.

It’s important to remember that the malicious use of emerging technologies is nothing new. The history of the printing press, the camera, and digital tools is evidence of that. However, if we are to harness the positive power of artificial intelligence, governments and the tech sector will need to join forces to battle the immediate threat artificial intelligence-generated media poses to society.

Nick Nigam, Principal, Samsung Next, Berlin, Germany
https://www.ft.com/content/3b51ce79-28b2-4931-81cd-9daffa4dcb16 

jan21
“The Department of Defense can’t save us. Technology won’t save us,” writes Cole. “Being more critically-thinking humans might save us, but that’s a system that’s a lot harder to debug than an AI algorithm.” https://www.trtworld.com/magazine/deepfakes-and-cheap-fakes-the-biggest-threat-is-not-what-you-think-43046


'''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''aulaTCM
jan21
The makers of the 3 minute 45 second video claimed the fake was actually a warning. “Deepfake technology is the frightening new frontier in the battle between misinformation and truth,” Channel 4 Director of Programmes Ian Katz said in a statement. https://thefederal.com/news/channel-4-digitally-created-a-fake-of-the-queen/

dez20

 Finally, all the experts agree that the public needs greater media literacy. “There is a difference between proving that a real thing is real and actually having the general public believe that the real thing is real,” Ovadya says. He says people need to be aware that falsifying content and casting doubt on the veracity of content are both tactics that can be used to intentionally sow confusion.

https://www.technologyreview.com/2019/10/10/132667/the-biggest-threat-of-deepfakes-isnt-the-deepfakes-themselves/


nov20
The nexus of the deepfake conversation today centers on a need for global media literacy. The speed and traction with which community-generated deepfake videos spread online has shifted perceptions of truth. User discernment of what constitutes a fake is a critical factor in determining collective trust towards digital media. However, within the conversation about literacy, there is a tendency to overlook regional subjectivities. These regional contexts depend upon the socio-political implications of the producers and users, which intrinsically rests upon existing power dynamics.
https://immerse.news/prepare-dont-panic-for-deepfakes-c77f9f683f30


out20
It’s very important to educate people on the powerful capabilities of AI algorithms. Keeping people aware of evolving technologies like deepfakes can prevent, at least to some extent, bad uses of applications like FakeApp from having widespread impact. https://www.analyticsinsight.net/best-ways-prevent-deepfakes/



jul20
"aunque se han creado softwares  de detección de este tipo de fakes, aún no son de acceso libre y gratuito. La única manera de reconocer el deepfakees a través de una educación moral digital, que pueda detectar una serie de parámetros visuales hiperrealistas: frecuencia del parpadeo, efecto intermitente de las caras o transiciones entre  cabeza y cuello." Alfabetización moral digital para la detección de deepfakesy fakes audiovisuales

jul20
As history records, the first lunar landing was a total success and the crew returned to Earth safely, despite a new recording showing Nixon reading the contingency words prepared for him by speechwriter William Safire on July 18, 1969. The video, released by MIT's Center for Advanced Virtuality on Monday — the 51st anniversary of the Apollo 11 moon landing — is "fake news," purposely. "Media misinformation is a longstanding phenomenon, but, exacerbated by deepfake technologies and the ease of disseminating content online, it's become a crucial issue of our time," said D. Fox Harrell, professor of digital media and of artificial intelligence at MIT and director of the Center for Advanced Virtuality, part of MIT Open Learning, in a statement. http://www.collectspace.com/news/news-072020a-moon-disaster-speech-mit-deepfake.html



21/6
Reuters, the world’s largest multimedia news provider, today announced the expansion of its award-winning e-learning course on helping newsrooms around the world spot deepfakes and manipulated media in 12 additional languages. https://www.reuters.com/article/rpb-fbdeepfakecourselanguages/reuters-expands-deepfake-course-to-16-languages-in-partnership-with-facebook-journalism-project-idUSKBN23M1QY

In Argentina, fact-checkers supported a ‘deepfake’

For fact-checkers in Argentina, the enemy is electoral misinformation. On Oct. 27, the country will elect a new president. And, tired of fighting all types of falsehoods, fact-checkers and advertisers teamed up with some TV channels to start airing a striking ‘deepfake’ video. The collaboration, which airs this week, is a first. On Monday, a manipulated one-minute-long piece signed by Fit BBDO played on many free-to-air TV channels in Argentina, showing the candidates who are running for the Casa Rosada in unexpected situations. LINK
14/11/2019 Each of us: users have to do their part to prevent fake information from being spread. People should have "emotional skepticism" about what they see, she said on "CBS This Morning" Tuesday, and be careful not to repost content just because it incites an emotional response. LINK
Lawmakers warn about threat of political deepfakes by creating one. LINK

25/11/2019 German Police to Use Deepfake Child Porn to Fight Sex Crimes Against Kids https://sputniknews.com/europe/201911221077379673-german-police-deepfake-child-porn-sex-crimes/
SOCIAL ISSUE Deepfakes should also be understood as reflecting a social problem rather than just a technical one, according to experts. “Deepfakes are most likely to work in places with polarisation and hyper-partisan communities because it will give a community that really wants to espouse a certain point of view the proof they are looking for,” says Ben Nimmo, head of investigations at Graphika, a social media analytics company. LINK

3/2/20 ONLINE MANIPULATION: INFORMED DIGITAL CITIZENS ARE THE BEST DEFENSE AGAINST DEEPFAKES Countries must educate and equip their citizens. Educators also face real challenges in helping youth develop eagle eyes for deepfakes. If young people lack confidence in finding and evaluating reliable public information, their motivation for participating in or relying on our democratic structures will be increasingly at risk. LINK
