A once-feared army general, who ruled Indonesia with an iron fist for more than three decades, has a message for voters ahead of upcoming elections – from beyond the grave.
“I am Suharto, the second president of Indonesia,” the former general says in a three-minute video that has racked up more than 4.7 million views on X and spread to TikTok, Facebook and YouTube.
While mildly convincing at first, it’s clear that the stern-looking man in the video isn’t the former Indonesian president. The real Suharto, dubbed the “Smiling General” because he was always seen smiling despite his ruthless leadership style, died in 2008 at age 86.
The video was an AI-generated deepfake, created using tools that cloned Suharto’s face and voice. “The video was made to remind us how important our votes are in the upcoming election,” said Erwin Aksa, deputy chairman of Golkar – one of Indonesia’s largest and oldest political parties. He first shared the video on X ahead of the February 14 elections.
https://edition.cnn.com/2024/02/12/asia/suharto-deepfake-ai-scam-indonesia-election-hnk-intl/index.html
Ethical Problems of the Use of Deepfakes in the Arts and Culture
- Chapter, part of the book series The International Library of Ethics, Law and Technology (ELTE, volume 41)
The Routledge Handbook of Philosophy and Media Ethics
Brazilian porn actress Elisa Sanches, 42, has made a will and registered with a notary her wish to prohibit the use of her image in deepfake videos after her death.
Jul 2023 DECEASED
Elis Regina appears singing alongside her daughter Maria Rita in a campaign made with artificial intelligence
Death is no longer the end of an actor’s career, Tom Hanks theorized during a recent conversation on “The Adam Buxton Podcast” (via BBC). Why? Artificial intelligence and deepfakes, the actor said. Both technologies will be on full display at the Cannes Film Festival thanks to the world premiere of “Indiana Jones and the Dial of Destiny,” which used AI to help de-age Harrison Ford so that he resembles his appearance from 1981’s “Raiders of the Lost Ark.”
“The first time we did a movie that had a huge amount of our own data locked in a computer — literally what we looked like — was a movie called ‘The Polar Express,'” Hanks said, referring to Robert Zemeckis’ 2004 Christmas movie. “We saw this coming, we saw that there was going to be this ability to take zeros and ones from inside a computer and turn it into a face and a character. That has only grown a billion-fold since then and we see it everywhere.”
https://variety.com/2023/film/news/tom-hanks-act-death-ai-deepfakes-1235614391/
Apr 2023
To investigate this, the team turned their attention to two common and potentially lethal social problems – drunk driving and domestic violence, real-life issues often targeted by social policy changes and activism efforts. Lu and Chu wanted to see how public service announcements that showed deepfakes of deceased victims of drunk driving and domestic violence would impact viewers as they narrated the stories of their deaths.
“The prosocial deepfakes investigated in the current study take the form of deepfake resurrection”, the authors explained in the paper, “which features a dead victim who is brought back to life with deepfakes that enable this victim to advocate for an issue related to the cause of their death.”
https://www.iflscience.com/dying-to-tell-you-deepfake-resurrections-to-promote-public-good-explored-by-researchers-68592
Apr 2023
AI and Human Rights: A Critical Ethico-Legal Overview
Vikrant Sopan Yadav*
Abstract: The capabilities of artificial intelligence (AI) in ensuring human rights are tremendous. However, it may also have some denting effect on human rights. The use of AI-based surveillance, face detection, etc. has proved racially discriminatory, resulting in grave human rights violations. AI experts have also admitted to the possibility of developers' bias resulting in biased AI inventions. This research article is an attempt to analyze the possible adverse impact of AI technology on the protection of human rights. The author has done an analytical overview of practical instances of AI-related human rights violations in the recent past. An empirical analysis comprising an observation tool was employed to observe and analyze the expert opinion expressed during a conference. Based on doctrinal and empirical analysis, the author has made some recommendations, such as including technology-related human rights in national and international human rights statutes, to strike a balance between human rights and AI innovation, with the ultimate goal of protecting human rights.
(on file)
Apr 2023
Ariana Grande singing Anitta? Artificial intelligence creates musical deepfakes that go viral on social media
Mar 2023
DECEASED
Let the dead talk: How deepfake resurrection narratives influence audience response in prosocial contexts
Dec 2022
A small academic and corporate team of researchers say they have created a way to preserve the biometric privacy of people whose faces are posted on social media.
And while that innovation is worthy of examination, so are a couple of phrases that the team has developed for their facial anonymization: “a responsible use for deepfakes by design” and “My Face, My Choice.”
For most people, deepfakes exist because humans like to be fooled. For the rest, they exist to dominate a future when objective proof or truth no longer exist.
Two scientists from State University of New York, Binghamton, and another from Intel Labs say in a non-peer-reviewed paper that they recognize the identity and privacy dangers posed by face image scrapers like Clearview AI that harvest billions of faces for their own purposes and without permission.
The answer, they say, is qualitatively dissimilar deepfakes. That is, using deepfake algorithms to alter faces just enough that they can no longer be matched by facial recognition software. The result is a facial image in a group photo that is true enough to the original (and free of AI weirdness) that anyone familiar with the person would quickly accept it as representative.
The researchers also have proposed metrics for doing this under which a deepfake (though, again, still recognizable by many humans) is randomly generated with a guaranteed dissimilarity.
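The idea of a dissimilarity guarantee can be illustrated with a short sketch. The code below is not the researchers' implementation; it is a minimal toy in which candidate replacement faces (here just random arrays standing in for a deepfake generator's output) are accepted only once their embedding distance from the original face exceeds a chosen threshold, so recognition software would no longer match them. The embedding function, the threshold value, and all names are illustrative assumptions.

```python
import numpy as np

EMBED_DIM = 512          # typical dimensionality of a face-recognition embedding
DISSIMILARITY_MIN = 0.6  # hypothetical cosine-distance threshold

def embed_face(image: np.ndarray) -> np.ndarray:
    """Stand-in for a real face-recognition embedder (e.g. an ArcFace-style model)."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    v = rng.normal(size=EMBED_DIM)
    return v / np.linalg.norm(v)

def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Distance between two unit-norm embeddings (0 = identical direction)."""
    return 1.0 - float(np.dot(a, b))

def anonymize(original: np.ndarray, generate_candidate) -> np.ndarray:
    """Keep drawing candidate faces until the embedding distance from the
    original exceeds DISSIMILARITY_MIN -- the 'guaranteed dissimilarity' idea."""
    target = embed_face(original)
    while True:
        candidate = generate_candidate()  # in the proposal, a deepfake generator's output
        if cosine_distance(target, embed_face(candidate)) >= DISSIMILARITY_MIN:
            return candidate

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    original_photo = rng.random((112, 112, 3))  # toy stand-in for a real photo
    anonymized = anonymize(original_photo, lambda: rng.random((112, 112, 3)))
    print("accepted a candidate at distance >=", DISSIMILARITY_MIN)
```

In the actual proposal, the candidate would come from a deepfake generator that preserves pose, expression, and overall appearance, so that humans still read the photo as natural even though the software no longer recognizes the face.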
https://www.biometricupdate.com/202212/a-proposal-for-responsible-deepfakes
Dec 2022
At a glance, the above three images look just like me. Look closer and you might notice that my skin is too smooth, my clothes distorted in places — details that might be dismissed as an overdone Photoshop edit.
Thing is, I never posed for these pictures. Nor have I ever sported shoulder-length hair or a cowboy hat, for that matter. These images are entirely the product of artificial intelligence, utilizing a cutting-edge technology developed by Google scientists called DreamBooth.
Since its release in late August, DreamBooth has already advanced the field of AI art by leaps and bounds. In a nutshell, it gives AI the ability to study what an individual or object looks like, then synthesize a “photorealistic” image of the subject in a completely new context.
ETHICAL AND SOCIAL RESPONSIBILITY FOR USING DEEPFAKES
…something they haven't seen, tasted, or tried on. The recent invention and improvement of deepfake technology can change communication between companies and consumers. But before considering all the possibilities of using the latest marketing techniques, we should define the meaning of the word “deepfake”. According to Merriam-Webster Dictionary, a…
This hints at a far wider debate in the world of deepfakes – that of ethics. If it’s now possible to create a fake but lifelike version of anyone else using technology, aren’t there huge implications for fraud, identity and intellectual property theft and all kinds of exploitation, from financial to sexual? Graham says that the technology comes with responsibility and that Metaphysic has a “big focus” on ethics and – in particular – consent. The rule, he says, must be that individuals own and control their own hyperreal likeness, meaning they also own and control the data involved in training and creating that likeness.
‘Deepfakes’ of Celebrities Have Begun Appearing in Ads, With or Without Their Permission
Digital simulations of Elon Musk, Tom Cruise, Leo DiCaprio and others have shown up in ads, as the image-melding technology grows more popular and presents the marketing industry with new legal and ethical questions
“Stunning and creepy…I gotta learn this”: Exploring Concepts of Information Ethics in Deepfake Creation Tutorials on YouTube
DECEASED
An artificial intelligence company used deepfake techniques to let "Steve Jobs" (or a digital version of him) give an interview to a podcast. The aim of the initiative was to show that it is possible to create audio as realistic as the photos and videos made with modern tools.
The podcast was created by Play.ht, a company specializing in AI text-to-speech tools. The fictional material features the late Apple founder, Steve Jobs, being interviewed by the controversial influencer Joe Rogan — who recently had 110 episodes of The Joe Rogan Experience podcast removed after being accused of spreading denialist misinformation about the covid-19 pandemic.
Jobs's voice still sounds a bit off at times, but the AI manages to recreate the nuances of his speech, his timbre and even the speaking mannerisms of the former Apple CEO. Joe Rogan's speech is more realistic, but there is far more material available for him today than for someone who died 11 years ago.
The script was also created by AI
https://br.noticias.yahoo.com/deepfake-permite-que-steve-jobs-211200693.html
Oct 2022
CINEMA
Jean-Luc Godard once claimed, regarding cinema, “When I die, it will be the end.” Godard passed away last month; film perseveres. Yet artificial intelligence has raised a kindred specter: that humans may go obsolete long before their artistic mediums do. Novels scribed by GPT-3; art conjured by DALL·E—machines could be making art long after people are gone. Actors are not exempt. As deepfakes evolve, fears are mounting that future films, TV shows, and commercials may not need them at all.
A new open source AI image generator capable of producing realistic pictures from any text prompt has seen stunningly swift uptake in its first week. Stability AI’s Stable Diffusion, high fidelity but capable of being run on off-the-shelf consumer hardware, is now in use by art generator services like Artbreeder, Pixelz.ai and more. But the model’s unfiltered nature means not all the use has been completely above board.
For the most part, the use cases have been above board. For example, NovelAI has been experimenting with Stable Diffusion to produce art that can accompany the AI-generated stories created by users on its platform. Midjourney has launched a beta that taps Stable Diffusion for greater photorealism.
Fraudulent applicants for tech jobs are nothing new. In a November 2020 LinkedIn post, a recruiter wrote that some candidates hire outside help to support them during live interviews, and that the trend seems to have gotten worse during the pandemic.
For example, several headhunters discovered that in North Korea, scammers were posing as American interviewees for crypto and Web3 startups. Added to this is the use of AI-powered deepfake technology.
https://www.expoknews.com/es-etico-el-uso-de-deepfakes-para-solicitar-empleo/
Amazon executives say they want to give their Alexa voice assistant the ability to mimic any voice it is trained on for less than a minute.
The company is hitting hard on emotions, according to reporting by Reuters. A senior vice president is quoted saying Amazon was inspired to write the code because “so many of us have lost someone we love” to the pandemic.
Too much? The company that now wants to “make the memories last” showed a promotional video at Amazon’s re:Mars conference of a child who says, “Alexa, can grandma finish reading me the Wizard of Oz?”
Sadly, no video evidence of that heart-warmer could be found online, maybe because more than one news site covering the conference found the development “creepy.”
https://www.biometricupdate.com/202206/grammy-said-i-could-stay-up-late-amazon-hints-at-deepfake-voices-as-family-bonding + https://tech.hindustantimes.com/tech/news/creepy-morbid-monstrosity-amazon-alexa-creeps-out-internet-with-voices-of-the-dead-71656065304512.html
Jul 2023
Los Angeles-based video editor and political satirist Justin T. Brown has found himself at the center of a contentious debate thanks to his AI-generated images that portray prominent politicians such as Donald Trump, Barack Obama, and Joe Biden engaged in fictional infidelities.
His provocative series, christened “AI will revolutionize the blackmail industry,” quickly came under fire, leading to Brown’s banishment from the Midjourney AI art platform, which he used to generate the pictures.
Brown said the images were envisioned as a stark warning about the potential misuse of artificial intelligence.
https://finance.yahoo.com/news/political-satirist-slammed-creating-deepfakes-120103536.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAHFVDjMw8auK-p7Cz-o9pfgphkACHGyxpSL2oH3LZFHknvARRstLWDkJ2cSZ4rXvelzrYQuSakE-3z7iQQjzkMmienY7LfR8AvCQkt5p_XMUd1MayU1yDy-1ukpkNuZZaeoKyMvoJkjhtG2AckFPousHDLjvLcBYdBoFvtmd82wZ
DECEASED
Thai creative agency Rabbit’s Tale has created a touching new campaign film for Five Star Chicken restaurant chain in Thailand called “Quality time, again” which highlights a memorable meal that people have had with their now-departed loved ones.
The campaign includes a five-minute film featuring a woman remembering her mother, who passed away following a difficult battle with cancer in 2010.
https://www.brandinginasia.com/campaign-uses-deepfake-technology-to-allow-daughter-to-have-another-meal-with-deceased-mom/
The unethical use of deepfakes
Audrey de Rancourt-Raymond, Nadia Smaili
Journal of Financial Crime
ISSN: 1359-0790
Article publication date: 19 May 2022
The purpose of this study is to discuss the harmful use of deepfakes in an organizational context, based on the only two cases the authors found that were addressed by the media from the perspective of corporate fraud. This study offers an overview of deepfake technology, and in particular, examines five W questions to better decipher the impact of these tools on organizations: What is deepfake? Who is the fraudster and who is targeted? Why use them and how? And What after? Based on these five W questions, this study provides an in-depth discussion of the two cases identified. Even though this technology has several advantages, this study examines its dark side.
In the commercial, actor Eugenio Derbez (‘No Ritmo do Coração’) is watching TV and is surprised by an ad featuring Chaves: “Eugenio, do you still watch me like you did when you were a boy?” asks little Chaves. “Chaves? Of course, you’ve been with me all my life,” Derbez replies.
According to Tilt, responsible for the production, no material previously recorded by Chespirito, as Bolaños was also known, was used to make the video. The technology makes it possible to modify facial features realistically, even in motion, with a three-dimensional application to videos and photos.
https://cinepop.com.br/chaves-reaparece-em-novo-comercial-de-streaming-atraves-de-deepfake-isso-isso-isso-341679/
Face swapping technology poses moral and ethical dilemmas
Deepfakes Can Help Families Mourn—or Exploit Their Grief.
Death holograms aren't inherently creepy. They're part of a lineage of grief technologies that stretches back to photography.
https://www.wired.com/story/deepfake-death-grief-hologram-photography-film/
Feb 2022
Can a deepfake company be ethical?
That is what DeepPrivacy offers to do in a few clicks. The site lets you quickly doctor a photo by applying the AI techniques used in deepfakes. The goal is not to make an individual look like someone else, but rather like no one else… though sometimes with a few misfires.
But Miles Fisher, best known for being the Tom Cruise deepfake guy, insists the tech behind it all is “morally neutral.” In a new interview with Today, Fisher—whose viral TikTok clips are made possible by visual effects specialist Chris Umé—reflected on the continued popularity of the @deeptomcruise account. At the time of this writing, the account had amassed more than 3.3 million followers, with the most recent clip arriving earlier this week. “I think the technology is morally neutral,” Fisher told Today when asked about the “potential threat” of deepfakes like his own around the 3:06 mark in the video up top. “As it develops, the positive output will so far outweigh the negative, nefarious uses.” Moving forward, per Fisher, the aim (including with the Umé-launched Metaphysic tech company) is to figure out how “identity rights” might work in the future. Using Cruise as an example, Fisher proposed a scenario in which “everyone” could receive compensation. “Let’s say Tom Cruise gave us the consent for this likeness, where we could move beyond just small parody clips,” he said. “Everybody gets paid for that intellectual property.” https://www.complex.com/pop-culture/tom-cruise-deepfake-actor-insists-controversial-tech-is-morally-neutral
LOS ANGELES - Andy Chanley, the afternoon drive host at Southern California's public radio station 88.5 KCSN, has been a radio DJ for over 32 years. And now, thanks to artificial intelligence technology, his voice will live on simultaneously in many places. "I may be a robot, but I still love to rock," says the robot DJ named Andy, derived from Artificial Neural Disk-JockeY, in Chanley's voice, during a demonstration for Reuters where the voice was hard to distinguish from a human DJ. https://www.asiaone.com/digital/radio-dj-you-hear-might-already-be-robot
But this capability comes with risk. One obvious danger is the creation of fake historical episodes. Imagined, mythologized and fake events can precipitate wars: a storied 14th-century defeat in the Battle of Kosovo still inflames Serbian anti-Muslim sentiments, even though nobody knows if the Serbian coalition actually lost that battle to the Ottomans. https://theconversation.com/the-slippery-slope-of-using-ai-and-deepfakes-to-bring-history-to-life-166464
reviving history
To mark Israel’s Memorial Day in 2021, the Israel Defense Forces musical ensembles collaborated with a company that specialises in synthetic videos, also known as “deepfake” technology, to bring photos from the 1948 Israeli-Arab war to life. They produced a video in which young singers clad in period uniforms and carrying period weapons sang “Hareut”, an iconic song commemorating soldiers killed in combat. As they sing, the musicians stare at faded black-and-white photographs they hold. The young soldiers in the old pictures blink and smile back at them, thanks to artificial intelligence. Live reenactments and carefully processed historical footage are expensive and time-consuming undertakings. Deepfake technology democratises such efforts, offering a cheap and widely available tool for animating old photos or creating convincing fake videos from scratch. But as with all new technologies, alongside the exciting possibilities are serious moral questions. And the questions get even trickier when these new tools are used to enhance understanding of the past and reanimate historical episodes. https://scroll.in/article/1009721/ai-and-deepfakes-are-bringing-history-to-life-but-at-a-high-moral-cost
A heart-warming commercial starring one of the most beloved actors of Turkish cinema appeared on Turkish television at the beginning of 2021. Thanks to deepfake technology, Kemal Sunal was able to star in a recent Ziraat Bank commercial more than 20 years after his passing. While this put a smile on the faces of many spectators, it has also given rise to a series of copyright questions.
There is global debate over whether copyright should subsist in works generated by artificial intelligence (“AI”) systems or whether a sui generis right should be granted to AI-generated works. Another heated topic of discussion is whether AI should be granted legal personality, which would enable the AI to be considered the author of the work. In this article, we will focus on the more specific question of who holds the copyright on deepfakes generated by AI with human intervention under the current laws. https://www.lexology.com/library/detail.aspx?g=ef6e81a2-422b-44ee-b640-d536b8080044 +
Deepfake advertisements a new frontier in marketing in India - but they have raised concerns https://www.straitstimes.com/asia/south-asia/deepfake-advertisements-a-new-frontier-in-marketing-in-india-but-they-have-raised
Adobe has built a deepfake tool, but it doesn’t know what to do with it. Deepfakes unlock creative possibilities but also nasty use cases. https://www.theverge.com/2021/10/27/22748508/adobe-deepfake-tool-max-project-morpheus
DECEASED 'Frank Sinatra' was crooning about hot tubs in 2020, more than 20 years after his death. However, it was only as a deepfake that the iconic voice of Ol' Blue Eyes sang, "Ohh, it's hot tub time" over a warbled backing track of piano and horns. The song, titled Hot Tub Christmas, is the product of artificial intelligence algorithms developed by San Francisco company OpenAI, which counts Microsoft among its investors. https://www.abc.net.au/news/2021-09-08/music-deepfakes-audio-singer-voice-death/100412494
Using the technology known as deepfake, the foundation managed to recreate the performer Luis Alberto del Paraná in a music video, singing the song Asunción and touring the city as it looks today. “The purpose of the project is a cultural rescue of national artists, one that blends the past with the present. Although the filming is done now, the face is that of the person himself,” explained Rodrigo Ríos, one of those responsible, in a conversation with Última Hora. https://www.ultimahora.com/reviven-luis-alberto-del-parana-inteligencia-artificial-n2956243.html
An MTV Lebanon video commemorating the victims of the Beirut port blast has been branded as insensitive by social media users.
Titled “A Letter to the Lebanese Judiciary,” the video was shared on MTV online platforms alongside the hashtag, “it’s been a year, the time is up.”
It featured deepfakes of two victims of the devastating Aug. 4 explosion, Ralph Mallahi and Amin Al-Zahed, speaking directly to the camera while pictures and clips from the blast were shown for context. https://www.arabnews.com/node/1901606/media
In 2019, Google released Translatotron, an AI system capable of directly translating a person’s voice into another language. The system could create synthesized translations of voices to keep the sound of the original speaker’s voice intact. But Translatotron could also be used to generate speech in a different voice, making it ripe for potential misuse in, for example, deepfakes. This week, researchers at Google quietly released a paper detailing Translatotron’s successor, Translatotron 2, which solves the original issue with Translatotron by restricting the system to retain the source speaker’s voice. Moreover, Translatotron 2 outperforms the original Translatotron by “a large margin” in terms of translation quality and naturalness, as well as “drastically” cutting down on undesirable artifacts, like babbling and long pauses. https://venturebeat.com/2021/07/23/googles-translatotron-2-removes-ability-to-deepfake-voices/
Anthony Bourdain’s former partner Ottavia Bourdain has responded to the interview given by Morgan Neville to deny giving any permission for the filmmakers to use deepfakes of the late chef’s voice. In a tweet, she wrote “I certainly was NOT the one who said Tony would have been cool with that.” It remains unconfirmed whether other members of Anthony Bourdain’s family gave their consent for the project. https://hypebeast.com/2021/7/roadrunner-a-film-about-anthony-bourdain-documentary-deepfake-audio
Meredith Broussard, a New York University journalism professor and author of the book Artificial Unintelligence: How Computers Misunderstand the World, says it’s understandable that many find Bourdain’s audio clone deeply unsettling. “I’m not surprised that his widow doesn’t feel like she gave permission for this,” she says. “It’s such new technology that nobody really expects that it’s going to be used this way.” Using AI in journalism poses the greater ethical dilemma, Broussard said. “People are more forgiving when we use this kind of technology in fiction as opposed to documentaries,” she explains. “In a documentary, people feel like it’s real and so they feel duped.” https://qz.com/2034784/the-anthony-bourdain-documentary-and-the-ethics-of-audio-deepfakes/
MMA fighter Ottavia Busia, Anthony Bourdain’s widow, revealed that she did not give the team behind the “Roadrunner” documentary permission to “deepfake” the late chef’s voice.
“I certainly was NOT the one who said Tony would have been cool with that,” Busia tweeted on July 16, countering filmmaker Morgan Neville’s claim in an interview with GQ. https://entertainment.inquirer.net/416886/anthony-bourdains-widow-denies-giving-permission-for-voice-deepfake#ixzz71vKznowV
Faux songs created from the original voices of star artists are becoming more popular (and more convincing), leading to murky questions of morality and legality.
The most recent example occurred in Mexico, when the supermarket chain Soriana used deepfake techniques to bring back to life Mario Moreno, “Cantinflas”, the film actor and comedian who died in 1993, for its campaign “La de todos los mexicanos”, which has received almost unanimously positive feedback from audiences.
New deepfake technology allows Robert De Niro to deliver his famous line from Taxi Driver in flawless German—with realistic lip movements and facial expressions. The AI software manipulates an actor's lips and facial expressions to make them convincingly match the speech of someone speaking the same lines in a different language. The artificial intelligence-based tech could reshape the movie industry, in both alluring and troubling ways.
The technology is related to deepfaking, which uses AI to paste one person's face onto someone else. It promises to allow directors to effectively reshoot movies in different languages, making foreign versions less jarring for audiences and more faithful to the original. But the power to automatically alter an actor's face so easily might also prove controversial if not used carefully.
The AI dubbing technology was developed by Flawless, a UK company cofounded by the director Scott Mann, who says he became tired of seeing poor foreign dubbing in his films.
https://arstechnica.com/gaming/2021/05/robert-de-niro-speaks-fluent-german-in-taxi-driver-thanks-to-ai/#p3
Deepfake’s flexible nature means content can be personalised, creating a fully rewarding experience for the consumer. There’s already a huge appetite for conversational connection, as shown by the runaway success of the celebrity video messaging platform, Cameo. Deepfake technology furthers this opportunity for personal dialogue while using only a fraction of the talent’s time. From chatbots to voice AI technology, brands often investigate implementing technology that reduces costs and improves customer engagement across digital products. Deepfake is the new kid on the block, giving brands warm humanity, helping them move from cold automation to intimate customer communication. https://www.campaignlive.com/article/deepfakes-digital-avatars-new-celebrity-brand-ambassadors/1713070?DCMP=EMC-CONTheCampaignFix&bulletin=the-campaign-fix
Jan 2021
Three days ago, on January 11, Reuters reported that a video of Donald Trump in which he acknowledged Joe Biden's victory was not a "confirmed deepfake". However, when you went to Facebook's "fact check", it said the video was fake. The Reuters journalists, after clearly finding themselves in a situation where they did not quite know whom to believe - after all, an official White House source is not to be trusted, and no source considered credible actually claims it is a manipulated recording - had to turn to video analysis experts to conclude that it is likely a legitimate recording. Except that, apparently, Facebook never got the "memo"...
As Reuters itself writes, the video was distributed by traditional media outlets, including the agency itself, CNN and Fox News.
DEC 2020
QUEEN OF ENGLAND (good? bad? pedagogy?)
Channel 4 said the video was created as a "stark warning" about technology and the proliferation of fake news, warning viewers to question "whether what we see and hear is always what it seems."