Tuesday, December 22, 2020

Liar's dividend

feb24

As part of that larger project, I did this investigation of deepfakes to try to understand, epistemically, what was going on — exactly why they are a threat to knowledge. In what respect do they cause false beliefs, and in what respect do they prevent people from acquiring true beliefs? And even when people are able to acquire true beliefs, do they interfere with their acquiring justified beliefs, so that it’s not knowledge? We use videos as a way of accessing information about the world that we otherwise would be too far away from — either in time or in space — to access. Videos and photographs have this benefit of expanding the realm of the world that we can acquire knowledge about. But if people are easily able to make something that looks like it’s a genuine photograph or video, then that reduces the information flow that you can get through those sources. Now the obvious problem is, for example, that somebody fakes a video of some event, and the user is either misled into thinking that that event occurred, or led into skepticism about it because of the ubiquity of, say, deepfakes.

https://news.northeastern.edu/2024/02/12/magazine/ai-deepfake-images-online-deception/


dez23

Generative AI introduces a daunting new reality: inconvenient truths can be denied as deepfaked, or at least facilitate claims of plausible deniability to evade accountability. The burden of proof, or perhaps more accurately, the “burden of truth” has shifted onto those circulating authentic content and holding the powerful to account. This is not just a crisis of identifying what is fake. It is also a crisis of protecting what is true. When anything and everything can be dismissed as AI-generated or manipulated, how do we elevate the real stories of those defending our democracy at the frontlines?

https://www.cfr.org/blog/protect-democracy-deepfake-era-we-need-bring-voices-those-defending-it-frontlines


dez23

The Liar’s Dividend: Can Politicians Claim Misinformation to Evade Accountability?

Kaylyn Jackson Schiff, Daniel S. Schiff, Natália S. Bueno

October 19, 2023

Abstract

This study addresses the phenomenon of misinformation about misinformation, or politicians “crying wolf” over fake news. Strategic and false claims that stories are fake news or deepfakes may benefit politicians by helping them maintain support after a scandal. We posit that this benefit, known as the “liar’s dividend,” may be achieved through two politician strategies: by invoking informational uncertainty or by encouraging oppositional rallying of core supporters. We administer five survey experiments to over 15,000 American adults detailing hypothetical politician responses to stories describing real politician scandals. We find that claims of misinformation representing both strategies raise politician support across partisan subgroups. These strategies are effective against text-based reports of scandals, but are largely ineffective against video evidence and do not reduce general trust in media. Finally, these false claims produce greater dividends for politicians than alternative responses to scandal, such as remaining silent or apologizing.


Delfino

THE RISE OF DEEPFAKES AND THE “LIAR’S DIVIDEND”

Deepfakes are fabricated audiovisual content created or altered to appear to the observer to be a genuine account of the speech, conduct, image, or likeness of an individual or an event.13 They create a fake reality by superimposing a person’s face on another’s body or changing the contents of one’s speech.14 A combination of “deep learning” and “fake,” so-called “deepfake” programs use AI to produce these fake audio-visual images.15 This new technology, developed and unleashed on the internet in late 2017, allows anyone with a smartphone to believably map another’s movements and words onto someone else’s face and voice to make them appear to say or do anything.16 And the more video and audio of the person fed into the computer’s deep-learning algorithms, the more convincing the result.17

(continues)

THE DEEPFAKE DEFENSE—EXPLORING THE LIMITS OF THE LAW AND ETHICAL NORMS IN PROTECTING LEGAL PROCEEDINGS FROM LYING LAWYERS

Professor Rebecca A. Delfino*

LMU Loyola Law School, Los Angeles

~



Citron and Chesney 2019

The Liar’s Dividend: Beware the Cry of Deep-Fake News

We conclude our survey of the harms associated with deep fakes by flagging another possibility, one different in kind from those noted above. In each of the preceding examples, the harm stems directly from the use of a deep fake to convince people that fictional things really occurred. But not all lies involve affirmative claims that something occurred (that never did): some of the most dangerous lies take the form of denials.

Deep fakes will make it easier for liars to deny the truth in distinct ways. A person accused of having said or done something might create doubt about the accusation by using altered video or audio evidence that appears to contradict the claim. This would be a high-risk strategy, though less so in situations where the media is not involved and where no one else seems likely to have the technical capacity to expose the fraud. In situations of resource-inequality, we may see deep fakes used to escape accountability for the truth.

Deep fakes will prove useful in escaping the truth in another equally pernicious way. Ironically, liars aiming to dodge responsibility for their real words and actions will become more credible as the public becomes more educated about the threats posed by deep fakes. Imagine a situation in which an accusation is supported by genuine video or audio evidence. As the public becomes more aware of the idea that video and audio can be convincingly faked, some will try to escape accountability for their actions by denouncing authentic video and audio as deep fakes. Put simply: a skeptical public will be primed to doubt the authenticity of real audio and video evidence. This skepticism can be invoked just as well against authentic as against adulterated content.

Hence what we call the liar’s dividend: this dividend flows, perversely, in proportion to success in educating the public about the dangers of deep fakes. The liar’s dividend would run with the grain of larger trends involving truth skepticism. Most notably, recent years have seen mounting distrust of traditional sources of news. That distrust has been stoked relentlessly by President Trump and like-minded sources in television and radio; the mantra “fake news” has become an instantly recognized shorthand for a host of propositions about the supposed corruption and bias of a wide array of journalists, and a useful substitute for argument when confronted with damaging factual assertions. Whether one labels this collection of attitudes postmodernist or nihilist,138 the fact remains that it has made substantial inroads into public opinion in recent years.

Against that backdrop, it is not difficult to see how “fake news” will extend to “deep-fake news” in the future. As deep fakes become widespread, the public may have difficulty believing what their eyes or ears are telling them—even when the information is real. In turn, the spread of deep fakes threatens to erode the trust necessary for democracy to function effectively.139

The combination of truth decay and trust decay accordingly creates greater space for authoritarianism. Authoritarian regimes and leaders with authoritarian tendencies benefit when objective truths lose their power.140 If the public loses faith in what they hear and see and truth becomes a matter of opinion, then power flows to those whose opinions are most prominent—empowering authorities along the way.141

Cognitive bias will reinforce these unhealthy dynamics. As Part II explored, people tend to believe facts that accord with our preexisting beliefs.142 As research shows, people often ignore information that contradicts their beliefs and interpret ambiguous evidence as consistent with their beliefs.143 People are also inclined to accept information that pleases them when given the choice.144 Growing appreciation that deep fakes exist may provide a convenient excuse for motivated reasoners to embrace these dynamics, even when confronted with information that is in fact true.

138. For a useful summary of that debate, see Thomas B. Edsall, Is President Trump a Stealth Postmodernist or Just a Liar?, N.Y. TIMES (Jan. 25, 2018), https://www.nytimes.com/2018/01/25/opinion/trump-postmodernism-lies.html [https://perma.cc/DN7F-AEPA].

https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3213954


jul23 (?)

(...) is already so high.

Preparing for the Liar’s Dividend’s consequences is not just a theoretical exercise: in June 2019 compromising footage of the Malaysian Minister of Economic Affairs Azmin Ali was allegedly captured on video. In response to the accusations, Ali claimed that the video was a deepfake, something that was not able to be confirmed by experts who examined the footage.26

It is not difficult to imagine similar scenarios playing out in the 2020 U.S. election. What would happen if something like the infamous “Access Hollywood” tape, where Donald Trump is heard making disparaging and offensive comments about women in an audio recording, were to happen in the 2020 election?27

During one of the workshops, participants engaged in the above thought experiment and after describing the scenario, the person leading the experiment observed that “In that moment in the past, the very media artifact was the truth,” but with the Liar’s Dividend, “I get to claim any video is fake,” even if it’s been authenticated as real, the newsroom manager said. “The next Access Hollywood tape will be challenged as a deepfake” even if it hasn’t been manipulated or distorted.

It is important to remember that not everyone has to believe a deepfake is real for it to have an effect. After all, we know that creating doubt is a major strategy of disinformation campaigns,28 and now, with the mere existence of synthetic media, it is even more difficult to know what and whom to trust. Malicious actors can capitalize on the Liar’s Dividend by creating deepfakes that are just convincing enough to make people question whether it is real. Furthermore, for a malicious deepfake to have an effect, it doesn’t need to reach massive audiences. As one of the participants in the workshops noted, “When I say viral, it doesn’t need to be millions of views.”

(continues)

DEEPFAKES IN THE 2020 ELECTIONS AND BEYOND: Lessons From the 2020 Workshop Series



jun23

For years now, academics have feared that the rise of increasingly convincing deepfake video could make it easier for bad actors to brush aside real videos as potentially manipulated. These scenarios, which some have dubbed “the liar’s dividend,” create a particularly insidious situation in jury trial settings, where attorneys for either the defense or the prosecution simply need to instill some doubt in a jury’s mind. If deepfakes are indistinguishable from reality and present everywhere one looks, how can anyone confidently claim any single video is true?

8 Times ‘Deepfake’ Videos Were Actually Real

The rise of convincing and ubiquitous deepfake technology is leading to scenarios where bad actors deny reality in court.

https://gizmodo.com/ai-deepfake-8-times-deepfake-videos-were-actually-real-1850520257

ma23

The convergence of deepfakes and the Mandela effect raises questions about the trust we place in our memories and in digital media. Deepfakes can be used to alter historical events and sow confusion among the masses, thereby reinforcing the Mandela effect. The combination of these two phenomena could lead to a society in which it becomes increasingly difficult to distinguish reality from fiction.

https://www.thmmagazine.fr/deepfakes-et-effet-mandela-lart-de-la-manipulation-numerique/




mai23

Turkish presidential candidate quits race after release of alleged sex tape

Muharrem İnce pulls out just days from close election race saying alleged sex tape is deepfake

https://www.theguardian.com/world/2023/may/11/muharrem-ince-turkish-presidential-candidate-withdraws-alleged-sex-tape




mai23
People are trying to claim real videos are deepfakes. The courts are not amused
https://www.npr.org/2023/05/08/1174132413/people-are-trying-to-claim-real-videos-are-deepfakes-the-courts-are-not-amused

dez22

 In several recent high-profile trials, defendants have sought to cast doubt on the reliability of video evidence by suggesting that artificial intelligence may have surreptitiously altered the videos.

These challenges are the most notable examples yet of defendants leveraging the growing prevalence in society of AI-manipulated media — often called deepfakes — to question evidence that, until recently, many thought was nearly unassailable.

There are two central concerns about deepfakes in the courtroom. First, as manipulated media becomes more realistic and harder to detect, the risk increases of falsified evidence finding its way into the record and causing an unjust result.

Second, the mere existence of deepfakes makes it more likely the opposing party will challenge the integrity of evidence, even when they have a questionable basis for doing so. This phenomenon, when individuals play upon the existence of deepfakes to challenge the authenticity of genuine media by claiming it is forged, has become known as the "liar's dividend," a term coined by law professors Bobby Chesney and Danielle Citron.[1]
https://www.wilmerhale.com/en/insights/publications/20221221-the-other-side-says-your-evidence-is-a-deepfake-now-what


nov22

Remember Will Smith's brilliant performance in that remake of The Matrix? Well, it turns out almost half the participants in a new study on 'deep fakes' believed fake remakes featuring different actors in old roles were real, highlighting the risk of 'false memory' created by online technology.

The study, carried out by researchers at the School of Applied Psychology in University College Cork, presented 436 people with deepfake video of fictitious movie remakes.

Deepfakes are manipulated media created using artificial intelligence technologies, where an artificial face has been superimposed onto another person’s face, resulting in a highly convincing recording of someone doing or saying something they never did.

https://www.irishexaminer.com/news/arid-41006312.html

jun22

The Liar’s Dividend: The Impact of Deepfakes and Fake News on Politician Support and Trust in Media

https://gvu.gatech.edu/research/projects/liars-dividend-impact-deepfakes-and-fake-news-politician-support-and-trust-media



mai22
This study addresses the phenomenon of misinformation about misinformation, or politicians “crying wolf” over fake news. Strategic and false allegations that stories are fake news or deepfakes may benefit politicians by helping them maintain support in the face of information damaging to their reputation. We posit that this concept, known as the “liar’s dividend,” works through two theoretical channels: by invoking informational uncertainty or by encouraging oppositional rallying of core supporters. To evaluate the implications of the liar’s dividend, we use three survey experiments detailing hypothetical politician responses to video or text news stories depicting real politician scandals. We find that allegations of misinformation raise politician support, while potentially undermining trust in media. Moreover, these false claims produce greater dividends for politicians than longstanding alternative responses to scandal, such as remaining silent or apologizing. Finally, false allegations of misinformation pay off less for videos (“deepfakes”) than text stories (“fake news”).

The Liar’s Dividend: Can Politicians Use Deepfakes and Fake News to Evade Accountability?
Kaylyn Jackson Schiff, Daniel Schiff, and Natália S. Bueno
May 10, 2022


fev22
There are numerous concerns that can be raised with respect to the rising prominence of deepfakes in the digital landscape, but perhaps the most alarming concern refers to the belief that deepfake technology will irreversibly blur the lines between what can be considered ‘real’ and what can be considered ‘fake.’ This concern is captured in popular discourse by the notion of the Infopocalypse.
THESIS http://essay.utwente.nl/89478/


dez21
If you’re thinking, “I would never fall for a deepfake,” you are not alone. In a recent open-access study by researchers from the Center for Humans and Machines (Max-Planck-Institute for Human Development) and CREED (University of Amsterdam), the vast majority of respondents showed high confidence in their abilities to detect deepfakes (see blue bars, Figure 1). However, these confidence levels significantly exceeded their actual detection abilities (see orange bars, Figure 1). When left to their own devices — their eyes and ears — people could no longer reliably detect deepfakes. In fact, even when financially incentivized to accurately gauge their detection abilities, they are overconfident in them (Köbis, Doležalová & Soraperra, 2021). https://www.psychologytoday.com/gb/blog/decisions-in-context/202112/the-psychology-deepfakes

nov21


In June 2019, a grainy video proliferated throughout Malaysian social media channels that allegedly showed the country’s Economic Affairs Minister, Mohamed Azmin Ali, having sex with a younger staffer named Muhammad Haziq Abdul Aziz. Although Azmin insisted that the video was fake and part of a “nefarious plot” to derail his political career, Abdul Aziz proceeded to post a video on Facebook ‘confessing’ that he was the man in the video and calling for an investigation into Azmin. The ensuing controversy threw the country into uproar. Azmin kept his job, though, after the prime minister declared that the video was likely a deepfake—a claim several experts have since disputed.
 Was the low-quality, hidden-camera video genuine? No one knows, except its creators. But just like in the case of the laptop, the truth was inaccessible—and into that ambiguity stepped powerful parties with their own versions of truth. The chaos engendered by the Hunter Biden laptop’s unverifiability provides a window into a future in which a more sophisticated piece of visual disinformation wreaks unimaginable, unstoppable havoc. https://brownpoliticalreview.org/2021/11/hunters-laptop-deepfakes-and-the-arbitration-of-truth/

nov21
We sat down with Sam Gregory, program director at Witness.
About a decade ago, we started to see new challenges to the integrity of video. For instance, people would question the stories that clips were telling and say, “That wasn’t filmed in that place, it was filmed in this place.” So, we partnered with an organization called The Guardian Project to start building tools like ProofMode, which adds rich metadata to videos and photographs and cryptographically signs that piece of media to increase verifiability, show if an image has been changed and provide a chain of custody. 

Deepfakes make it easier to dismiss footage of critical events. The “Infopocalypse, the world is collapsing, you can’t believe anything you see,” rhetoric that we’ve all heard in recent years has been deeply harmful. Deepfakes have gotten easier to make. However, they’re still not prevalent in the really sophisticated way that I think many people believe they are. There’s what we call the “liar’s dividend,” which is the ability to easily dismiss something true as a deepfake. We’ve seen this in some high-profile cases recently. Witness was involved in a case in Myanmar, involving the former chief minister of Yangon. A video was released by the military government in March this year, in which he appears to accuse Aung San Suu Kyi, the country’s de facto leader before the February 2021 coup, of corruption. In the video, he looks very static, his mouth looks out of sync and he sounds unlike his normal self. People saw this and were like, “It’s a deepfake.” We’re seeing this happen towards video evidence globally. Increasingly, people are just saying, “You can’t believe anything.” We need to be investing in equity of access to tools that authenticate videos and debunk fakes, so they are available broadly.

 https://www.codastory.com/authoritarian-tech/decentralized-web-human-rights/
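
A minimal sketch (my own illustration, not ProofMode’s actual code or data format) of the signing-and-verification idea Gregory describes: hash the media file, bundle the hash with capture metadata, and sign the bundle with a device key, so any later change to the file or the metadata can be detected. The file name clip.mp4, the device field, and the choice of Ed25519 via Python’s cryptography package are assumptions for illustration only.

```python
# Sketch of capture-time signing for media verifiability (assumed design, not ProofMode's).
import hashlib
import json
from datetime import datetime, timezone

from cryptography.hazmat.primitives.asymmetric import ed25519


def sha256_of_file(path: str) -> str:
    """Return the SHA-256 hex digest of the file's bytes."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def sign_media(path: str, private_key: ed25519.Ed25519PrivateKey) -> dict:
    """Build a signed proof record: media hash + capture metadata + Ed25519 signature."""
    record = {
        "file_sha256": sha256_of_file(path),
        "captured_at": datetime.now(timezone.utc).isoformat(),
        "device": "example-phone",  # hypothetical metadata field
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = private_key.sign(payload).hex()
    return record


def verify_media(path: str, record: dict, public_key: ed25519.Ed25519PublicKey) -> bool:
    """Recompute the hash and check the signature; False if file or record was altered."""
    if sha256_of_file(path) != record["file_sha256"]:
        return False
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    try:
        public_key.verify(bytes.fromhex(record["signature"]), payload)
        return True
    except Exception:
        return False


if __name__ == "__main__":
    key = ed25519.Ed25519PrivateKey.generate()
    proof = sign_media("clip.mp4", key)  # "clip.mp4" is a hypothetical file name
    print(verify_media("clip.mp4", proof, key.public_key()))
```

The signature only moves trust onto the key: for this to support a real chain of custody, the public key itself has to be distributed and attested through a trusted channel, which is presumably where tools like ProofMode do much of their work.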


ag21
We believe what we want to believe. The psychologist and writer Michael Brant Shermer, a columnist for Scientific American, says: “People like to be deceived.” He argues that the real world is not enough for the size of our longings and our imagination. We always want more, because the inexplicable fascinates us, the fantastic attracts us, and fantasy delights us. All fiction, including superhero films, surprises us, activates our creativity, and takes us out of our routine. The danger lies in not knowing how to distinguish, and not controlling, what is a lie and what is reality. Or when innocent people are harmed. (...) Gone is the time of the saying that the one who lets himself be deceived is as guilty as the deceiver. As Shermer argues, “magic entertains and tricks us precisely because we want to be deluded.” Unfortunately, we have not evolved to doubt or to develop a sharper critical eye, and deepfakes confuse and inhibit our bias toward skepticism. https://www.portaldosjornalistas.com.br/inteligencia-artificial-deepfake-ou-matrix/

jul21

Finally, there is social adjustment. Behavioral scientists talk about educating Internet users to pre-bunk misleading content rather than trying to debunk it once it’s posted. The greatest danger of deepfake content is probably that it inflates the ‘liar’s dividend’, so that everyone can question everything and devalue the currency of truth. As US Congressman Adam Schiff explained, “fake videos can not only be portrayed as real, but real information can be passed on as fake.”

Such is the malleability of technology and the ingenuity of the human mind that the war against harmful deepfakes may never be fully won. However, the impact of deepfakes depends on the ways in which they are used, the contexts in which they are placed, and the extent of the audience they reach. These are grounds still worth disputing. FT https://afegames.com/deepfakes-threaten-to-inflate-the-liars-dividend/


jun21
ALTERNATIVE VIEW?
Deepfake is not only incapable of contributing to such an end, but also offers a unique opportunity to transition towards a framework of social trust better suited for the challenges entailed by the digital age.
https://link.springer.com/article/10.1007/s43681-021-00072-1 

jun21
Moreover, the sheer possibility of deepfakes would create a plausible deniability of anything reported or recorded. Thereby doubts sown by deepfakes could permanently alter our trust in audio and video. For instance, in 2018, Cameroon’s minister of communication dismissed as fake a video Amnesty International thinks shows Cameroonian soldiers executing civilians.32 Similarly, Donald Trump, who in a recorded conversation boasted about grabbing women’s genitals, later claimed the tape was fake. He thereby enabled his followers to take that stance.33 Such denials would then be among the multifarious voices on an issue, making it ever harder to motivate people to scrutinize their beliefs.

Catherine Kerner and Mathias Risse
Beyond Porn and Discreditation: Epistemic Promises and Perils of Deepfake Technology in Digital Lifeworlds
https://doi.org/10.1515/mopp-2020-0024
Published online November 12, 2020

jun21

Videos speak to the human brain in a much more immediate manner than text - the ‘I have seen it with my own eyes’ phenomenon. And worse, the mere possibility of manipulating video may cause anything to be questioned. A genuine video can be dismissed as a deepfake, or a manipulated one trumpeted as genuine. Over time the public attitude may become "disbelief by default", as Sam Gregory of the NGO WITNESS noted. https://euobserver.com/opinion/151935


mai21
‘Deepfake’ that supposedly fooled European politicians was just a look-alike, say pranksters. Fear of deepfakes seems to have outpaced the technology itself. https://www.theverge.com/2021/4/30/22407264/deepfake-european-polticians-leonid-volkov-vovan-lexus

apr21

"My biggest concern is not the abuse of deepfakes, but the implication of entering a world where any image, video, audio can be manipulated. In this world, if anything can be fake, then nothing has to be real, and anyone can conveniently dismiss inconvenient facts" as synthetic media, Hany Farid, an AI and deepfakes researcher and associate dean of UC Berkeley's School of Information, told Insider. That paradox is known as the "liar's dividend," a name given to it by law professors Danielle Citron and Robert Chesney. Many of the harms that deepfakes can cause — such as deep porncyberbullyingcorporate espionage, and political misinformation — stem from bad actors using deepfakes to "convince people that fictional things really occurred," Citron and Chesney wrote in a 2018 research paper. https://www.businessinsider.com/deepfakes-liars-dividend-explained-future-misinformation-social-media-fake-news-2021-4 


mar21
The junta’s attempt to prove the graft allegation against detained State Counselor Daw Aung San Suu Kyi by showing a video clip of the detained Chief Minister of the Yangon Region who is alleged to have bribed her has met with public skepticism. Many citizens doubted the authenticity of the video, as the chief minister’s lip movements were not synchronized with the audio.
https://www.irrawaddy.com/news/burma/myanmar-junta-accused-using-deepfake-technology-prove-graft-case-daw-aung-san-suu-kyi.html


mar21
How Deepfakes could help implant false memories in our minds
Basically, it’s relatively easy to implant false memories. Getting rid of them is the hard part. The study was conducted on 52 subjects who agreed to allow the researchers to attempt to plant a false childhood memory in their minds over several sessions. After a while, many of the subjects began to believe the false memories. The researchers then asked the subjects’ parents to claim the false stories were true. The researchers discovered that the addition of a trusted person made it easier to both embed and remove false memories. Per the paper: "The present study therefore not only replicates and extends previous demonstrations of false memories but, crucially, documents their reversibility after the fact: Employing two ecologically valid strategies, we show that rich but false autobiographical memories can mostly be undone. Importantly, reversal was specific to false memories (i.e., did not occur for true memories)."

What does this have to do with Deepfakes? It’s simple: if we’re so easily manipulated through tidbits of exposure to tiny little ads in our Facebook feed, imagine what could happen if advertisers started hijacking the personas and visages of people we trust? If you can convince someone that the people they respect and care about believe they’ve done something wrong, it’s easier for them to accept it as a fact. How many law enforcement agencies in the world currently have an explicit policy against using manipulated media in the solicitation of a confession? Our guess would be: close to zero. With Deepfakes and enough time, you could convince someone of just about anything as long as you can figure out a way to get them to watch your videos. 

https://thenextweb.com/neural/2021/03/23/how-deepfakes-could-help-implant-false-memories-in-our-minds/

 


mar21
Results indicated that Deepfaked videos and audio have a strong psychological impact on the viewer, and are just as effective in biasing their attitudes and intentions as genuine content. Many people are unaware that Deepfaking is possible; find it difficult to detect when they are being exposed to it; and most importantly, neither awareness nor detection serves to protect people from its influence.

Deepfaked online content is highly effective in manipulating people's attitudes and intentions

fev21
Their testimony also pinpointed how claims of deepfakery and video manipulation were being increasingly used for what law professors Danielle Citron and Bobby Chesney call the “liar’s dividend,” the ability of the powerful to claim plausible deniability on incriminating footage. Statements like “It’s a deepfake” or “It’s been manipulated” have often been used to disparage a leaked video of a compromising situation or to attack one of the few sources of civilian power in authoritarian regimes: the credibility of smartphone footage of state violence. This builds on histories of state-sponsored deception. In Myanmar, the army and authorities have repeatedly both shared fake images themselves and challenged the veracity and integrity of real evidence of human rights violations.
wired.com/story/opinion-authoritarian-regimes-could-exploit-cries-of-deepfake/


jan21
A recent video posted to Twitter by President Donald Trump has some users suspicious that it's a deepfake, underscoring the difficulty in detecting what's real and what's fake in social media videos and highlighting erosion of public trust in government and the media.
https://searchenterpriseai.techtarget.com/news/252494572/Doubts-about-Trump-video-show-how-hard-deepfakes-are-to-detect


abril20

While authentication protects against false negatives, there remains the possibility – even with allowances, such as to permit facial blurring – that it could produce false positives for honestly edited footage. In the immediate future, neither detection nor authentication can offer an absolutely watertight way to flag up deepfakes as they multiply online. (...) “What worries me more than fully faked videos is small alternations to a video like changing the lapel on a military uniform to make it from one country to another,” said Sam Dubberley, a University of Essex researcher and adviser to Amnesty International’s Evidence Lab. “It’s easy enough to [debunk] if it’s Trump or Putin but if it’s a Rohingya village chief and there’s WhatsApp audio saying “All the Rohingya must rise up and burn down the police state” that’s going to be impossible to prove where it came from.”
Deepfakes have arrived at a moment in which – for many people – the truth can be whatever they want it to be. The mere existence of deepfakes aggravates this atmosphere of tribalism and distrust in which any evidence can be dismissed as fake: this problem has been described by academics as the “liar’s dividend”.
“The worry I have is that deepfakes are a way of creating chaos in the current disinformation climate […] but also they’ll create some sort of plausible deniability and that’s what I see as being the major aim,” said Professor Lilian Edwards, an internet law and policy expert based at Newcastle University. “It’s a chaotic aim.”
Edwards cites Trump’s claim that his pussy-grabbing audio tape was faked as an example of politicians weaponising the plausible deniability legitimised by synthetic media. It is likely that those who believe Trump’s denial were already prepared to accept as fact whatever confirms their tribalistic beliefs, and it is easy to dismiss these people as a lost cause who did not need more excuses to choose their own facts. However, there is evidence that the liar’s dividend specifically associated with deepfakes (and the misconception that creating realistic deepfakes is trivial) is already a potent threat in parts of the world.
In Gabon in late 2018, a video of President Ali Bongo (who had temporarily stopped making public appearances due to ill health) was shared, and its unusual appearance led an opposition politician to joke and then seriously argue that the video was a deepfake. Days later, members of Gabon’s military attempted a coup, citing the video’s appearance as evidence that affairs were not as they should be. In Malaysia last year, a gay sex tape allegedly featuring the Minister of Economic Affairs and a rival minister’s aide circulated online. While the aide swore that the video was real, the minister and Prime Minister dismissed it as a deepfake, allowing the minister to get away without the serious legal consequences he may otherwise expect in the socially conservative country.
Subsequent analysis seemed to conclude that neither of the two videos were deepfakes, but damage was done regardless: “Awareness of deepfakes alone is destabilising political processes by undermining the perceived objectivity of videos featuring politicians and public figures,” Deeptrace wrote in its 2019 report. Western democracies like the UK, many of which have seen their institutions weakened in the past few years, cannot take for granted that they are invulnerable to destabilisation.
The erosion of trust in video caused by deepfakes is likely to resonate beyond the sphere of politics, as the impact of fake news is felt across healthcare, science, and other areas. Video evidence fuels social movements like Black Lives Matter and the Hong Kong Pro-Democracy Protests, and is increasingly significant in courts of law, with the European Human Rights Advocacy Centre submitting video in its litigation regarding the 2014 annexation of Crimea. LINK


 (see ALI BONGO)

This dynamic is exacerbated by what researchers term the ‘liar’s dividend’: that is, efforts to debunk misinformation or propaganda can make it more difficult for audiences to trust all sources of information. This underscores the need for effective policy responses to weaponised deep fakes. Governments must act early to reassure the public that they’re responding to the challenges of weaponised deep fakes, lest panic or credulity outstrip the impact of the fakes. https://www.aspi.org.au/report/weaponised-deep-fakes


So how can truth emerge in a marketplace riddled with deepfakes? Do we just take the path of least resistance, believe whatever we want to believe, and truth be damned? And we may not only believe the lie, but also start disbelieving the truth. We have already seen people invoke the phenomenon of deepfakes to cast doubt on real evidence of their wrongdoing. We have seen politicians say of audio of their embarrassing comments: “Come on, that’s fake news. You can’t believe what your eyes and ears are telling you.” And it is that danger that Professor Robert Chesney and I call the “liar’s dividend”: the risk that liars will invoke deepfakes to escape accountability for their wrongdoing. LINK
