feb24
As part of that larger project, I did this investigation of deepfakes to try to understand, epistemically, what was going on — exactly why they are a threat to knowledge. In what respect do they cause false beliefs, and in what respect do they prevent people from acquiring true beliefs? And even when people are able to acquire true beliefs, do deepfakes interfere with their acquiring justified beliefs, so that it’s not knowledge? We use videos as a way of accessing information about the world that we otherwise would be too far away from — either in time or in space — to access. Videos and photographs have this benefit of expanding the realm of the world that we can acquire knowledge about. But if people are easily able to make something that looks like a genuine photograph or video, then that reduces the information flow that you can get through those sources. Now the obvious problem is, for example, that somebody fakes a video of some event, and the viewer is either misled into thinking that that event occurred, or led into skepticism about it because of the ubiquity of, say, deepfakes.
https://news.northeastern.edu/2024/02/12/magazine/ai-deepfake-images-online-deception/
dec23
Generative AI introduces a daunting new reality: inconvenient truths can be denied as deepfaked, or at least facilitate claims of plausible deniability to evade accountability. The burden of proof, or perhaps more accurately, the “burden of truth” has shifted onto those circulating authentic content and holding the powerful to account. This is not just a crisis of identifying what is fake. It is also a crisis of protecting what is true. When anything and everything can be dismissed as AI-generated or manipulated, how do we elevate the real stories of those defending our democracy at the frontlines?
https://www.cfr.org/blog/protect-democracy-deepfake-era-we-need-bring-voices-those-defending-it-frontlines
dec23
The Liar’s Dividend: Can Politicians Claim Misinformation to Evade Accountability?
Kaylyn Jackson Schiff, Daniel S. Schiff, Natália S. Bueno
October 19, 2023
Abstract
This study addresses the phenomenon of misinformation about misinformation, or politicians “crying wolf” over fake news. Strategic and false claims that stories are fake news or deepfakes may benefit politicians by helping them maintain support after a scandal. We posit that this benefit, known as the “liar’s dividend,” may be achieved through two politician strategies: by invoking informational uncertainty or by encouraging oppositional rallying of core supporters. We administer five survey experiments to over 15,000 American adults detailing hypothetical politician responses to stories describing real politician scandals. We find that claims of misinformation representing both strategies raise politician support across partisan subgroups. These strategies are effective against text-based reports of scandals, but are largely ineffective against video evidence and do not reduce general trust in media. Finally, these false claims produce greater dividends for politicians than alternative responses to scandal, such as remaining silent or apologizing.
Delfino
THE RISE OF DEEPFAKES AND THE “LIAR’S DIVIDEND”
Deepfakes are fabricated audiovisual content created or altered to appear to the observer to be a genuine account of the speech, conduct, image, or likeness of an individual or an event.13 They create a fake reality by superimposing a person’s face on another’s body or changing the contents of one’s speech.14 A combination of “deep learning” and “fake,” so-called “deepfake” programs use AI to produce these fake audio-visual images.15 This new technology, developed and unleashed on the internet in late 2017, allows anyone with a smartphone to believably map another’s movements and words onto someone else’s face and voice to make them appear to say or do anything.16 And the more video and audio of the person fed into the computer’s deep-learning algorithms, the more convincing the result.17
(continues)
THE DEEPFAKE DEFENSE—EXPLORING THE LIMITS OF THE LAW AND ETHICAL NORMS IN PROTECTING LEGAL PROCEEDINGS FROM LYING LAWYERS
Professor Rebecca A. Delfino*
LMU Loyola Law School, Los Angeles
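A note on the mechanics Delfino describes: the classic face-swap pipeline is a pair of autoencoders that share a single encoder, with one decoder trained per identity. Each decoder learns to reconstruct its own person’s face crops from the shared latent space; feeding person A’s frames through person B’s decoder then maps A’s expressions onto B’s face. Below is a minimal PyTorch sketch of that shared-encoder idea; the layer sizes and names are illustrative simplifications of my own, not code from any actual deepfake tool.

import torch
import torch.nn as nn

class Encoder(nn.Module):
    # Compresses a 64x64 RGB face crop into a small shared latent code.
    def __init__(self, latent=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(3 * 64 * 64, 1024), nn.ReLU(),
            nn.Linear(1024, latent), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    # Reconstructs one specific person's face from the shared code.
    def __init__(self, latent=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent, 1024), nn.ReLU(),
            nn.Linear(1024, 3 * 64 * 64), nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, 64, 64)

encoder = Encoder()                          # shared by both identities
decoder_a, decoder_b = Decoder(), Decoder()  # one decoder per identity
params = (list(encoder.parameters()) + list(decoder_a.parameters())
          + list(decoder_b.parameters()))
opt = torch.optim.Adam(params, lr=1e-4)
loss_fn = nn.MSELoss()

def train_step(faces_a, faces_b):
    # Each decoder learns to rebuild its own person's faces; the more
    # footage fed in, the more training pairs, and the more convincing
    # the eventual swap (as the excerpt above notes).
    loss = (loss_fn(decoder_a(encoder(faces_a)), faces_a)
            + loss_fn(decoder_b(encoder(faces_b)), faces_b))
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# The "swap": encode a frame of person A, decode it as person B.
with torch.no_grad():
    frame_of_a = torch.rand(1, 3, 64, 64)   # stand-in for a real frame
    fake_b = decoder_b(encoder(frame_of_a))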
~
citron and chesney 2019
The Liar’s Dividend: Beware the Cry of Deep-Fake News
We conclude our survey of the harms associated with deep fakes by flagging another possibility, one different in kind from those noted above. In each of the preceding examples, the harm stems directly from the use of a deep fake to convince people that fictional things really occurred. But not all lies involve affirmative claims that something occurred (that never did): some of the most dangerous lies take the form of denials.

Deep fakes will make it easier for liars to deny the truth in distinct ways. A person accused of having said or done something might create doubt about the accusation by using altered video or audio evidence that appears to contradict the claim. This would be a high-risk strategy, though less so in situations where the media is not involved and where no one else seems likely to have the technical capacity to expose the fraud. In situations of resource-inequality, we may see deep fakes used to escape accountability for the truth.

Deep fakes will prove useful in escaping the truth in another equally pernicious way. Ironically, liars aiming to dodge responsibility for their real words and actions will become more credible as the public becomes more educated about the threats posed by deep fakes. Imagine a situation in which an accusation is supported by genuine video or audio evidence. As the public becomes more aware of the idea that video and audio can be convincingly faked, some will try to escape accountability for their actions by denouncing authentic video and audio as deep fakes. Put simply: a skeptical public will be primed to doubt the authenticity of real audio and video evidence. This skepticism can be invoked just as well against authentic as against adulterated content.

Hence what we call the liar’s dividend: this dividend flows, perversely, in proportion to success in educating the public about the dangers of deep fakes. The liar’s dividend would run with the grain of larger trends involving truth skepticism. Most notably, recent years have seen mounting distrust of traditional sources of news. That distrust has been stoked relentlessly by President Trump and like-minded sources in television and radio; the mantra “fake news” has become an instantly recognized shorthand for a host of propositions about the supposed corruption and bias of a wide array of journalists, and a useful substitute for argument when confronted with damaging factual assertions. Whether one labels this collection of attitudes postmodernist or nihilist,138 the fact remains that it has made substantial inroads into public opinion in recent years.

Against that backdrop, it is not difficult to see how “fake news” will extend to “deep-fake news” in the future. As deep fakes become widespread, the public may have difficulty believing what their eyes or ears are telling them—even when the information is real. In turn, the spread of deep fakes threatens to erode the trust necessary for democracy to function effectively.139

The combination of truth decay and trust decay accordingly creates greater space for authoritarianism. Authoritarian regimes and leaders with authoritarian tendencies benefit when objective truths lose their power.140 If the public loses faith in what they hear and see and truth becomes a matter of opinion, then power flows to those whose opinions are most prominent—empowering authorities along the way.141

Cognitive bias will reinforce these unhealthy dynamics. As Part II explored, people tend to believe facts that accord with our preexisting beliefs.142 As research shows, people often ignore information that contradicts their beliefs and interpret ambiguous evidence as consistent with their beliefs.143 People are also inclined to accept information that pleases them when given the choice.144 Growing appreciation that deep fakes exist may provide a convenient excuse for motivated reasoners to embrace these dynamics, even when confronted with information that is in fact true.

138. For a useful summary of that debate, see Thomas B. Edsall, Is President Trump a Stealth Postmodernist or Just a Liar?, N.Y. TIMES (Jan. 25, 2018), https://www.nytimes.com/2018/01/25/opinion/trump-postmodernism-lies.html [https://perma.cc/DN7F-AEPA].
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3213954
jul23 (?)
(...) is already so high.
Preparing for the Liar’s Dividend’s consequences is not just a theoretical exercise: in June 2019, compromising footage allegedly showing the Malaysian Minister of Economic Affairs, Azmin Ali, began circulating. In response to the accusations, Ali claimed that the video was a deepfake, a claim that experts who examined the footage were unable to confirm.26
It is not difficult to imagine similar scenarios playing out in the 2020 U.S. election. What would happen if something like the infamous “Access Hollywood” tape, where Donald Trump is heard making disparaging and offensive comments about women in an audio recording, were to happen in the 2020 election?27
During one of the workshops, participants engaged in the above thought experiment and after describing the scenario, the person leading the experiment observed that “In that moment in the past, the very media artifact was the truth,” but with the Liar’s Dividend, “I get to claim any video is fake,” even if it’s been authenticated as real, the newsroom manager said. “The next Access Hollywood tape will be challenged as a deepfake” even if it hasn’t been manipulated or distorted.
It is important to remember that not everyone has to believe a deepfake is real for it to have an effect. After all, we know that creating doubt is a major strategy of disinformation campaigns,28 and now, with the mere existence of synthetic media, it is even more difficult to know what and whom to trust. Malicious actors can capitalize on the Liar’s Dividend by creating deepfakes that are just convincing enough to make people question whether it is real. Furthermore, for a malicious deepfake to have an effect, it doesn’t need to reach massive audiences. As one of the participants in the workshops noted, “When I say viral, it doesn’t need to be millions of views.”
(continues)
DEEPFAKES IN THE 2020 ELECTIONS AND BEYOND: Lessons From the 2020 Workshop Series
jun23
For years now, academics have feared that the rise of increasingly convincing deepfake video could make it easier for bad actors to brush aside real videos as potentially manipulated. This scenario, which some have dubbed “the liar’s dividend,” creates a particularly insidious situation in jury trial settings, where attorneys for either the defense or the prosecution simply need to instill some doubt in a jury’s mind. If deepfakes are indistinguishable from reality and present everywhere one looks, how can anyone confidently claim any single video is true?
8 Times ‘Deepfake’ Videos Were Actually Real
The rise of convincing and ubiquitous deepfake technology is leading to scenarios where bad actors deny reality in court.
may23
The convergence of deepfakes and the Mandela effect raises questions about the trust we place in our memories and in digital media. Deepfakes can be used to alter historical events and sow confusion among the masses, thereby reinforcing the Mandela effect. The combination of these two phenomena could lead to a society in which it becomes increasingly difficult to distinguish reality from fiction.
https://www.thmmagazine.fr/deepfakes-et-effet-mandela-lart-de-la-manipulation-numerique/
Turkish presidential candidate quits race after release of alleged sex tape
Muharrem İnce pulls out just days from close election race saying alleged sex tape is deepfake
https://www.theguardian.com/world/2023/may/11/muharrem-ince-turkish-presidential-candidate-withdraws-alleged-sex-tape
People are trying to claim real videos are deepfakes. The courts are not amused
In several recent high-profile trials, defendants have sought to cast doubt on the reliability of video evidence by suggesting that artificial intelligence may have surreptitiously altered the videos.
These challenges are the most notable examples yet of defendants leveraging the growing prevalence in society of AI-manipulated media — often called deepfakes — to question evidence that, until recently, many thought was nearly unassailable.
There are two central concerns about deepfakes in the courtroom. First, as manipulated media becomes more realistic and harder to detect, the risk increases of falsified evidence finding its way into the record and causing an unjust result.
nov22
Remember Will Smith's brilliant performance in that remake of [...]? Well, it turns out almost half the participants in a new study on 'deep fakes' believed fake remakes featuring different actors in old roles were real, highlighting the risk of 'false memory' created by online technology. The study, carried out by researchers at the School of Applied Psychology in University College Cork, presented 436 people with deepfake video of fictitious movie remakes.
Deepfakes are manipulated media created using artificial intelligence technologies, where an artificial face has been superimposed onto another person’s face, resulting in a highly convincing recording of someone doing or saying something they never did.
https://www.irishexaminer.com/news/arid-41006312.html
The Liar’s Dividend: The Impact of Deepfakes and Fake News on Politician Support and Trust in Media
In June 2019, a grainy video proliferated throughout Malaysian social media channels that allegedly showed the country’s Economic Affairs Minister, Mohamed Azmin Ali, having sex with a younger staffer named Muhammad Haziq Abdul Aziz. Although Azmin insisted that the video was fake and part of a “nefarious plot” to derail his political career, Abdul Aziz proceeded to post a video on Facebook ‘confessing’ that he was the man in the video and calling for an investigation into Azmin. The ensuing controversy threw the country into uproar. Azmin kept his job, though, after the prime minister declared that the video was likely a deepfake—a claim several experts have since disputed.
Was the low-quality, hidden-camera video genuine? No one knows, except its creators. But just like in the case of the laptop, the truth was inaccessible—and into that ambiguity stepped powerful parties with their own versions of truth. The chaos engendered by the Hunter Biden laptop’s unverifiability provides a window into a future in which a more sophisticated piece of visual disinformation wreaks unimaginable, unstoppable havoc. https://brownpoliticalreview.org/2021/11/hunters-laptop-deepfakes-and-the-arbitration-of-truth/
Deepfakes make it easier to dismiss footage of critical events. The “Infopocalypse, the world is collapsing, you can’t believe anything you see” rhetoric that we’ve all heard in recent years has been deeply harmful. Deepfakes have gotten easier to make. However, they’re still not prevalent in the really sophisticated way that I think many people believe they are. There’s what we call the “liar’s dividend,” which is the ability to easily dismiss something true as a deepfake. We’ve seen this in some high-profile cases recently. WITNESS was involved in a case in Myanmar, involving the former chief minister of Yangon. A video was released by the military government in March this year, in which he appears to accuse Aung San Suu Kyi, the country’s de facto leader before the February 2021 coup, of corruption. In the video, he looks very static, his mouth looks out of sync and he sounds unlike his normal self. People saw this and were like, “It’s a deepfake.” We’re seeing this happen towards video evidence globally. Increasingly, people are just saying, “You can’t believe anything.” We need to be investing in equity of access to tools that authenticate videos and debunk fakes, so they are available broadly.
https://www.codastory.com/authoritarian-tech/decentralized-web-human-rights/
Finally, there is social adjustment. Behavioral scientists talk about educating Internet users to pre-bunk misleading content rather than trying to unravel it once it’s posted. The greatest danger posed by deepfake content is probably that it inflates the “liar’s dividend,” so that everyone can question everything and the currency of truth is devalued. As US Congressman Adam Schiff explained, “not only can fake videos be passed off as real, but real information can be passed off as fake.”
Such is the malleability of the technology and the ingenuity of the human mind that the war against harmful deepfakes may never be won. However, the impact of deepfakes depends on the ways in which they are used, the contexts in which they are placed, and the extent of the audience they reach. These are grounds that are still worth disputing. FT https://afegames.com/deepfakes-threaten-to-inflate-the-liars-dividend/
Moreover, the sheer possibility of deepfakes would create a plausible deniability of anything reported or recorded. Thereby, doubts sown by deepfakes could permanently alter our trust in audio and video. For instance, in 2018, Cameroon’s minister of communication dismissed as fake a video Amnesty International thinks shows Cameroonian soldiers executing civilians.32 Similarly, Donald Trump, who in a recorded conversation boasted about grabbing women’s genitals, later claimed the tape was fake. He thereby enabled his followers to take that stance.33 Such denials would then be among the multifarious voices on an issue, making it ever harder to motivate people to scrutinize their beliefs.
Videos speak to the human brain in a much more immediate manner than text - the 'I have seen it with my own eyes' phenomenon. And worse, the mere possibility of manipulating video may cause anything to be questioned. A genuine video can be dismissed as a deepfake, or a manipulated one trumpeted as genuine. Over time the public attitude may become "disbelief by default", as Sam Gregory of the NGO WITNESS noted. https://euobserver.com/opinion/151935
"My biggest concern is not the abuse of deepfakes, but the implication of entering a world where any image, video, audio can be manipulated. In this world, if anything can be fake, then nothing has to be real, and anyone can conveniently dismiss inconvenient facts" as synthetic media, Hany Farid, an AI and deepfakes researcher and associate dean of UC Berkeley's School of Information, told Insider. That paradox is known as the "liar's dividend," a name given to it by law professors Danielle Citron and Robert Chesney. Many of the harms that deepfakes can cause — such as deep porn, cyberbullying, corporate espionage, and political misinformation — stem from bad actors using deepfakes to "convince people that fictional things really occurred," Citron and Chesney wrote in a 2018 research paper. https://www.businessinsider.com/deepfakes-liars-dividend-explained-future-misinformation-social-media-fake-news-2021-4
What does this have to do with Deepfakes? It’s simple: if we’re so easily manipulated through tidbits of exposure to tiny little ads in our Facebook feed, imagine what could happen if advertisers started hijacking the personas and visages of people we trust? If you can convince someone that the people they respect and care about believe they’ve done something wrong, it’s easier for them to accept it as a fact. How many law enforcement agencies in the world currently have an explicit policy against using manipulated media in the solicitation of a confession? Our guess would be: close to zero. With Deepfakes and enough time, you could convince someone of just about anything as long as you can figure out a way to get them to watch your videos.
https://thenextweb.com/neural/2021/03/23/how-deepfakes-could-help-implant-false-memories-in-our-minds/
While authentication protects against false negatives, there remains the possibility – even with allowances, such as to permit facial blurring – that it could produce false positives for honestly edited footage. In the immediate future, neither detection nor authentication can offer an absolutely watertight way to flag up deepfakes as they multiply online. (...) “What worries me more than fully faked videos is small alterations to a video, like changing the lapel on a military uniform to make it from one country to another,” said Sam Dubberley, a University of Essex researcher and adviser to Amnesty International’s Evidence Lab. “It’s easy enough to [debunk] if it’s Trump or Putin but if it’s a Rohingya village chief and there’s WhatsApp audio saying “All the Rohingya must rise up and burn down the police state” that’s going to be impossible to prove where it came from.”
Deepfakes have arrived at a moment in which – for many people – the truth can be whatever they want it to be. The mere existence of deepfakes aggravates this atmosphere of tribalism and distrust in which any evidence can be dismissed as fake: this problem has been described by academics as the “liar’s dividend”.
“The worry I have is that deepfakes are a way of creating chaos in the current disinformation climate […] but also they’ll create some sort of plausible deniability and that’s what I see as being the major aim,” said Professor Lilian Edwards, an internet law and policy expert based at Newcastle University. “It’s a chaotic aim.”
Edwards cites Trump’s claim that his pussy-grabbing audio tape was faked as an example of politicians weaponising the plausible deniability legitimised by synthetic media. It is likely that those who believe Trump’s denial were already prepared to accept as fact whatever confirms their tribalistic beliefs, and it is easy to dismiss these people as a lost cause who did not need more excuses to choose their own facts. However, there is evidence that the liar’s dividend specifically associated with deepfakes (and the misconception that creating realistic deepfakes is trivial) is already a potent threat in parts of the world.
In Gabon in late 2018, a video of President Ali Bongo (who had temporarily stopped making public appearances due to ill health) was shared, and its unusual appearance led an opposition politician to joke and then seriously argue that the video was a deepfake. Days later, members of Gabon’s military attempted a coup, citing the video’s appearance as evidence that affairs were not as they should be. In Malaysia last year, a gay sex tape allegedly featuring the Minister of Economic Affairs and a rival minister’s aide circulated online. While the aide swore that the video was real, the minister and Prime Minister dismissed it as a deepfake, allowing the minister to get away without the serious legal consequences he may otherwise expect in the socially conservative country.
Subsequent analysis seemed to conclude that neither of the two videos was a deepfake, but damage was done regardless: “Awareness of deepfakes alone is destabilising political processes by undermining the perceived objectivity of videos featuring politicians and public figures,” Deeptrace wrote in its 2019 report. Western democracies like the UK, many of which have seen their institutions weakened in the past few years, cannot take for granted that they are invulnerable to destabilisation.
The erosion of trust in video caused by deepfakes is likely to resonate beyond the sphere of politics, as the impact of fake news is felt across healthcare, science, and other areas. Video evidence fuels social movements like Black Lives Matter and the Hong Kong Pro-Democracy Protests, and is increasingly significant in courts of law, with the European Human Rights Advocacy Centre submitting video in its litigation regarding the 2014 annexation of Crimea. LINK
(see ALI BONGO)
This dynamic is exacerbated by what researchers term the ‘liar’s dividend’: that is, efforts to debunk misinformation or propaganda can make it more difficult for audiences to trust all sources of information. This underscores the need for effective policy responses to weaponised deep fakes. Governments must act early to reassure the public that they’re responding to the challenges of weaponised deep fakes, lest panic or credulity outstrip the impact of the fakes. https://www.aspi.org.au/report/weaponised-deep-fakes
So how can truth emerge in a marketplace of ideas riddled with deepfakes? Will we just take the path of least resistance and believe what we want to believe, truth be damned? And not only might we believe the lie, we might start disbelieving the truth. We have already seen people invoke the phenomenon of deepfakes to cast doubt on real evidence of their transgressions. We have heard politicians say of audio of their embarrassing comments: “Come on, that’s fake news. You can’t believe what your eyes and ears are telling you.” And it is that danger that Professor Robert Chesney and I call the “liar’s dividend”: the risk that liars will invoke deepfakes to escape accountability for their wrongdoing. LINK