Thursday, January 13, 2022

In China

apr23

Midjourney, an AI image generator that creates realistic deepfakes, has recently come under scrutiny for a policy that shows deference to China's communist government.

The company enforces a rule under which users can generate fake images of world leaders from President Biden to Vladimir Putin, but not of Chinese President Xi Jinping.

In a year-old message on the chat service Discord, the CEO of Midjourney, Inc. explained why the company has that rule.

"I think we want to minimize drama," Midjourney CEO David Holz wrote last summer. He explained that the company did not immediately ban images of Xi, but it was triggered by abuse from users.

https://www.foxnews.com/tech/ai-image-generator-midjourney-bans-deepfakes-china-xi-jinping-minimize-drama

mar23

The Chinese government has created specific legislation to combat the spread of false information over the internet, particularly on social media. The new regulation, called the "Provisions on the Administration of Deep Synthesis of Internet Information Services", is intended to prevent the dissemination of false information created by artificial intelligence, the deepfake, in videos and images.



Deepfakes allow a photo or a video of one person to be replaced with the image of another, as well as altering the voice, changing the spoken text, and giving new meaning to the video's context, which can then be taken as real. The technique was widely used in Brazil, especially during last year's election campaign.

To distinguish real information from false, videos created or edited with AI in China must display small watermark labels placed in one of the corners of the image. This label must warn that the production used an artificial system.

https://www.conjur.com.br/2023-abr-08/china-cria-lei-informacoes-falsas-meio-deepfakes
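The labelling rule is concrete enough to sketch in code. Below is a minimal illustration, using the Pillow library, of how a provider might stamp a small corner label onto an AI-generated image; the file names and label text are hypothetical, and nothing here reflects the regulation's actual technical specification.

```python
# Illustrative sketch only: one way a provider might stamp a corner
# label on AI-generated imagery, using Pillow. The label text and
# placement are assumptions, not the wording mandated by the rules.
from PIL import Image, ImageDraw

def label_synthetic_image(path_in: str, path_out: str,
                          text: str = "AI-generated") -> None:
    img = Image.open(path_in).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))
    draw = ImageDraw.Draw(overlay)
    # Measure the label and place it in the bottom-right corner.
    x0, y0, x1, y1 = draw.textbbox((0, 0), text)
    w, h = x1 - x0, y1 - y0
    margin = 8
    pos = (img.width - w - margin, img.height - h - margin)
    # Semi-transparent backing box so the label stays legible.
    draw.rectangle([pos[0] - 4, pos[1] - 2, pos[0] + w + 4, pos[1] + h + 2],
                   fill=(0, 0, 0, 160))
    draw.text(pos, text, fill=(255, 255, 255, 255))
    Image.alpha_composite(img, overlay).convert("RGB").save(path_out)

# Hypothetical file names, purely for illustration.
label_synthetic_image("synthetic.png", "synthetic_labeled.png")
```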

feb23

The "news broadcasters" appear stunningly real, but they are AI-generated deepfakes in first-of-their-kind propaganda videos that a research report published Tuesday attributed to Chinese state-aligned actors.

The fake anchors — for a fictitious news outlet called Wolf News — were created by artificial intelligence software and appeared in footage on social media that seemed to promote the interests of the Chinese Communist Party, U.S.-based research firm Graphika said in its report.

"This is the first time we've seen a state-aligned operation use AI-generated video footage of a fictitious person to create deceptive political content," Jack Stubbs, vice president of intelligence at Graphika, told AFP.

In one video analyzed by Graphika, a fictitious male anchor who calls himself Alex critiques U.S. inaction over gun violence plaguing the country. In the second, a female anchor stresses the importance of "great power cooperation" between China and the United States.

Advancements in AI have stoked global alarm over the technology's potential for disinformation and misuse, with deepfake images created out of thin air and people shown mouthing things they never said.

Last year, Facebook owner Meta said it took down a deepfake video of Ukrainian President Volodymyr Zelenskyy urging citizens to lay down their weapons and surrender to Russia.

There was no immediate comment from China on Graphika's report, which comes just weeks after Beijing adopted expansive rules to regulate deepfakes.

China enforced new rules last month that will require businesses offering deepfake services to obtain the real identities of their users. They also require deepfake content to be appropriately tagged to avoid "any confusion."

The Chinese government has warned that deepfakes present a "danger to national security and social stability."

Graphika's report said the two Wolf News anchors were almost certainly created using technology provided by the London-based AI startup Synthesia.

The website of Synthesia, which did not immediately respond to AFP's request for comment, advertises software for creating deepfake avatars "based on video footage of real actors."

Graphika said it discovered the deepfakes on platforms including Twitter, Facebook and YouTube while tracking pro-China disinformation operations known as "spamouflage."

"Spamouflage is a pro-Chinese influence operation that predominantly amplifies low-quality political spam videos," said Stubbs.

"Despite using some sophisticated technology, these latest videos are much the same. This shows the limitations of using deepfakes in influence operations—they are just one tool in an increasingly advanced toolbox."

https://www.voanews.com/a/research-deepfake-news-anchors-in-pro-china-footage/6953588.html


jan23

China’s cyberspace regulator is cracking down on deepfakes.

Starting tomorrow (Jan. 10), deep synthesis providers (content providers that alter text, audio, images, and video) in China will have to abide by a new set of rules, according to the Cyberspace Administration of China (CAC).

“In recent years, deep synthesis technology has developed rapidly. While serving user needs and improving user experience, it has also been used by some unscrupulous people to produce, copy, publish, and disseminate illegal and harmful information, to slander and belittle others’ reputation and honor, and to counterfeit others’ identities,” the CAC said.

https://qz.com/china-new-rules-deepfakes-consent-disclosure-1849964709

Six years ago, memes comparing Xi Jinping to Winnie the Pooh spread like wildfire across China’s internet before being snuffed out by the country’s censors. Creating and disseminating more sophisticated digital imagery of the honey-loving bear could now earn you a prison term in the country, as a new deepfakes law called the ‘Provisions on the Administration of Deep Synthesis of Internet Information Services’ comes into effect this week. As nations around the world mull over regulations to target one of the most disruptive media technologies in recent years, Beijing is preparing to wage a new war on any online content it considers to be a threat to its stability and legitimacy in the eyes of the Chinese people. 

China is not the only nation to consider new regulations on deepfakes. Both the UK and Taiwanese governments have announced their intention to ban the creation and sharing of deepfake pornographic videos without consent, with similar legislation being proposed in the US at the federal level (several states have already passed such laws). The latest regulations in China, however, extend to any deepfake content, imposing new rules on its creation, dissemination and labelling – in effect, going much further in scope and detail than most other existing national legislation concerning synthetic audio and video.

https://techmonitor.ai/technology/emerging-technology/china-is-about-to-pass-the-worlds-most-comprehensive-law-on-deepfakes

jan22

China's cyberspace regulator issued draft rules on Friday (Jan 28) for content providers that alter facial and voice data, the latest measure to crack down on "deepfakes" and mould a cyberspace that promotes Chinese socialist values.

The rules are aimed at further regulating technologies such as those using algorithms to generate and modify text, audio, images and videos, according to documents published on the website of the Cyberspace Administration of China.

Any platform or company that uses deep learning or virtual reality to alter any online content, what the CAC calls "deep synthesis service providers", will now be expected to "respect social morality and ethics, adhere to the correct political direction".

https://www.straitstimes.com/asia/east-asia/china-issues-draft-rules-for-deepfakes-in-cyberspace + https://www.techtimes.com/articles/271191/20220129/chinese-regulators-propose-rules-crack-down-deepfakes-want-promote-socialist.htm

China’s Cyberspace Administration released a draft rule that would place new oversight obligations on providers of deepfake technology. The regulation would cover “deep synthesis internet information services”, including any technology that generates text, images, audio, videos or virtual scenes based on deep learning. Popular AI tools like GPT-3 would be covered under the rule.

https://www.protocol.com/bulletins/china-deepfake-regulation


jan22

One authentic-looking video of autocrat-in-chief Xi Jinping frolicking nude through a meadow is arguably a bigger threat to the ruling Communist Party than Uyghurs.

So, it is not surprising that Beijing would begin a robust regulatory (and, later, social pressure) campaign to neuter deepfakes before they can darken public sentiment toward government leaders.

New rules, reportedly unique in the world, go into effect as early as March 1.

The effort ultimately could even lead to bureaucrats reading code to identify video, still-image and audio deepfakes. (Human parsing continues to be researched outside China, too.)

And that is just in China.

If Beijing successfully beats back deepfakes, governments of all stripes around the world will plunge ahead with similar campaigns to control fraudulent biometric development. At least some less-scrupulous governments likely would turn that control against dissidents, opposition politicians and rival nations.

China faces a herculean task, but few in the 1990s thought China could effectively wall off the global Internet to preserve the country’s hive mind.

The first step in the anti-deepfake program is here: a lengthy set of new regulations governing recommendation engines on content sites. Those are the personalized prompts that algorithms create to keep people on a site and consuming information.

Deepfakes can show up anywhere, of course, including in primary content, which leads to personalized prompts.

Targeting just recommendation engines could be a recognition on Beijing’s part that the battle against unauthorized deepfakes within its borders will be complex and maybe costly. It will need to be fought in pieces.

One of the regulators busy at this stage of the campaign is the Cyberspace Administration of China. Its translated, 35-article document can be found here.

https://www.biometricupdate.com/202201/chinas-next-bogeyman-a-deepfake-video-that-makes-its-people-think

Tuesday, January 4, 2022

Why they work

aug23

Humans are able to detect artificially generated speech only 73% of the time, a study has found, with the same levels of accuracy found in English and Mandarin speakers.

Researchers at University College London used a text-to-speech algorithm trained on two publicly available datasets, one in English and the other in Mandarin, to generate 50 deepfake speech samples in each language.

Deepfakes, a form of generative artificial intelligence, are synthetic media created to resemble a real person's voice or likeness.

The sound samples were played for 529 participants to see whether they could detect the real sample from fake speech. The participants were able to identify fake speech only 73% of the time. This number improved slightly after participants received training to recognise aspects of deepfake speech.

https://www.theguardian.com/technology/2023/aug/02/humans-can-detect-deepfake-speech-only-73-of-the-time-study-finds
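For a sense of what a 73% hit rate means against the 50% a pure guesser would score, a quick back-of-envelope check with scipy is sketched below; the number of trials is an assumed round figure for illustration, not the study's actual design.

```python
# Back-of-envelope sketch: is a 73% detection rate meaningfully above
# the 50% a pure guesser would score? The trial count is assumed for
# illustration (the study's exact per-listener design differs).
from scipy.stats import binomtest

n_trials = 100          # assumed number of real-vs-fake judgments
k_correct = 73          # 73% detection rate reported by the UCL study
result = binomtest(k_correct, n_trials, p=0.5, alternative="greater")
print(f"p-value vs. chance: {result.pvalue:.2e}")
ci = result.proportion_ci(confidence_level=0.95)
print(f"95% CI for detection rate: [{ci.low:.2f}, {ci.high:.2f}]")
```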

jul23

DO THEY WORK?

Recently, a team of researchers conducted a study on deepfake videos and found that deepfake clips of movie remakes that do not actually exist led participants to falsely remember the non-existent films.

However, simple text descriptions of the fake movies also prompted similar false memory rates, the study states.

https://interestingengineering.com/culture/deepfake-movies-distorting-memory-text-descriptions

dec22

Artificial Intelligence (AI)-powered deepfakes are responsible for new challenges in consumers' visual experience and pose a wide range of negative consequences (i.e., non-consensual intimate imagery, political dis/misinformation, financial fraud, and cybersecurity issues) for individuals, societies, and organizations. Research has suggested legislation, corporate policies, anti-deepfake technology, education, and training to combat deepfakes, including the use of synthetic media to raise awareness so that people become more critical when evaluating such content in the future. To educate and raise awareness among college students, this pilot survey study used both synthetic and real images with undergraduate students (N=19) to understand the cognition and perception demonstrated by a literate population in detecting deepfake media with the bare eye. The results showed that human cognition and perception are insufficient for detecting synthetic media with inexperienced eyes, and that even an educated population is vulnerable to this technology. While deepfakes are becoming sophisticated and imperceptible, this kind of survey study can be beneficial in raising awareness about the societal impact of the technology and may also improve participants' detection ability for future encounters.

https://ieeexplore.ieee.org/abstract/document/9965697


nov22

Remember Will Smith's brilliant performance in that remake of The Matrix? Well, it turns out almost half the participants in a new study on 'deep fakes' believed fake remakes featuring different actors in old roles were real, highlighting the risk of 'false memory' created by online technology.

The study, carried out by researchers at the School of Applied Psychology in University College Cork, presented 436 people with deepfake video of fictitious movie remakes.

Deepfakes are manipulated media created using artificial intelligence technologies, where an artificial face has been superimposed onto another person’s face, resulting in a highly convincing recording of someone doing or saying something they never did.

https://www.irishexaminer.com/news/arid-41006312.html


aug22

Deepfake detection loses accuracy somewhere between your brain and your mouth

A team of neuroscientists researching deepfakes at the University of Sydney have found that people seem to be able to identify them at a subconscious level more frequently than at a conscious one.

A new paper titled ‘Are you for real? Decoding realistic AI-generated faces from neural activity’ describes how brain activity reflects whether a presented image is real or fake more accurately than simply asking the person.

The researchers used electroencephalography (EEG) signals to measure the neurological responses of subjects, and found a consistent neural response associated with faces for approximately 170 milliseconds. When the face was real, and only then, the response was sustained beyond 400ms.

https://www.biometricupdate.com/202208/deepfake-detection-loses-accuracy-somewhere-between-your-brain-and-your-mouth
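The analysis idea, comparing activity in an early (~170 ms) window against a late (>400 ms) window for real versus fake faces, can be outlined as follows. This is a toy sketch on random placeholder data with assumed array shapes and sampling rate, not the authors' pipeline.

```python
# Minimal sketch of the analysis idea described above: average EEG
# epochs per condition and compare activity in the ~170 ms face window
# versus a late (>400 ms) window. Shapes, sampling rate, and data are
# assumptions; this is not the study's actual pipeline.
import numpy as np

FS = 500                            # assumed sampling rate, Hz
t = np.arange(-0.2, 0.8, 1 / FS)    # epoch time axis, seconds

def window_mean(epochs: np.ndarray, lo: float, hi: float) -> float:
    """Mean amplitude of the trial-averaged signal within [lo, hi] s."""
    erp = epochs.mean(axis=0)       # average over trials
    mask = (t >= lo) & (t <= hi)
    return float(erp[mask].mean())

rng = np.random.default_rng(0)
real_epochs = rng.normal(size=(200, t.size))   # placeholder data
fake_epochs = rng.normal(size=(200, t.size))

for name, ep in [("real", real_epochs), ("fake", fake_epochs)]:
    early = window_mean(ep, 0.15, 0.20)   # around the ~170 ms response
    late = window_mean(ep, 0.40, 0.60)    # sustained only for real faces
    print(f"{name}: early={early:+.3f}, late={late:+.3f}")
```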

feb22

A new study published in the Proceedings of the National Academy of Sciences USA provides a measure of how far the technology has progressed. The results suggest that real humans can easily fall for machine-generated faces, and even interpret them as more trustworthy than the genuine article. "We found that not only are synthetic faces highly realistic, they are deemed more trustworthy than real faces," says study co-author Hany Farid, a professor at the University of California, Berkeley. The result raises concerns that "these faces could be highly effective when used for nefarious purposes."

"We have indeed entered the world of dangerous deepfakes," says Piotr Didyk, an associate professor at the University of Italian Switzerland in Lugano, who was not involved in the paper. The tools used to generate the study's still images are already generally accessible. And although creating equally sophisticated video is more challenging, tools for it will probably soon be within general reach, Didyk contends.

(Humans Find AI-Generated Faces More Trustworthy Than the Real Thing. Viewers struggle to distinguish images of sophisticated machine-generated faces from actual humans)

https://www.scientificamerican.com/article/humans-find-ai-generated-faces-more-trustworthy-than-the-real-thing/


jan22

As the technology continues to improve and fake videos proliferate, there is uncertainty about how people will discern genuine from manipulated videos, and how this will affect trust in online content. This paper conducts a pair of experiments aimed at gauging the public's ability to detect deepfakes from ordinary videos, and the extent to which content warnings improve detection of inauthentic videos. In the first experiment, we consider capacity for detection in natural environments: that is, do people spot deepfakes when they encounter them without a content warning? In the second experiment, we present the first evaluation of how warning labels affect capacity for detection, by telling participants at least one of the videos they are to see is a deepfake and observing the proportion of respondents who correctly identify the altered content. Our results show that, without a warning, individuals are no more likely to notice anything out of the ordinary when exposed to a deepfake video of neutral content (32.9%), compared to a control group who viewed only authentic videos (34.1%). Second, warning labels improve capacity for detection from 10.7% to 21.6%; while this is a substantial increase, the overwhelming majority of respondents who receive the warning are still unable to tell a deepfake from an unaltered video. A likely implication of this is that individuals, lacking capacity to manually detect deepfakes, will need to rely on the policies set by governments and technology companies around content moderation.

Do Content Warnings Help People Spot a Deepfake? Evidence from Two Experiments (2022)
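To see how little separates 32.9% from 34.1%, and how clearly 10.7% differs from 21.6%, one can run a standard two-proportion z-test. The sketch below assumes equal group sizes of 500 purely for illustration; the paper's actual arm sizes may differ.

```python
# Sketch: a two-proportion z-test on the rates quoted above. Group
# sizes are assumptions for illustration; the paper reports rates
# only in this excerpt, and the real splits may differ.
from math import sqrt
from scipy.stats import norm

def two_prop_ztest(p1: float, n1: int, p2: float, n2: int) -> float:
    """Two-sided p-value for H0: the two proportions are equal."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return 2 * norm.sf(abs(z))

# Assumed 500 respondents per arm, purely for illustration.
print(two_prop_ztest(0.329, 500, 0.341, 500))  # deepfake vs control
print(two_prop_ztest(0.107, 500, 0.216, 500))  # no warning vs warning
```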


jan22

A research project will examine whether deepfakes – AI-manipulated images, videos or audio – make courts less likely to trust evidence of human-rights violations gathered on mobile phones. A Swansea legal expert has been awarded €1.5 million to examine how public perceptions of deepfakes affect trust in user-generated evidence.

https://www.lawsociety.ie/gazette/top-stories/2021/12-december/study-to-probe-how-deepfakes-undermine-video-evidence-credibility

jan22

Individual Deep Fake Recognition Skills are Affected by Viewers’ Political Orientation, Agreement with Content and Device Used. 

Previous research has suggested a close relationship between political attitudes and top-down perceptual and subsequent cognitive processing styles. In this study, we aimed to investigate the impact of political attitudes, and of agreement with the political message content, on individuals' deep fake recognition skills. A total of 163 adults (72 female, 44.2%) judged a series of video clips of politicians' statements from across the political spectrum regarding their authenticity and their agreement with the message conveyed. Half of the presented videos were fabricated via lip-sync technology. In addition to the specific agreement with each statement, more global political attitudes towards social and economic topics were assessed via the Social and Economic Conservatism Scale (SECS). Data analysis revealed robust negative associations between participants' general, and in particular social, conservatism and their ability to recognize fabricated videos. This effect was pronounced where there was specific agreement with the message content. Deep fakes watched on mobile phones and tablets were considerably less likely to be recognized as such compared to those watched on stationary computers.

(in the DF file)
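The reported negative association is the kind of result a simple correlation analysis would surface. The sketch below runs a Pearson correlation on synthetic placeholder data with a built-in negative slope; only the shape of the analysis is illustrative, and none of the numbers come from the study.

```python
# Toy sketch of the reported association: correlate SECS conservatism
# scores with per-participant deep fake recognition accuracy. Data are
# synthetic placeholders; only the analysis shape is illustrated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
secs = rng.uniform(0, 100, size=163)          # assumed SECS range 0-100
noise = rng.normal(0, 0.1, size=163)
accuracy = np.clip(0.8 - 0.003 * secs + noise, 0, 1)  # injected negative slope

r, p = pearsonr(secs, accuracy)
print(f"r = {r:.2f}, p = {p:.3g}")  # recovers the injected association
```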