Friday, October 11, 2024

THE CURRENT STATE OF PORNOGRAPHIC DEEPFAKES (2024)

Oct24

THE CURRENT STATE OF PORNOGRAPHIC DEEPFAKES

A Science and Technology Studies Perspective

Author: Jola Gockel

Abstract: This paper analyses the current state of deepfake pornography from a Science and Technology Studies (STS) viewpoint. Looking at the phenomenon from a social constructivist perspective shows that misogynistic power structures are embedded in certain deepfake technologies and that deepfake pornography reflects and reinforces such power structures. Additionally, the risk perspective points to the need for effective (federal and global) legislation and to the need for increased public awareness. Finally, the vulnerability perspective reveals how not everyone is affected equally by the potential of being featured in deepfake pornography, with celebrities having a higher risk of being featured in deepfakes and private individuals experiencing greater difficulty disproving deepfakes of themselves. Implications and questions for future research are discussed.

Keywords: Deepfakes, Deepfake Pornography, Artificial Intelligence, Science and Technology Studies

https://openjournals.maastrichtuniversity.nl/MJLA/article/download/1005/565

Friday, September 27, 2024

South Korea

Oct24

On Sept. 26, South Korea revised its law that criminalizes deepfake pornography. Now, it’s not just illegal to create and distribute this lewd digital material, but also to view it. Anyone found to possess, save, or even watch this content could face up to three years in jail or a $22,000 fine.

Deepfakes are AI mashups in which a person’s face or likeness is superimposed onto explicit content without their consent. It’s an issue that’s afflicted celebrities like Taylor Swift, but also private individuals targeted by people they know.

South Korea’s law is a particularly aggressive approach to combating a serious issue. It’s also a problem that’s much older than artificial intelligence itself. Fake nude images have been created with print photo cutouts as far back as the 19th century, but they have flourished in the computer age with Photoshop and other photo-editing tools. And it’s a problem that’s only been supercharged by the rise and widespread availability of deep learning models in recent years. Deepfakes can be weaponized to embarrass, blackmail, or hurt people — typically, women — whether they’re famous or not.

While South Korea’s complete prohibition may seem attractive to those desperate to eliminate deepfakes, experts warn that such a ban — especially on viewing the material — is difficult to enforce and likely wouldn’t pass legal muster in the United States.

“I think some form of regulation is definitely needed in this space, and South Korea's approach is very comprehensive,” says Valerie Wirtschafter, a fellow at the Brookings Institution. “I do think it will be difficult to fully enforce just due to the global nature of the internet and the widespread availability of VPNs.”

In the US, at least 20 states have already passed laws addressing nonconsensual deepfakes, but they’re inconsistent. “Some are criminal in nature, others only allow for civil penalties. Some apply to all deepfakes, others only focus on deepfakes involving minors,” says Kevin Goldberg, a First Amendment specialist at the Freedom Forum.

“Creators and distributors take advantage of these inconsistencies and often tailor their actions to stay just on the right side of the law,” he adds. Additionally, many online abuses happen across state lines — if not across national borders — making it difficult to bring suit under state laws.

Congress has introduced bills to tackle deepfakes, but none have yet passed. The Defiance Act, championed by Rep. Alexandria Ocasio-Cortez and Sens. Dick Durbin and Lindsey Graham, would create a civil right of action, allowing victims to sue people who create, distribute, or receive nonconsensual deepfakes. It passed the Senate in July but is still pending in the House.

But a full prohibition on sexually explicit deepfakes would likely run afoul of the First Amendment, which makes it very difficult for the government to ban speech — including explicit media.

“A similar law in the United States would be a complete nonstarter under the First Amendment,” says Corynne McSherry, legal director at the Electronic Frontier Foundation. She thinks current US law should already protect Americans from some harms of deepfakes, many of which could be defamatory, an invasion of privacy, or a violation of citizens’ right of publicity.

Many states, including California, have a right of publicity law that allows individuals to sue if their likeness is used without their consent, especially for commercial purposes. For a new law targeting deepfakes to pass First Amendment scrutiny, it would need to be narrowly tailored to address a very specific harm without infringing on protected speech, something that McSherry says would be very hard to do.

Despite the tricky First Amendment challenges, there is growing recognition of the need for some form of regulation, Wirtschafter says. “It is one of the most pernicious and damaging uses of generative AI, and it disproportionately targets women.”

https://www.gzeromedia.com/gzero-ai/south-korea-banned-deepfakes-is-that-a-realistic-solution-for-the-us 


AI is fuelling a deepfake porn crisis in South Korea. What’s behind it – and how can it be fixed?

Tuesday, August 13, 2024

NEW PAPER



Aug24

A systematic literature review on deepfake detection techniques
Published: 02 August 2024

Multimedia Tools and Applications

Vishal Kumar Sharma, Rakesh Garg & Quentin Caudron

Abstract

Big data analytics, computer vision, and human-level governance are key areas where deep learning has been impactful. However, its advancements have also led to concerns over privacy, democracy, and national security, particularly with the advent of deepfake technology. Deepfakes, a term coined in 2017, primarily involve face-swapping in videos. Initially easy to detect, deepfakes have become increasingly realistic and challenging to distinguish from reality as machine learning has rapidly advanced. Generative Adversarial Networks (GANs) and other deep learning methods are instrumental in creating deepfakes, leading to the development of applications like FaceApp and FakeApp. These technological advancements, while impressive, pose significant risks to individual integrity and societal trust. Recognizing this, the necessity to develop systems capable of instantaneously identifying and assessing the authenticity of digital visual media has become paramount. This study aims to evaluate deepfake detection methods by discussing manipulations, optimizations, and enhancements of existing algorithms. It explores various datasets for image, video, and audio deepfake detection, including performance metrics to gauge detection algorithm effectiveness. Through a comprehensive review, this paper identifies gaps in current research, proposes future research directions, and provides a detailed quantitative and qualitative analysis of existing deepfake detection techniques. By consolidating existing literature and presenting new insights, this study serves as a valuable resource for researchers and practitioners aiming to advance the field of deepfake detection.

https://link.springer.com/article/10.1007/s11042-024-19906-1
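A note to make the abstract's mention of performance metrics concrete: a deepfake detector is usually scored as a binary classifier. The sketch below is my own illustration (not code from the paper), with made-up labels and scores, using scikit-learn's standard metrics:

# Minimal sketch (not from the paper): scoring a binary real-vs-fake
# detector with the metrics such reviews typically report.
from sklearn.metrics import (accuracy_score, precision_score,
                             recall_score, f1_score, roc_auc_score)

# Hypothetical outputs: 1 = deepfake, 0 = real.
y_true  = [1, 0, 1, 1, 0, 0, 1, 0]           # ground-truth labels
y_score = [0.91, 0.12, 0.78, 0.45, 0.30,      # detector's fake-probabilities
           0.05, 0.88, 0.62]
y_pred  = [int(s >= 0.5) for s in y_score]    # threshold at 0.5

print("accuracy :", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall   :", recall_score(y_true, y_pred))
print("F1       :", f1_score(y_true, y_pred))
print("AUC      :", roc_auc_score(y_true, y_score))  # threshold-free ranking metric

Note that AUC is computed from the raw scores rather than the thresholded predictions, which is why comparisons between detectors often lean on it.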

Friday, July 12, 2024

"Will Singapore succeed in banning deepfakes? "

Sep24


In a bid to shore up trust in public institutions, Singapore is considering making legislative changes that will enable candidates to flag deepfake videos of themselves during elections, Senior Minister of State for Digital Development and Information Janil Puthucheary says.

The city state joins other jurisdictions looking to clamp down on manipulated media, with its next general election to be held no later than November 2025.

Puthucheary, who was among panel speakers on Friday at the Lee Kuan Yew School of Public Policy’s Festival of Ideas, said the proposed safeguards would enable election candidates to report digitally manipulated content that realistically depicts them saying or doing something that they did not in fact say or do.

The proposed Elections (Integrity of Online Advertising) (Amendment) Bill grants the Returning Officer, who supervises elections, the power to issue corrective directions to publishers or service providers. Misrepresented candidates can also make declarations about the truthfulness of the content.

Failure to comply could result in fines, imprisonment, or both. Candidates who submit false or misleading information risk a fine or losing their seats.

During a panel on AI governance and disinformation, Puthucheary warned that how voters absorb information is an area “rife for manipulation by AI-driven tools”.

He added that the bill, which would be debated in parliament at the next available sitting, was intended to avoid situations in other countries where public trust had been eroded by deepfakes.

“It is, in terms of its operations, focused around the candidate … But it has, as its intent, the shoring up of public trust in information discourse platforms, the media, and that sense of institutions having a responsibility above and beyond their own narrow interests,” the minister said.

In recent years, Singapore has implemented legislation to tackle fake news and online misinformation. This includes controversial laws such as the Protection from Online Falsehoods and Manipulation Act (Pofma) and Foreign Interference (Countermeasures) Act, which allow authorities to take action against perceived falsehoods or foreign interference in local politics.

Puthucheary, who is also the Senior Minister of State for Health, addressed the difference between Pofma and the bill on Friday, saying that with Pofma, there was a “test around public interest and a requirement for an establishment of fact by an authoritative third party”.

He explained, for example, how someone might have a very different opinion of vaccines, but ultimately, an academic informing the Health Ministry could advise them that this was against the public interest.

Puthucheary said this was a very different process from an election, where the only people who could provide verification that something did not happen were the candidates themselves.

“The government is in no position to do so, a third party, an academic, a professor, research institution, is in no position to do so,” he said.

He noted that other organisations might have technological views about the material, but whether an act happened was something a candidate had “an interest in expressing, and then a responsibility to address in terms of providing the electorate the correct information”.

Puthucheary noted that high trust in institutions such as the government and the media was a “necessary precondition” for discussions on the electoral process, acknowledging that Singapore was privileged to have this trust.

https://www.scmp.com/week-asia/politics/article/3279357/singapore-seeks-fight-deepfakes-elections-new-laws-ahead-2025-polls

jul24

Singapore is exploring ways to regulate deepfake content, potentially introducing a temporary ban, due to concerns that AI may significantly blur the lines between fact and fiction ahead of elections. Minister for Digital Development and Information Josephine Teo mentioned at the Reuters Next Apac conference that various countries, like South Korea, are implementing measures such as a 90-day ban on political AI content before elections. However, Teo noted that Singapore's short election period requires different solutions. The country addresses AI-generated misinformation with laws like the Protection from Online Falsehoods and Manipulation Act (Pofma), which can be applied to AI-generated fake news. Teo emphasised the need for regulations to close loopholes in the law regarding AI-created falsehoods. In addition, she discussed Singapore's ambition to become a global AI player, highlighting the importance of talent, data access frameworks, and expanding data centre capacity to support AI development.

My take: As AI technology advances, the potential for misuse, especially in the political arena, is significant. For example, AI-generated videos featuring Singapore’s former prime minister, Lee Hsien Loong, discussing international issues and foreign leaders have emerged, and, more sinisterly, British female politicians have fallen victim to AI-generated fake pornography, with their faces used in explicit images. These images, some of which have been online for years and garnered hundreds of thousands of views, range from crude Photoshops to more sophisticated deepfakes created with AI technology. On the other hand, during the recent Indian elections, politicians had no qualms about using deepfake audio and video of themselves to connect with voters, who were often unaware that they were interacting with a digital clone rather than the actual person. According to Wired, people living in rural areas frequently experience a heightened sense of importance when they receive personalised AI calls from individuals in high positions.

Regardless of how politicians use AI, I would like to see the law agile enough to address the unique characteristics of AI-generated deepfakes, which can be more sophisticated and more challenging to detect than traditional misinformation. However, I acknowledge that balancing the need for stringent regulations with the flexibility to adapt to new technological developments can be difficult.

Read more at: https://www.campaignasia.com/article/tech-on-me-will-singapore-succeed-in-banning-deepfakes/497151

Monday, April 29, 2024

"The medium is the message" still applies—perhaps now more than ever.

Mar24

What are some ways to spot deepfakes?

In the near term, you can still often trust your instincts about deepfakes: the mouth moves out of sync with the body, reflections run at a different frame rate, and so on.

In the medium term, we can use deepfake detection software, but it's an arms race, and the accuracy will likely decline over time as deepfake algorithms improve.

In the long term, deepfakes may eventually become indistinguishable from real imagery. When that day comes, we will no longer be able to rely on detection as a strategy. So, what do we have left that AI cannot deepfake? Here are two things: physical reality itself and strong cryptography, which is about strongly and verifiably connecting data to a digital identity.

Cryptography is what we use to keep browsing histories private and passwords secret, and it lets you prove you're you. The modern internet could not exist without it. In the world of computation, AI is just an algorithm like any other, and cryptography is designed to be hard for any algorithm to break.

We are still able to link a physical entity (a person) to a strong notion of digital identity. This suggests that 'is this a deepfake?' may not be the right question we should be asking.
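To make this concrete, here is a toy sketch of provenance-style signing (my own illustration, not from the article, assuming Python's cryptography package): a source signs an image's bytes at publish time, and anyone holding the matching public key can verify that the content is unaltered and tied to that identity.

# Toy sketch (not from the article): provenance-style signing with Ed25519,
# using Python's "cryptography" package. A source signs an image's bytes;
# anyone holding the public key can verify origin and integrity.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

private_key = Ed25519PrivateKey.generate()  # stays with the camera/publisher
public_key = private_key.public_key()       # distributed to verifiers

image_bytes = b"...raw image data..."       # placeholder for real file bytes
signature = private_key.sign(image_bytes)   # attached at capture/publish time

try:
    public_key.verify(signature, image_bytes)  # raises if bytes were altered
    print("Verified: content is unchanged and tied to this identity.")
except InvalidSignature:
    print("Rejected: content was modified or signed by someone else.")

The particular scheme doesn't matter; the point is that verification shifts the question from what an image shows to who vouches for it.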

If 'is this a deepfake' is the wrong question, what is the right one?

 The right questions to ask are: Where is this image coming from? Who is the source? How can I tell?

The sophistication of deepfakes may eventually evolve to the point where we can no longer distinguish between a real photo and an algorithmically generated fantasy.

In this world, the focus of the conversation should be less on the content of the image and more on where it came from, i.e., the source, the communication channel, the medium. In that sense, Marshall McLuhan's old wisdom that "the medium is the message" still applies—perhaps now more than ever.

https://techxplore.com/news/2024-04-deepfake-wrong.html

Sunday, April 7, 2024

Social media (II since Mar24)

 Apr24

Meta, the parent company of Facebook, unveiled significant revisions to its guidelines concerning digitally produced and altered media on Friday, just ahead of the impending US elections that will serve as a test of its capacity to manage deceptive content stemming from emerging artificial intelligence technologies. According to Monika Bickert, Vice President of Content Policy at Meta, the social media behemoth will commence the application of “Made with AI” labels starting in May.

These labels will be affixed to AI-generated videos, images, and audio shared across Meta's platforms. This initiative marks an expansion of their existing policy, which had previously only addressed a limited subset of manipulated videos.

https://news.abplive.com/technology/meta-deepfakes-altered-media-us-presidential-elections-policy-change-strict-guidelines-1678040

Friday, April 5, 2024

AI clones

Feb24

Several terms have been used interchangeably for AI clones: AI replica, agent, digital twin, persona, personality, avatar, or virtual human. AI clones of people who have died have been called thanabots, griefbots, deadbots, deathbots and ghostbots, but there is so far no uniform term for AI clones of living people. Deepfake is the term used when an AI-altered or -generated image is misused to deceive others or spread disinformation.

Since 2019, I have interviewed hundreds of people about their views on digital twins or AI clones through my performance Elixir: Digital Immortality, based on a fictional tech startup that offers AI clones. The general audience response was one of curiosity, concern, and caution. A recent study similarly highlights three areas of concern about having an AI clone: "doppelgänger-phobia," identity fragmentation, and false living memories.


https://www.psychologytoday.com/us/blog/urban-survival/202401/the-psychological-effects-of-ai-clones-and-deepfakes