Monday, April 29, 2024

"The medium is the message" still applies, perhaps now more than ever.

MAR24

What are some ways to spot deepfakes?

In the near term, you can still often trust your instincts about deepfakes. The mouth moves out of sync with the body, or reflections are at a different frame rate, etc.

In the medium term, we can use deepfake detection software, but detection is an arms race, and its accuracy will likely decline over time as deepfake algorithms improve.

In the long term, deepfakes may eventually become indistinguishable from real imagery. When that day comes, we will no longer be able to rely on detection as a strategy. So, what do we have left that AI cannot deepfake? Here are two things: physical reality itself and strong cryptography, which is about strongly and verifiably connecting data to a digital identity.

Cryptography is what we use to keep browsing histories private and passwords secret, and it is what lets you prove you are who you say you are. The modern internet could not exist without it. In the world of computation, AI is just an algorithm like any other, and cryptography is designed to be hard for any algorithm to break.
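
This idea of binding data to an identity can be sketched concretely. Below is a minimal illustration using Python's standard-library `hmac`; the key name and functions are hypothetical, and a real provenance scheme (e.g. content-credential systems) would use asymmetric key pairs so anyone can verify without holding the secret:

```python
import hashlib
import hmac

# Hypothetical publisher secret; real systems use asymmetric key pairs
# so verification does not require sharing the secret.
PUBLISHER_KEY = b"newsroom-secret-key"

def sign_image(image_bytes: bytes) -> str:
    """Bind image content to the key holder's identity via an HMAC tag."""
    return hmac.new(PUBLISHER_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_image(image_bytes: bytes, tag: str) -> bool:
    """Check the image is unmodified and was signed by the key holder."""
    expected = sign_image(image_bytes)
    return hmac.compare_digest(expected, tag)

photo = b"...raw image bytes..."
tag = sign_image(photo)
assert verify_image(photo, tag)             # authentic, untampered
assert not verify_image(photo + b"x", tag)  # any edit breaks the tag
```

The point is not the particular algorithm but the property: any change to the image, however subtle, invalidates the tag, so the question shifts from "does this look real?" to "who vouches for it?".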

We are still able to link a physical entity (a person) to a strong notion of digital identity. This suggests that 'is this a deepfake?' may not be the right question to ask.

If 'is this a deepfake' is the wrong question, what is the right one?

 The right questions to ask are: Where is this image coming from? Who is the source? How can I tell?

The sophistication of deepfakes may eventually evolve to the point where we can no longer distinguish between a real photo and an algorithmically generated fantasy.

In this world, the focus of the conversation should be less on the content of the image and more on where it came from, i.e., the source, the communication channel, the medium. In that sense, Marshall McLuhan's old wisdom that "the medium is the message" still applies, perhaps now more than ever.

https://techxplore.com/news/2024-04-deepfake-wrong.html

Sunday, April 7, 2024

Social networks (II since Mar24)

 Apr24

Meta, the parent company of Facebook, unveiled significant revisions to its guidelines concerning digitally produced and altered media on Friday, just ahead of the impending US elections that will serve as a test for its capacity to manage deceptive content stemming from emerging artificial intelligence technologies. According to Monika Bickert, Vice President of Content Policy at Meta, the social media behemoth will commence the application of “Made with AI” labels starting in May.

These labels will be affixed to AI-generated videos, images, and audio shared across Meta's platforms. This initiative marks an expansion of their existing policy, which had previously only addressed a limited subset of manipulated videos.

https://news.abplive.com/technology/meta-deepfakes-altered-media-us-presidential-elections-policy-change-strict-guidelines-1678040

Friday, April 5, 2024

AI clones

Feb24

Several terms have been used interchangeably for AI clones: AI replica, agent, digital twin, persona, personality, avatar, or virtual human. AI clones for people who have died have been called thanabots, griefbots, deadbots, deathbots and ghostbots, but there is so far no uniform term that specifies AI clones of living people. Deepfake is the term used for when an AI-altered or -generated image is misused to deceive others or spread disinformation.

Since 2019, I have interviewed hundreds of people about their views on digital twins or AI clones through my performance Elixir: Digital Immortality, based on a fictional tech startup that offers AI clones. The general audience response was one of curiosity, concern, and caution. A recent study similarly highlights three areas of concern about having an AI clone: "doppelgänger-phobia," identity fragmentation, and false living memories.

Potential Harmful Psychological Effects of AI Clones

https://www.psychologytoday.com/us/blog/urban-survival/202401/the-psychological-effects-of-ai-clones-and-deepfakes

Monday, February 19, 2024

The AI 're-mediated' the video!

Feb24

 https://tek.sapo.pt/multimedia/artigos/uma-simples-frase-pode-criar-um-video-feito-por-ia-veja-o-que-a-sora-da-openai-pode-fazer?elqTrackId=4ddec5c3f5e04ea3904d618a1df138d7&elq=e67b83bdea8c4ff6b5489bd86ddca626&elqaid=10709&elqat=1&elqCampaignId=10270


Wednesday, February 14, 2024

Fighting technology (II)

mar24

South Korea’s police forces are developing a new deepfake detection tool that they can use during criminal investigations.

The Korean National Police Agency (KNPA) announced on March 5, 2024 to South Korean press agency Yonhap that its National Office of Investigation (NOI) will deploy new software designed to detect whether video clips or image files have been manipulated using deepfake techniques.

Unlike most existing AI detection tools, traditionally trained on Western-based data, the model behind this new software was trained on 5.2 million pieces of data from 5400 Koreans and related figures. It adopts “the newest AI model to respond to new types of hoax videos that were not pretrained,” KNPA said.

https://www.infosecurity-magazine.com/news/south-korea-police-deepfake/


Feb24

Major technology companies signed a pact Friday to voluntarily adopt “reasonable precautions” to prevent artificial intelligence tools from being used to disrupt democratic elections around the world.

Executives from Adobe, Amazon, Google, IBM, Meta, Microsoft, OpenAI and TikTok gathered at the Munich Security Conference to announce a new framework for how they respond to AI-generated deepfakes that deliberately trick voters. Twelve other companies — including Elon Musk’s X — are also signing on to the accord.

“Everybody recognizes that no one tech company, no one government, no one civil society organization is able to deal with the advent of this technology and its possible nefarious use on their own,” said Nick Clegg, president of global affairs for Meta, the parent company of Facebook and Instagram, in an interview ahead of the summit.

https://apnews.com/article/ai-generated-election-deepfakes-munich-accord-meta-google-microsoft-tiktok-x-c40924ffc68c94fac74fa994c520fc06


fev24

Technology giants are planning a new industry “accord” to fight back against “deceptive artificial intelligence election content” that is threatening the integrity of major democratic elections across the world this year.

A draft Tech Accord, seen by POLITICO, showed technology companies want to work together to create tools like watermarks and detection techniques to spot, label and debunk “deepfake” AI-manipulated images and audio of public figures. The pledge also includes commitments to open up more about how the firms are fighting AI-generated disinformation on their platforms.

“We affirm that the protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders,” the draft reads.

https://www.politico.eu/article/tech-accord-industry-munich-security-conference-deepfake-ai-election-content/


Sunday, February 11, 2024

Non-consensual pornography (PNC) used against politicians and celebrities

mar24

Nearly 4,000 celebrities found to be victims of deepfake pornography

Channel 4 News finds 255 British people, including its presenter Cathy Newman, to have been doctored into explicit images

https://www.theguardian.com/technology/2024/mar/21/celebrities-victims-of-deepfake-pornography



Feb24

Cara Hunter, a Northern Irish politician, was only weeks away from the country’s 2022 legislative elections when she received a WhatsApp message from someone she didn’t know. The man quickly asked her if she was the woman in an explicit video — a 40-second clip that he shared with the then-24-year-old. Opening the video, Hunter was confronted with an AI-generated deepfake video of herself performing graphic sexual acts. Within days, the false clip had gone viral, and the Northern Irishwoman was bombarded with direct messages from men around the world with increasingly sexual and violent messages.

“It was a campaign to undermine me politically,” Hunter, who won her 2022 Northern Ireland Assembly seat by just a few votes, told me. “They had felt that, because they saw an explicit video of someone who looked like me, it was OK to send me nasty messages. It has left a tarnished perception of me that I can’t control. I’ll have to pay for the repercussions of this for the rest of my life.”

Before we get into this, let’s be very clear: Deepfake pornography, unfortunately, is not new. It’s been around for almost a decade and almost entirely targets women. It regained public attention after Taylor Swift became the latest victim when AI-generated graphic images of her were created, mostly via a 4Chan message board, and then shared widely on X, formerly known as Twitter. I also don’t want to mansplain what every woman reading this already knows. This is all about power. Power to demean women; power to control how women can participate in public life; power to silence voices that men (and it’s almost entirely men) believe are not worthy. Don’t take my word for it. There’s been some great reporting on this, for years. (Here, here and here.) I could find no examples of male politicians targeted with such sexual abuse.

https://www.politico.eu/newsletter/digital-bridge/deepfake-porn-is-political-violence/

Tuesday, January 2, 2024

virtual rape of girl in metaverse

 jan24

Police investigate virtual rape of girl in metaverse

The girl is said to have been left distraught after her avatar - or digital character - was attacked online by several adult men in a virtual 'room'

Detectives are investigating the first case of alleged rape in the metaverse after a child was “attacked” playing a virtual reality video game.

The girl, aged under 16, wasn’t physically injured as there was no physical assault. But she is said to have been left distraught after her avatar - or digital character - was attacked online by several adult men in a virtual “room”.

She had been wearing an immersive headset during the “attack”, the Daily Mail reported.

Police leaders are concerned she suffered the same psychological and emotional trauma as someone raped in the real world as the ‘VR’ experience is designed to be completely immersive.

Ian Critchley, the National Police Chiefs’ Council’s lead for child protection and abuse investigation, said: “The metaverse creates a gateway for predators to commit horrific crimes against children.

The unusual case has prompted questions about whether the police should be pursuing online offences while they and prosecutors struggle with a backlog of real rape cases.

Details of the virtual reality case are said to have been kept secret to protect the child involved, amid fears a prosecution may never be possible.

One senior officer familiar with the case told the Mail: “This child experienced psychological trauma similar to that of someone who has been physically raped.

“There is an emotional and psychological impact on the victim that is longer term than any physical injuries. It poses a number of challenges for law enforcement, given [that] current legislation is not set up for this.”

Donna Jones, chairman of the Association of Police and Crime Commissioners, said women and children deserve greater protection, adding: “We need to update our laws because they have not kept pace with the risks of harm that are developing from artificial intelligence and offending on platforms like the metaverse.”

LINK