Monday, April 29, 2024


MAR24

What are some ways to spot deepfakes?

In the near term, you can still often trust your instincts about deepfakes: the mouth moves out of sync with the rest of the body, reflections run at a different frame rate, and so on.

In the medium term, we can use deepfake detection software, but detection is an arms race with generation, and its accuracy will likely decline over time as deepfake algorithms improve.

In the long term, deepfakes may eventually become indistinguishable from real imagery. When that day comes, we will no longer be able to rely on detection as a strategy. So, what do we have left that AI cannot deepfake? Here are two things: physical reality itself, and strong cryptography, which is about strongly and verifiably connecting data to a digital identity.

Cryptography is what we use to keep browsing histories private and passwords secret, and it is what lets you prove you are you. The modern internet could not exist without it. In the world of computation, AI is just an algorithm like any other, and cryptography is designed to be hard for any algorithm to break.
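To make that concrete, here is a minimal sketch in Python (using the widely deployed `cryptography` package) of how a digital signature binds data to an identity: whoever holds the private key signs the image bytes, and anyone with the matching public key can check that those exact bytes came from that key holder. The keys and image bytes here are placeholders for illustration, not any particular product or standard.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# The publisher (a camera, a newsroom, an individual) holds the private key;
# verifiers only ever need the corresponding public key.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

image_bytes = b"raw image data goes here"  # placeholder for real image bytes

# Signing binds these exact bytes to the holder of the private key.
signature = private_key.sign(image_bytes)

# Verification fails if even one bit of the image changes,
# or if the signature was made with a different key.
try:
    public_key.verify(signature, image_bytes)
    print("Valid: this image is exactly what the key holder signed.")
except InvalidSignature:
    print("Invalid: the image was altered or signed by someone else.")
```

Generative models do not help an attacker here: forging a valid signature without the private key is precisely the kind of problem cryptography is designed to make infeasible for any algorithm.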

We are still able to link a physical entity (a person) to a strong notion of digital identity. This suggests that 'is this a deepfake?' may not be the right question to be asking.

If 'is this a deepfake' is the wrong question, what is the right one?

 The right questions to ask are: Where is this image coming from? Who is the source? How can I tell?
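As one hedged sketch of answering "How can I tell?", suppose (hypothetically) that the source publishes a SHA-256 fingerprint of the original image on a channel the reader already trusts; the reader can then recompute the fingerprint on the copy they received and compare the two.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Return the SHA-256 hex digest of the content, used as a tamper-evident fingerprint."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical workflow: the source posts this value alongside the original image.
published = fingerprint(b"original image bytes as released by the source")

# The reader recomputes the fingerprint on whatever copy reached them.
received = fingerprint(b"image bytes as received through some channel")

if received == published:
    print("The copy matches what the source published.")
else:
    print("The copy differs from what the source published.")
```

A matching fingerprint only tells you the copy is what the source published; whether to trust the source is still a judgment about the channel itself, which is exactly why these questions are about provenance rather than pixels.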

Deepfakes may eventually become sophisticated enough that we can no longer distinguish between a real photo and an algorithmically generated fantasy.

In this world, the focus of the conversation should be less on the content of the image and more on where it came from, i.e., the source, the communication channel, the medium. In that sense, Marshall McLuhan's old wisdom that "the medium is the message" still applies, perhaps now more than ever.

https://techxplore.com/news/2024-04-deepfake-wrong.html

Sunday, April 7, 2024

Social networks (II since Mar24)

 Apr24

Meta, the parent company of Facebook, unveiled significant revisions to its guidelines on digitally produced and altered media on Friday, just ahead of US elections that will test its capacity to manage deceptive content created with emerging artificial intelligence technologies. According to Monika Bickert, Vice President of Content Policy at Meta, the social media giant will begin applying “Made with AI” labels in May.

These labels will be affixed to AI-generated videos, images, and audio shared across Meta's platforms. This initiative marks an expansion of their existing policy, which had previously only addressed a limited subset of manipulated videos.

https://news.abplive.com/technology/meta-deepfakes-altered-media-us-presidential-elections-policy-change-strict-guidelines-1678040

Friday, April 5, 2024

AI clones

Feb24

Several terms have been used interchangeably for AI clones: AI replica, agent, digital twin, persona, personality, avatar, or virtual human. AI clones for people who have died have been called thanabots, griefbots, deadbots, deathbots and ghostbots, but there is so far no uniform term that specifies AI clones of living people. Deepfake is the term used for when an AI-altered or -generated image is misused to deceive others or spread disinformation.

Since 2019, I have interviewed hundreds of people about their views on digital twins or AI clones through my performance Elixir: Digital Immortality, based on a fictional tech startup that offers AI clones. The general audience response was one of curiosity, concern, and caution. A recent study similarly highlights three areas of concern about having an AI clone: "doppelgänger-phobia," identity fragmentation, and false living memories.

Potential Harmful Psychological Effects of AI Clones

https://www.psychologytoday.com/us/blog/urban-survival/202401/the-psychological-effects-of-ai-clones-and-deepfakes