Sunday, October 29, 2023

The Blockchain Solution

mar24

BLOCKCHAIN JUSTICE:

Exploring Decentralising Dispute Resolution Across Borders

Abstract

It is well known that the raison d'être of Distributed Ledger Technology (DLT) is to enable peer-to-peer transactions that do not require Trusted Third Parties (TTPs). Commercial security is a major concern for users in this new era: intermediaries are increasingly seen as security holes and are removed from protocols as a result of a growing desire to maintain control over transactions. The need for independence from TTPs has evolved into a counterculture that moves blockchainers away from central authority, the courts and the world as we know it. To date, existing online dispute resolution (ODR) processes in DLT and related tools such as smart contracts do not reflect the vision of blockchain as a counterculture: they rely exclusively on adjudicative methods, in which one or more TTPs decide via on-chain incentivised voting systems. This paper discusses why non-adjudicative methods should take cultural priority over adjudicative ones, showing why blockchainers might prefer them on risk-management and distrust grounds. Furthermore, we introduce a prototype of a non-adjudicative ODR model ("Aspera") in which users retain total control over the outcome of the dispute in a TTP-free environment.

[PDF] Blockchain Justice: Exploring Decentralising Dispute Resolution Across Borders

C Poncibò, A Gangemi, GS Ravot - Journal of Law, Market & Innovation, 2024

mar24

Two months ago, media giant Fox Corp. partnered with Polygon Labs, the team behind the Ethereum-focused layer-2 blockchain, to tackle deepfake distrust.

Fox and Polygon launched Verify, a protocol that aims to protect their IP while letting consumers verify the authenticity of content. And since then, government regulatory committees, publishers and others have seen this as a viable solution to a “today problem,” Melody Hildebrandt, CTO of Fox Corp., said on TechCrunch’s Chain Reaction podcast.

Hildebrandt said she’s bullish that more news outlets, media companies and others will integrate this technology as AI technology goes into the mainstream. It may be beneficial to both AI companies and creators: The models gain knowledge and outlets and individuals can verify their work.

And it’s important for end users, who are uncertain about whether the content they’re consuming is trustworthy or not, Mike Blank, COO at Polygon Labs, said during the episode.

“There’s obviously the beat on the hill,” Hildebrandt said. Although most publishers want to participate in this type of ecosystem, they don’t want to sign away “all their crown jewels,” she added. This means imposing some technical guardrails that allow creators to get ahead, but still being able to maintain some optionality in the future.

https://techcrunch.com/2024/03/14/blockchain-tech-could-be-the-answer-to-uncovering-deepfakes-and-validating-content/


dez23

The crux of the problem is that image-generating tools like DALL-E 2 and Midjourney make it easy for anyone to create realistic-but-fake photos of events that never happened, and similar tools exist for video. While the major generative-AI platforms have protocols to prevent people from creating fake photos or videos of real people, such as politicians, plenty of hackers delight in “jailbreaking” these systems and finding ways around the safety checks. And less-reputable platforms have fewer safeguards.

Against this backdrop, a few big media organizations are making a push to use the C2PA’s content credentials system to allow Internet users to check the manifests that accompany validated images and videos. Images that have been authenticated by the C2PA system can include a little “cr” icon in the corner; users can click on it to see whatever information is available for that image—when and how the image was created, who first published it, what tools they used to alter it, how it was altered, and so on. However, viewers will see that information only if they’re using a social-media platform or application that can read and display content-credential data.

https://spectrum.ieee.org/deepfakes-election
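The checking step described above can be sketched in code. This is a toy stand-in, not the actual C2PA format (real content credentials are CBOR/JUMBF structures signed with COSE and X.509 certificates); it only illustrates the core idea that a manifest binds a content hash and provenance assertions to a signature that a viewer can verify. All names here (`make_manifest`, `SIGNING_KEY`, etc.) are invented for the example.

```python
# Illustrative sketch only: a "manifest" binds a content hash plus
# provenance assertions (when/how created, edits applied) to a signature.
# HMAC stands in for the real public-key signature scheme.
import hashlib
import hmac
import json

SIGNING_KEY = b"publisher-secret"  # stand-in for the publisher's signing key

def make_manifest(asset: bytes, assertions: dict) -> dict:
    """Create a signed manifest binding provenance claims to the asset."""
    claim = {
        "asset_sha256": hashlib.sha256(asset).hexdigest(),
        "assertions": assertions,
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["signature"] = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    return claim

def verify_manifest(asset: bytes, manifest: dict) -> bool:
    """Check the signature, and that the asset still matches its hash."""
    claim = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claim, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, "sha256").hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # manifest was tampered with
    return claim["asset_sha256"] == hashlib.sha256(asset).hexdigest()

photo = b"...image bytes..."
m = make_manifest(photo, {"created_by": "News Desk", "tool": "camera"})
assert verify_manifest(photo, m)             # authentic, untouched
assert not verify_manifest(photo + b"x", m)  # any edit breaks the binding
```

This is also why, as the article notes, display support matters: the manifest travels with the image, but users only benefit if the platform they are on actually runs the verification and shows the result.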


out23

So, how do we combat this? One word: Blockchain. Yeah, the same technology that’s behind cryptocurrencies like Bitcoin can also be a safeguard against deepfakes.

To dig deeper, you can read the book Blockchain Bubble or Revolution: The Future of Bitcoin, Blockchains, and Cryptocurrencies.

Blockchain can create a secure, immutable record of digital assets, including videos and photos. If you’ve got a personal community on social media, you can use blockchain to verify the authenticity of content.

When a video is uploaded, it can be timestamped and recorded on a blockchain. If someone tries to pass off a deepfake as real, the blockchain record can be checked to confirm its legitimacy.

https://nicole-ven.medium.com/how-to-outsmart-deepfakes-the-one-solution-that-actually-works-2548eefa5455
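The upload-then-verify flow above can be sketched with a few lines of Python. This is a minimal illustration, not a real blockchain: an append-only list of hash-linked blocks stands in for the ledger, and all names (`register`, `is_registered`, `ledger`) are invented for the example.

```python
# Toy ledger sketch: each block stores a video's SHA-256 fingerprint, a
# timestamp, and the previous block's hash, so back-dating or editing an
# entry would break every later link in the chain.
import hashlib
import time

ledger = [{"hash": "0" * 64, "video_sha256": None, "timestamp": 0}]  # genesis

def block_hash(block: dict) -> str:
    s = f'{block["video_sha256"]}|{block["timestamp"]}|{block["prev"]}'
    return hashlib.sha256(s.encode()).hexdigest()

def register(video: bytes) -> str:
    """At upload time, timestamp the video's fingerprint on the ledger."""
    block = {
        "video_sha256": hashlib.sha256(video).hexdigest(),
        "timestamp": int(time.time()),
        "prev": ledger[-1]["hash"],
    }
    block["hash"] = block_hash(block)
    ledger.append(block)
    return block["video_sha256"]

def is_registered(video: bytes) -> bool:
    """Later, anyone can check whether this exact file was recorded."""
    digest = hashlib.sha256(video).hexdigest()
    return any(b["video_sha256"] == digest for b in ledger[1:])

original = b"...original video bytes..."
register(original)
assert is_registered(original)         # matches the recorded fingerprint
assert not is_registered(b"deepfake")  # a tampered file has no record
```

Note the limitation implied by the design: the ledger proves that a particular file existed at a particular time, not that its content is true; a deepfake registered early would verify just as well.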

Wednesday, October 4, 2023

elections, continued

apr24

The Democratic Challenges of Deepfake Technology

https://consoc.org.uk/democratic-challenges-of-deepfake-tech/


mar24 GB

MPs are freaking out that artificial intelligence-powered deepfakes could influence Britain’s upcoming general election. They’re probably right.

Fears have only grown in recent months as Prime Minister Rishi Sunak, Labour Leader Keir Starmer and London Mayor Sadiq Khan have seen their identities spoofed in fake video or audio clips.

“It is one of those ‘the future is here’ moments,” former Justice Secretary Robert Buckland told POLITICO, adding that a “combination” of responses will be needed.
https://www.politico.eu/article/united-kingdom-deepfakes-election-rishi-sunak-keir-starmer-sadiq-khan/

fev24 Pakistan
One day before Pakistan's general election, deepfakes targeting Imran Khan's PTI started circulating on social media, where Khan and other PTI members allegedly call for an election boycott. We tell you more in this edition of Truth or Fake.
https://www.youtube.com/watch?v=j-byJ_pGknA

fev24

Days before a pivotal election in Slovakia to determine who would lead the country, a damning audio recording spread online in which one of the top candidates seemingly boasted about how he’d rigged the election.

And if that wasn’t bad enough, his voice could be heard on another recording talking about raising the cost of beer.

The recordings immediately went viral on social media, and the candidate, who is pro-NATO and aligned with Western interests, was defeated in September by an opponent who supported closer ties to Moscow and Russian President Vladimir Putin.

While the number of votes swayed by the leaked audio remains uncertain, two things are now abundantly clear: The recordings were fake, created using artificial intelligence; and US officials see the episode in Europe as a frightening harbinger of the sort of interference the United States will likely experience during the 2024 presidential election.
https://edition.cnn.com/2024/02/01/politics/election-deepfake-threats-invs/index.html

jan24

Rise of election deepfake risks

CNBC’s Julia Boorstin reports on the risks around deepfakes
https://www.cnbc.com/video/2024/01/18/rise-of-election-deepfake-risks.html

jan24
Rishi Sunak
https://www.wionews.com/videos/uk-fake-rishi-sunak-ads-spark-concerns-facebook-flooded-with-deepfake-ads-of-uk-pm-680012

jan24
With elections due in India, Indonesia, Bangladesh and Pakistan in the coming weeks, misinformation is rife on social media platforms, with deepfakes - video or audio made using AI and broadcast as authentic - being particularly concerning, say tech ...

In India, where more than 900 million people are eligible to vote, Modi has said deepfake videos are a "big concern", and authorities have warned social media platforms they could lose their safe-harbour status that protects them from liability for third-party content posted on their sites if they do not act.

In Indonesia - where more than 200 million voters will go to the polls on Feb. 14 - deepfakes of all three presidential candidates and their running mates are circulating online, and have the potential to influence election outcomes, said Nuurrianti ...


dez23

The Presidency of Moldova on Friday denied statements attributed to President Maia Sandu in a new deepfake video made with the help of artificial intelligence, AI.

“In the context of the hybrid war against Moldova and its democratic leadership, the image of the head of state was used by falsifying the image and sound. The purpose of these fake images is to create mistrust and division in society, consequently to weaken the democratic institutions of Moldova,” a press release said.

The video made with the help of artificial intelligence appeared on Friday on the Telegram channel “Sandu Oficial” and was shared by several Telegram channels used to spread fakes, especially to Russian-speaking audiences.
https://balkaninsight.com/2023/12/29/moldova-dismisses-deepfake-video-targeting-president-sandu/
+
https://www.eu-ocs.com/moldova-president-dismisses-russian-deepfake/


dez23

SINGAPORE, Dec 29 — Prime Minister Lee Hsien Loong has urged the public not to respond to deepfake videos of him promising guaranteed returns on investments after one such video emerged on social media platforms in recent days.

In a post on his Facebook page today (December 29) morning, Lee said he was aware of several videos circulating of him “purporting to promote crypto scams”, adding that Deputy Prime Minister Lawrence Wong had also been targeted.

“The scammers use AI (artificial intelligence) technology to mimic our voices and images,” Lee said.
https://www.malaymail.com/news/singapore/2023/12/29/singapore-pm-lee-urges-public-not-to-respond-to-deepfake-ai-videos-featuring-him-or-lawrence-wong-promoting-crypto/109709


dez23

Imran Khan—Pakistan’s Jailed Ex-Leader—Uses AI Deepfake To Address Online Election Rally
Siladitya Ray
Forbes Staff

Dec 18, 2023, 07:50am EST
Updated Dec 18, 2023, 07:50am EST

Former Pakistani Prime Minister Imran Khan, who is serving a three-year prison sentence, used AI-generated voice and video in a clip to campaign for his party ahead of the country’s upcoming general election, spotlighting the potential use of AI and deepfakes as major polls are scheduled in the U.S., India, European Union, Russia, Taiwan and beyond in 2024.

Khan’s party Pakistan Tehreek-e-Insaf (PTI) held an online campaign rally that featured an AI-generated video of the former leader addressing his supporters and urging them to vote in large numbers for his party.

The video clip, which is about four minutes long, features an AI-generated voice resembling Khan’s delivering the speech and briefly includes an AI-generated deepfake video of the jailed leader sitting in front of a Pakistani flag.

In the video, Khan tells his supporters that his party has been barred from holding public rallies and talks about his party members being targeted, kidnapped and harassed.

PTI social media lead Jibran Ilyas said the content of the video is based on notes provided by Khan from prison, adding that the AI-generated voice feels like a “65-70%” match of the former Prime Minister.
The nearly five-hour livestream of the virtual rally has clocked up more than 1.5 million views on YouTube, although Khan’s AI-generated clip is only around four and a half minutes long.

https://www.forbes.com/sites/siladityaray/2023/12/18/imran-khan-pakistans-jailed-ex-leader-uses-ai-deepfake-to-address-online-election-rally/?sh=6e5c48e55903


dez23

As two billion people around the world go to voting stations next year in fifty countries, there is a crucial question: how can we build resilience into our democracy in an era of audiovisual manipulation? When AI can blur the lines between reality and fiction with increasing credibility and ease, discerning truth from falsehood becomes not just a technological battle but a fight to uphold democracy.

From conversations with journalists, activists, technologists and other communities impacted by generative AI and deepfakes, I have learnt that the effects of synthetic media on democracy are a mix of new, old, and borrowed challenges.
https://www.cfr.org/blog/protect-democracy-deepfake-era-we-need-bring-voices-those-defending-it-frontlines

dez23

Deepfakes for $24 a month — how AI is disrupting Bangladesh’s election

Ahead of the South Asian nation going to the polls in January, AI-generated disinformation has become a growing problem
Policymakers around the world are worrying over how AI-generated disinformation can be harnessed to try to mislead voters and inflame divisions ahead of several big elections next year.

In one country it is already happening: Bangladesh.

The South Asian nation of 170mn people is heading to the polls in early January, a contest marked by a bitter and polarising power struggle between incumbent Prime Minister Sheikh Hasina and her rivals, the opposition Bangladesh Nationalist party.

Pro-government news outlets and influencers in Bangladesh have in recent months promoted AI-generated disinformation created with cheap tools offered by artificial intelligence start-ups. In one purported news clip, an AI-generated anchor lambasts the US, a country that Sheikh Hasina’s government has criticised ahead of the polls. A separate deepfake video, which has since been removed, showed an opposition leader equivocating over support for Gazans, a potentially ruinous position in the Muslim-majority country with strong public sympathy for Palestinians.

Public pressure is rising on tech platforms to crack down on misleading AI content ahead of several big elections expected in 2024, including in the US, the UK, India and Indonesia. In response, Google and Meta have recently announced policies to start requiring campaigns to disclose if political adverts have been digitally altered. But the examples from Bangladesh show not only how these AI tools can be exploited in elections but also the difficulty of controlling their use in smaller markets that risk being overlooked by American tech companies.

Miraj Ahmed Chowdhury, the managing director of Bangladesh-based media research firm Digitally Right, said that while AI-generated disinformation was “still at an experimentation level”, with most of it created using conventional photo or video editing platforms, it showed how it could take off. “When they have technologies and tools like AI, which allow them to produce misinformation and disinformation at a mass scale, then you can imagine how big that threat is,” he said, adding that “a platform’s attention to a certain jurisdiction depends on how important it is as a market”.

Global focus on the ability to use AI to create misleading or false political content has risen over the past year with the proliferation of powerful tools such as OpenAI’s ChatGPT and AI video generators. Earlier this year, the US Republican National Committee released an attack ad using AI-generated images to depict a dystopian future under President Joe Biden. And YouTube suspended several accounts in Venezuela using AI-generated news anchors to promote disinformation favourable to President Nicolás Maduro’s regime.

In Bangladesh, the disinformation fuels a tense political climate ahead of polls in early January. Sheikh Hasina has cracked down on the opposition. Thousands of leaders and activists have been arrested in what critics warn amounts to an attempt to rig polls in her favour, prompting the US to publicly pressure her government to ensure free and fair elections.

In one video posted on X in September by BD Politico, an online news outlet, a news anchor for “World News” presented a studio segment, interspersed with images of rioting, in which he accused US diplomats of interfering in Bangladeshi elections and blamed them for political violence. The video was made using HeyGen, a Los Angeles-based AI video generator that allows customers to create clips fronted by AI avatars for as little as $24 a month. The same anchor, named “Edward”, can be seen in HeyGen’s promotional content as one of several avatars, themselves generated from real actors, available to the platform’s users. X, BD Politico and HeyGen did not respond to requests for comment.

Screenshot: an anti-opposition disinformation video posted on Facebook featuring a deepfake of exiled BNP leader Tarique Rahman.

Other examples include anti-opposition deepfake videos posted on Meta’s Facebook, including one that falsely purports to show exiled BNP leader Tarique Rahman suggesting the party “keep quiet” about Gaza to avoid displeasing the US. The Tech Global Institute, a think-tank, and media non-profit Witness both concluded the fake video was likely AI-generated. AKM Wahiduzzaman, a BNP official, said that his party asked Meta to remove such content but “most of the time they don’t bother to reply”. Meta removed the video after being contacted by the Financial Times for comment.

In another deepfake video, created using the Tel Aviv-based AI video platform D-ID, the BNP’s youth wing leader Rashed Iqbal Khan is shown lying about his age in what the Tech Global Institute said was an effort to discredit him. D-ID did not respond to a request for comment.

A primary challenge in identifying such disinformation is the lack of reliable AI-detection tools, said Sabhanaz Rashid Diya, a Tech Global Institute founder and former Meta executive, with off-the-shelf products particularly ineffective at identifying non-English-language content. She added that the solutions proposed by large tech platforms, which have focused on regulating AI in political adverts, will have limited effect in countries such as Bangladesh, where ads are a smaller part of political communication. “The solutions that are coming out to address this onslaught of AI misinformation are very western-centric.” Tech platforms “are not taking this as seriously in other parts of the world”.

The problem is exacerbated by the lack of regulation, or its selective enforcement, by authorities. Bangladesh’s Cyber Security Act, for example, has been criticised for giving the government draconian powers to crack down on dissent online. Bangladesh’s internet regulator did not respond to a request for comment about what it is doing to control online misinformation.

A greater threat than the AI-generated content itself, Diya argued, is the prospect that politicians and others could use the mere possibility of deepfakes to discredit uncomfortable information. In neighbouring India, for example, a politician responded to leaked audio in which he allegedly discussed corruption in his party by alleging that it was fake, a claim subsequently dismissed by fact-checkers. “It’s easy for a politician to say, ‘This is deepfake’, or ‘This is AI-generated’, and sow a sense of confusion,” she said. “The challenge for the global south . . . is going to be how the idea of AI-generated content is being weaponised to erode what people believe to be true versus false.”

Additional reporting by Jyotsna Singh in New Delhi

nov23

Urgent general election warning as sinister deepfakes 'will play a part' in 2024 vote

EXCLUSIVE: The UK's former top civil servant has warned that Britain must be vigilant to the real threat of artificial intelligence powered technologies before next year's election.

https://www.express.co.uk/news/politics/1837015/deepfakes-uk-general-election-2024-simon-mcdonald-spt


nov23

Buenos Aires — In the final weeks of campaigning, Argentine president-elect Javier Milei published a fabricated image depicting his Peronist rival, Sergio Massa, as an old-fashioned communist in military garb, his hand raised aloft in salute.

The apparently AI-generated image drew about 3-million views when Milei posted it on a social media account, highlighting how the rival campaign teams used artificial intelligence (AI) technology to catch voters’ attention in a bid to sway the race.

“There were troubling signs of AI use” in the election, said Darrell West, a senior fellow at the Center for Technology Innovation at the Washington DC-based Brookings Institution.

“Campaigners used AI to deliver deceptive messages to voters, and this is a risk for any election process,” he said.

Right-wing libertarian Milei won Sunday’s run-off with 56% of the vote as he tapped into voter anger with the political mainstream, including Massa’s dominant Peronist party, but both sides turned to AI during the fractious election campaign.

Massa’s team distributed a series of stylised AI-generated images and videos through an unofficial Instagram account named “AI for the Homeland”.

In one, the centre-left economy minister was depicted as a Roman emperor. In others, he was shown as a boxer knocking out a rival, starring on a fake cover of New Yorker magazine and as a soldier in footage from the 1917 war film.

Other AI-generated images set out to undermine and vilify Milei, portraying the wild-haired economist and his team as enraged zombies and pirates.

The use of increasingly accessible AI tech in political campaigning is a global trend, tech and rights specialists say, raising concerns about the potential implications for important upcoming elections in countries including the US, Indonesia and India in 2024.

nov23

Britain’s cybersecurity agency said Tuesday that artificial intelligence poses a threat to the country’s next national election, and cyberattacks by hostile countries and their proxies are proliferating and getting harder to track.

The National Cyber Security Center said “this year has seen the emergence of state-aligned actors as a new cyber threat to critical national infrastructure” such as power, water and internet networks.

The center — part of Britain’s cyberespionage agency, GCHQ — said in its annual review that the past year also has seen “the emergence of a new class of cyber adversary in the form of state-aligned actors, who are often sympathetic to Russia’s further invasion of Ukraine and are ideologically, rather than financially, motivated.”

https://apnews.com/article/uk-cyber-threats-ai-elections-b6482d3127ae524551e15887b3fdb01b




nov23

Meta, parent company of Instagram and Facebook, will require political advertisers around the world to disclose any use of artificial intelligence in their ads, starting next year, the company said Wednesday, as part of a broader move to limit so-called “deepfakes” and other digitally altered misleading content.

The rule is set to take effect next year, the company added, ahead of the 2024 US election and other future elections worldwide.

https://edition.cnn.com/2023/11/08/tech/meta-political-ads-ai-deepfakes/index.html



out23

Election Integrity in the Age of Generative AI: Fact vs. Fiction

  • Misinformation and falsehoods about election processes and integrity, spread before and since the 2020 election, have eroded trust in American democracy.
  • In the last year, great attention has been devoted to the ways that powerful new artificial intelligence tools might disrupt the economy and the wider society.
  • If misused, AI could add new dimensions to the challenges election officials already face. This article, the first of two, addresses specific examples. The second will discuss potential remedies.

https://www.governing.com/security/election-integrity-in-the-age-of-generative-ai-fact-vs-fiction

out23

Deepfake technology could be harnessed by hostile states to sway the forthcoming general election, the head of MI5 has warned.

Ken McCallum said he was concerned that the technology could be used to create ‘all kinds of dissension and chaos in our societies’.

Speaking at a security summit in California, the Director General warned that artificial intelligence was also being used to amplify terrorism, disseminate propaganda and teach people how to build bombs.

https://www.dailymail.co.uk/news/article-12645927/Deepfake-technology-UKs-enemies-election-chaos-MI5-chief.html


out23

The United Kingdom wants to lead the world on AI safety, but at home it is struggling with its most urgent threat.

Fears over the proliferation of AI-generated media, known as deepfakes, intensified this weekend as an audio clip appearing to show U.K. opposition leader Keir Starmer swearing at staffers went viral.

MPs from across the British political spectrum swiftly warned the clip was fake on Sunday. But by Monday afternoon it was still gathering views on X, formerly known as Twitter, and approaching 1.5 million hits.

https://www.politico.eu/article/uk-keir-starmer-labour-party-deepfake-ai-politics-elections/


out23

Just two days before Slovakia’s elections, an audio recording was posted to Facebook. On it were two voices: allegedly, Michal Šimečka, who leads the liberal Progressive Slovakia party, and Monika Tódová from the daily newspaper Denník N. They appeared to be discussing how to rig the election, partly by buying votes from the country’s marginalized Roma minority.

Šimečka and Denník N immediately denounced the audio as fake. The fact-checking department of news agency AFP said the audio showed signs of being manipulated using AI. But the recording was posted during a 48-hour moratorium ahead of the polls opening, during which media outlets and politicians are supposed to stay silent. That meant, under Slovakia’s election rules, the post was difficult to widely debunk. And, because the post was audio, it exploited a loophole in Meta’s manipulated-media policy, which dictates that only faked videos, where a person has been edited to say words they never said, go against its rules.

The election was a tight race between two frontrunners with opposing visions for Slovakia. On Sunday it was announced that the pro-NATO party, Progressive Slovakia, had lost to SMER, which campaigned to withdraw military support for its neighbor, Ukraine.

https://www.wired.co.uk/article/slovakia-election-deepfakes

https://www.bloomberg.com/news/newsletters/2023-10-04/deepfakes-in-slovakia-preview-how-ai-will-change-the-face-of-elections