Thursday, November 2, 2023

Israel-Hamas

nov23

Fake babies, real horror: Deepfakes from the Gaza war increase fears about AI’s power to mislead


WASHINGTON (AP) — Among images of the bombed-out homes and ravaged streets of Gaza, some stood out for the utter horror: Bloodied, abandoned infants.

Viewed millions of times online since the war began, these images are deepfakes created using artificial intelligence. If you look closely you can see clues: fingers that curl oddly, or eyes that shimmer with an unnatural light — all telltale signs of digital deception.

The outrage the images were created to provoke, however, is all too real.

https://apnews.com/article/artificial-intelligence-hamas-israel-misinformation-ai-gaza-a1bb303b637ffbbb9cbc3aa1e000db47

out23

Emotive Deepfakes in Israel-Hamas War Further Cloud What’s Real

Bogus images include destroyed buildings, children in rubble

https://www.bloomberg.com/news/articles/2023-10-31/-emotive-deepfakes-in-israeli-war-further-cloud-what-s-real?embedded-checkout=true

Sunday, October 29, 2023

The Blockchain Solution

nov24

Fifteen years ago, in the context of the 2008 financial crisis, cryptography was revolutionized by the emergence of blockchain, a technology capable of recording information in blocks through operations on decentralized networks. Its first and best-known expression is bitcoin, a digital currency that quickly became one of the symbols of twenty-first-century culture. From that year on, numerous innovation projects began to emerge based on the possibilities opened up by blockchain, indicating that the implications of this technology for the material and symbolic forms of production of modern societies will be relevant over the long term. But like any technological innovation in its first steps, blockchain both provokes our imagination and raises many doubts.

https://scholar.google.com/scholar_url?url=https://www.academia.edu/download/112768344/Blockchain_e_midia.pdf%23page%3D16&hl=en&sa=X&d=18390474869243544200&ei=mP4xZ-fCG7616rQP-8Dk4A0&scisig=AFWwaeb-mnjCmdr3IMPRnja-Den7&oi=scholaralrt&hist=gzTQwrgAAAAJ:11441452008552841197:AFWwaeazj6lQL8sGufHlpdBUpJ92&html=&pos=0&folt=rel&fols=


mar24

BLOCKCHAIN JUSTICE:

Exploring Decentralising Dispute Resolution Across Borders

Abstract

It is well known that the raison d'être of Distributed Ledger Technology (DLT) is to enable peer-to-peer transactions that do not require Trusted Third Parties (TTPs). Commercial security is a major concern for users in this new era: intermediaries are increasingly seen as security holes and removed from protocols as a result of a growing desire to maintain control over transactions. The need for independence from TTPs has evolved into a counterculture that moves blockchainers away from central authority, the courts and the world as we know it. To date, all existing online dispute resolution (ODR) processes in DLT and related tools such as smart contracts do not reflect the vision of blockchain as a counterculture. They exclusively use adjudicative methods involving one or more TTPs deciding via on-chain incentivised voting systems. This paper aims to discuss why non-adjudicative methods shall have a cultural priority over adjudicative ones, showing why they might be preferred by blockchainers due to risk management and distrust concerns. Furthermore, we introduce a prototype of a non-adjudicative ODR model (“Aspera”) in which users can have total control over the outcome of the dispute in a TTP-free environment.

[PDF] Blockchain Justice: Exploring Decentralising Dispute Resolution Across Borders

C Poncibò, A Gangemi, GS Ravot - Journal of Law, Market & Innovation, 2024

mar24

Two months ago, media giant Fox Corp. partnered with Polygon Labs, the team behind the Ethereum-focused layer-2 blockchain, to tackle deepfake distrust.

Fox and Polygon launched Verify, a protocol that aims to protect their IP while letting consumers verify the authenticity of content. And since then, government regulatory committees, publishers and others have seen this as a viable solution to a “today problem,” Melody Hildebrandt, CTO of Fox Corp., said on TechCrunch’s Chain Reaction podcast.

Hildebrandt said she’s bullish that more news outlets, media companies and others will integrate this technology as AI technology goes into the mainstream. It may be beneficial to both AI companies and creators: The models gain knowledge and outlets and individuals can verify their work.

And it’s important for end users, who are uncertain about whether the content they’re consuming is trustworthy or not, Mike Blank, COO at Polygon Labs, said during the episode.

“There’s obviously the beat on the hill,” Hildebrandt said. Although most publishers want to participate in this type of ecosystem, they don’t want to sign away “all their crown jewels,” she added. This means imposing some technical guardrails that allow creators to get ahead while still maintaining some optionality in the future.

https://techcrunch.com/2024/03/14/blockchain-tech-could-be-the-answer-to-uncovering-deepfakes-and-validating-content/?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAALaAZSCTQdMczwORjo-l10C8PDdZoAyTu40zxJKutPUSwtPmcQYQNqSIlJ0B3sDmzRhpG32ApTDAymWpHDw38gpN4XKNnQie8FAabRNOrJC7ZH8GblkeLmKY7EXYnKpjmry8n5pc_7DNxOo1NwAga7RsOPOkzIDuAyceAqEPjV9A


dez23

The crux of the problem is that image-generating tools like DALL-E 2 and Midjourney make it easy for anyone to create realistic-but-fake photos of events that never happened, and similar tools exist for video. While the major generative-AI platforms have protocols to prevent people from creating fake photos or videos of real people, such as politicians, plenty of hackers delight in “jailbreaking” these systems and finding ways around the safety checks. And less-reputable platforms have fewer safeguards.

Against this backdrop, a few big media organizations are making a push to use the C2PA’s content credentials system to allow Internet users to check the manifests that accompany validated images and videos. Images that have been authenticated by the C2PA system can include a little “cr” icon in the corner; users can click on it to see whatever information is available for that image—when and how the image was created, who first published it, what tools they used to alter it, how it was altered, and so on. However, viewers will see that information only if they’re using a social-media platform or application that can read and display content-credential data.
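
The verification flow described above can be sketched in miniature: a publisher signs a record binding provenance claims to the exact image bytes, and a viewer recomputes the hash and checks the signature. This is an illustrative toy only, not the real C2PA format (which uses signed binary manifests and certificate chains); the field names and the HMAC signing scheme here are assumptions for demonstration:

```python
import hashlib
import hmac
import json

SECRET = b"publisher-signing-key"  # stand-in for a real private key / certificate

def make_manifest(image_bytes: bytes, creator: str, tool: str) -> dict:
    """Publisher side: bind provenance claims to the exact image bytes."""
    claims = {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "creator": creator,
        "tool": tool,
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    claims["signature"] = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return claims

def verify_manifest(image_bytes: bytes, manifest: dict) -> bool:
    """Viewer side: re-sign the claims and recompute the content hash."""
    claims = {k: v for k, v in manifest.items() if k != "signature"}
    payload = json.dumps(claims, sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return (
        hmac.compare_digest(expected, manifest["signature"])
        and claims["sha256"] == hashlib.sha256(image_bytes).hexdigest()
    )

image = b"...raw image bytes..."
manifest = make_manifest(image, creator="AP", tool="camera")
print(verify_manifest(image, manifest))          # authentic copy passes
print(verify_manifest(image + b"x", manifest))   # altered copy fails the hash check
```

Any edit to either the image or the claims breaks verification, which is the property the "cr" badge relies on; the real system additionally records an edit history rather than a single hash.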

https://spectrum.ieee.org/deepfakes-election


out23

So, how do we combat this? One word: Blockchain. Yeah, the same technology that’s behind cryptocurrencies like Bitcoin can also be a safeguard against deepfakes.

To understand it more, you can get and read this book. It’s called Blockchain Bubble or Revolution: The Future of Bitcoin, Blockchains, and Cryptocurrencies.

Blockchain can create a secure, immutable record of digital assets, including videos and photos. If you’ve got a personal community on social media, you can use blockchain to verify the authenticity of content.

When a video is uploaded, it can be timestamped and recorded on a blockchain. If someone tries to pass off a deepfake as real, the blockchain record can be checked to confirm its legitimacy.
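
As a minimal sketch of that idea, here is a toy in-memory chain (an assumption-laden illustration, not any real blockchain network): each upload is recorded as a block holding the video's SHA-256 digest, a timestamp, and the previous block's hash, so a later copy can be checked against the ledger and tampering with old records is detectable.

```python
import hashlib
import time

def sha256(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

class ToyLedger:
    """Append-only chain of (timestamp, content-hash) records."""

    def __init__(self):
        self.blocks = []  # each block: {"ts", "content_hash", "prev", "hash"}

    def record(self, content: bytes) -> dict:
        """Timestamp the content's digest and link it to the previous block."""
        prev = self.blocks[-1]["hash"] if self.blocks else "0" * 64
        block = {"ts": time.time(), "content_hash": sha256(content), "prev": prev}
        block["hash"] = sha256(f'{block["ts"]}{block["content_hash"]}{prev}'.encode())
        self.blocks.append(block)
        return block

    def is_recorded(self, content: bytes) -> bool:
        """Does an unaltered copy of this content appear on the chain?"""
        digest = sha256(content)
        return any(b["content_hash"] == digest for b in self.blocks)

    def chain_valid(self) -> bool:
        """Detect tampering: every block must hash correctly and link back."""
        prev = "0" * 64
        for b in self.blocks:
            expected = sha256(f'{b["ts"]}{b["content_hash"]}{prev}'.encode())
            if b["prev"] != prev or b["hash"] != expected:
                return False
            prev = b["hash"]
        return True

ledger = ToyLedger()
video = b"...original video bytes..."
ledger.record(video)
print(ledger.is_recorded(video))             # original content is on the ledger
print(ledger.is_recorded(b"deepfake copy"))  # a substituted file has no record
```

Note what this does and does not prove: the ledger shows that a given file existed at a given time, but says nothing about whether the original upload was itself authentic.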

https://nicole-ven.medium.com/how-to-outsmart-deepfakes-the-one-solution-that-actually-works-2548eefa5455

Wednesday, October 4, 2023

elections, continued

jul24
Olaf Scholz Deepfake: How a Deepfake impacts Public Trust
Abstract

In the digital age, deepfake technology significantly threatens the integrity of information and democratic discourse. This thesis examines the socio-political effects of deepfakes, focusing on cognitive and emotional responses and the broader implications for public trust in media and political institutions. Through the "AfDBan" deepfake video, this thesis provides a case-specific understanding of deepfakes' influence on public perceptions and trust. Utilizing an interpretive approach, experimental semi-structured interviews capture initial perceptions of the video's authenticity, responses upon learning the video's deepfake nature, and reflections on broader implications for public trust. Findings reveal that initial perceptions were influenced by digital literacy, pre-existing biases, and public trust itself. Higher digital literacy and critical thinking facilitated recognition of the deepfake, while lower digital literacy or stronger political biases increased the vulnerability to deception. Emotional responses ranged from indifference to fear. Deepfakes significantly impact public trust and democratic discourse, with concerns about the erosion of trust in media and political institutions, the amplification of misinformation, and the manipulation of public opinion. Multiple solutions are imaginable: enhancing media literacy, implementing technological safeguards, developing regulatory measures for transparency and accountability, promoting ethical standards in digital media, and fostering inclusive public discourse to counteract echo chambers and polarization.

nov23

Are AI deepfakes a threat to elections?



jun24
Examining the Implications of Deepfakes for Election Integrity

Hriday Ranka, Mokshit Surana, Neel Kothari, Veer Pariawala, Pratyay Banerjee, Aditya Surve, Sainath Reddy Sankepally, Raghav Jain, Jhagrut Lalwani, Swapneel Mehta (SimPPL)
mokshitsurana3110@gmail.com

Abstract

It is becoming cheaper to launch disinformation operations at scale using AI-generated content, in particular 'deepfake' technology. We have observed instances of deepfakes in political campaigns, where generated content is employed to both bolster the credibility of certain narratives (reinforcing outcomes) and manipulate public perception to the detriment of targeted candidates or causes (adversarial outcomes). We discuss the threats from deepfakes in politics, highlight model specifications underlying different types of deepfake generation methods, and contribute an accessible evaluation of the efficacy of existing detection methods. We provide this as a summary for lawmakers and civil society actors to understand how the technology may be applied in light of existing policies regulating its use. We highlight the limitations of existing detection mechanisms and discuss the areas where policies and regulations are required to address the challenges of deepfakes.
(in the file)

jun24

Report warns that UK is heading towards its ‘first deepfake election’

With voters heading to the polls in just a matter of weeks, a Centre for Policy Studies report warns that with deepfake content already spreading rapidly, this could become the UK’s first deepfake election.

You may have come across some suspicious videos on social media platforms recently. Perhaps you have seen a satirical TikTok video showing Rishi Sunak saying that he could not care less “about energy bills being over £3,000”? Or maybe a video of the shadow health secretary Wes Streeting calling his Labour colleague Diane Abbott a “silly woman” on Politics Live?

A report from the Centre for Policy Studies (CPS) called Facing Fakes warns that given the current general election campaign, we should be particularly wary of deepfakes aimed at spreading election misinformation and disinformation.

https://eandt.theiet.org/2024/06/14/can-you-spot-fake-new-report-warns-we-are-heading-towards-uks-first-deepfake-election



jun24

The Deepfake Crisis That Didn’t Happen

India’s election is an eye-opening lesson for the U.S. and other countries.
https://www.theatlantic.com/newsletters/archive/2024/06/the-deepfake-crisis-that-didnt-happen/678632/

apr24

The Democratic Challenges of Deepfake Technology

https://consoc.org.uk/democratic-challenges-of-deepfake-tech/


mar24 GB

MPs are freaking out that artificial intelligence-powered deepfakes could influence Britain’s upcoming general election. They’re probably right.

Fears have only grown in recent months as Prime Minister Rishi Sunak, Labour Leader Keir Starmer and London Mayor Sadiq Khan have seen their identities spoofed in fake video or audio clips.

“It is one of those ‘the future is here’ moments,” former Justice Secretary Robert Buckland told POLITICO, adding that a “combination” of responses will be needed.
https://www.politico.eu/article/united-kingdom-deepfakes-election-rishi-sunak-keir-starmer-sadiq-khan/

feb24 Pakistan
One day before Pakistan's general election, deepfakes targeting Imran Khan's PTI started circulating on social media, in which Khan and other PTI members allegedly call for an election boycott. We tell you more in this edition of Truth or Fake.
https://www.youtube.com/watch?v=j-byJ_pGknA

fev24

Days before a pivotal election in Slovakia to determine who would lead the country, a damning audio recording spread online in which one of the top candidates seemingly boasted about how he’d rigged the election.

And if that wasn’t bad enough, his voice could be heard on another recording talking about raising the cost of beer.

The recordings immediately went viral on social media, and the candidate, who is pro-NATO and aligned with Western interests, was defeated in September by an opponent who supported closer ties to Moscow and Russian President Vladimir Putin.

While the number of votes swayed by the leaked audio remains uncertain, two things are now abundantly clear: The recordings were fake, created using artificial intelligence; and US officials see the episode in Europe as a frightening harbinger of the sort of interference the United States will likely experience during the 2024 presidential election.
https://edition.cnn.com/2024/02/01/politics/election-deepfake-threats-invs/index.html

jan24

Rise of election deepfake risks

CNBC’s Julia Boorstin reports on the risks around deepfakes
https://www.cnbc.com/video/2024/01/18/rise-of-election-deepfake-risks.html

jan24
Rishi Sunak
https://www.wionews.com/videos/uk-fake-rishi-sunak-ads-spark-concerns-facebook-flooded-with-deepfake-ads-of-uk-pm-680012

jan24
With elections due in India, Indonesia, Bangladesh and Pakistan in the coming weeks, misinformation is rife on social media platforms, with deepfakes - video or audio made using AI and broadcast as authentic - being particularly concerning, say tech ... In India, where more than 900 million people are eligible to vote, Modi has said deepfake videos are a "big concern", and authorities have warned social media platforms they could lose their safe-harbour status that protects them from liability for third-party content posted on their sites if they do not act. In Indonesia - where more than 200 million voters will go to the polls on Feb. 14 - deepfakes of all three presidential candidates and their running mates are circulating online, and have the potential to influence election outcomes, said Nuurrianti ...


dez23

The Presidency of Moldova on Friday denied statements attributed to President Maia Sandu in a new deepfake video made with the help of artificial intelligence (AI).

“In the context of the hybrid war against Moldova and its democratic leadership, the image of the head of state was used by falsifying the image and sound. The purpose of these fake images is to create mistrust and division in society, consequently to weaken the democratic institutions of Moldova,” a press release said.

The video made with the help of artificial intelligence appeared on Friday on the Telegram channel “Sandu Oficial” and was shared by several Telegram channels used to spread fakes, especially to Russian-speaking audiences.
https://balkaninsight.com/2023/12/29/moldova-dismisses-deepfake-video-targeting-president-sandu/
+
https://www.eu-ocs.com/moldova-president-dismisses-russian-deepfake/


dez23

SINGAPORE, Dec 29 — Prime Minister Lee Hsien Loong has urged the public not to respond to deepfake videos of him promising guaranteed returns on investments after one such video emerged on social media platforms in recent days.

In a post on his Facebook page today (December 29) morning, Lee said he was aware of several videos circulating of him “purporting to promote crypto scams”, adding that Deputy Prime Minister Lawrence Wong had also been targeted.

“The scammers use AI (artificial intelligence) technology to mimic our voices and images,” Lee said.
https://www.malaymail.com/news/singapore/2023/12/29/singapore-pm-lee-urges-public-not-to-respond-to-deepfake-ai-videos-featuring-him-or-lawrence-wong-promoting-crypto/109709


dez23

Imran Khan—Pakistan’s Jailed Ex-Leader—Uses AI Deepfake To Address Online Election Rally
Siladitya Ray
Forbes Staff

Dec 18, 2023, 07:50am EST

Former Pakistani Prime Minister Imran Khan, who is serving a three-year prison sentence, used AI-generated voice and video in a clip to campaign for his party ahead of the country’s upcoming general election, spotlighting the potential use of AI and deepfakes as major polls are scheduled in the U.S., India, European Union, Russia, Taiwan and beyond in 2024.

Khan’s party Pakistan Tehreek-e-Insaf (PTI) held an online campaign rally featuring an AI-generated video of the former leader addressing his supporters and urging them to vote in large numbers for his party.

The video clip, which is about four minutes long, features an AI-generated voice resembling Khan’s delivering the speech and briefly includes an AI-generated deepfake video of the jailed leader sitting in front of a Pakistani flag.

In the video, Khan tells his supporters that his party has been barred from holding public rallies and talks about his party members being targeted, kidnapped and harassed.

PTI social media lead Jibran Ilyas said the content of the video is based on notes provided by Khan from prison, adding that the AI-generated voice feels like a “65-70%” match of the former Prime Minister.

The nearly five-hour livestream of the virtual rally has clocked up more than 1.5 million views on YouTube, although Khan’s AI-generated clip is only around four and a half minutes long.

https://www.forbes.com/sites/siladityaray/2023/12/18/imran-khan-pakistans-jailed-ex-leader-uses-ai-deepfake-to-address-online-election-rally/?sh=6e5c48e55903


dez23

As two billion people around the world go to voting stations next year in fifty countries, there is a crucial question: how can we build resilience into our democracy in an era of audiovisual manipulation? When AI can blur the lines between reality and fiction with increasing credibility and ease, discerning truth from falsehood becomes not just a technological battle but a fight to uphold democracy.

From conversations with journalists, activists, technologists and other communities impacted by generative AI and deepfakes, I have learnt that the effects of synthetic media on democracy are a mix of new, old, and borrowed challenges.
https://www.cfr.org/blog/protect-democracy-deepfake-era-we-need-bring-voices-those-defending-it-frontlines

dez23

Deepfakes for $24 a month — how AI is disrupting Bangladesh’s election

Ahead of South Asian nation going to the polls in January, AI-generated disinformation has become a growing problem
Policymakers around the world are worrying over how AI-generated disinformation can be harnessed to try to mislead voters and inflame divisions ahead of several big elections next year. 
In one country it is already happening: Bangladesh.

The South Asian nation of 170mn people is heading to the polls in early January, a contest marked by a bitter and polarising power struggle between incumbent Prime Minister Sheikh Hasina and her rivals, the opposition Bangladesh Nationalist party. Pro-government news outlets and influencers in Bangladesh have in recent months promoted AI-generated disinformation created with cheap tools offered by artificial intelligence start-ups.

In one purported news clip, an AI-generated anchor lambasts the US, a country that Sheikh Hasina’s government has criticised ahead of the polls. A separate deepfake video, which has since been removed, showed an opposition leader equivocating over support for Gazans, a potentially ruinous position in the Muslim-majority country with strong public sympathy for Palestinians.

Public pressure is rising on tech platforms to crack down on misleading AI content ahead of several big elections expected in 2024, including in the US, the UK, India and Indonesia. In response, Google and Meta have recently announced policies to start requiring campaigns to disclose if political adverts have been digitally altered. But the examples from Bangladesh show not only how these AI tools can be exploited in elections but also the difficulty in controlling their use in smaller markets that risk being overlooked by American tech companies.

Miraj Ahmed Chowdhury, the managing director of Bangladesh-based media research firm Digitally Right, said that while AI-generated disinformation was “still at an experimentation level” — with most created using conventional photo or video editing platforms — it showed how it could take off. “When they have technologies and tools like AI, which allow them to produce misinformation and disinformation at a mass scale, then you can imagine how big that threat is,” he said, adding that “a platform’s attention to a certain jurisdiction depends on how important it is as a market”.

Global focus on the ability to use AI to create misleading or false political content has risen over the past year with the proliferation of powerful tools such as OpenAI’s ChatGPT and AI video generators. Earlier this year, the US Republican National Committee released an attack ad using AI-generated images to depict a dystopian future under President Joe Biden. And YouTube suspended several accounts in Venezuela using AI-generated news anchors to promote disinformation favourable to President Nicolás Maduro’s regime.

In Bangladesh, the disinformation fuels a tense political climate ahead of polls in early January. Sheikh Hasina has cracked down on the opposition. Thousands of leaders and activists have been arrested in what critics warn amounts to an attempt to rig polls in her favour, prompting the US to publicly pressure her government to ensure free and fair elections.

In one video posted on X in September by BD Politico, an online news outlet, a news anchor for “World News” presented a studio segment — interspersed with images of rioting — in which he accused US diplomats of interfering in Bangladeshi elections and blamed them for political violence. The video was made using HeyGen, a Los Angeles-based AI video generator that allows customers to create clips fronted by AI avatars for as little as $24 a month. The same anchor, named “Edward”, can be seen in HeyGen’s promotional content as one of several avatars — which are themselves generated from real actors — available to the platform’s users. X, BD Politico and HeyGen did not respond to requests for comment.

Screenshot of an anti-opposition disinformation video posted on Facebook featuring a deepfake of exiled BNP leader Tarique Rahman

Other examples include anti-opposition deepfake videos posted on Meta’s Facebook, including one that falsely purports to be of exiled BNP leader Tarique Rahman suggesting the party “keep quiet” about Gaza to not displease the US.

The Tech Global Institute, a think-tank, and media non-profit Witness both concluded the fake video was likely AI-generated. AKM Wahiduzzaman, a BNP official, said that his party asked Meta to remove such content but “most of the time they don’t bother to reply”. Meta removed the video after being contacted by the Financial Times for comment.

In another deepfake video, created by utilising Tel Aviv-based AI video platform D-ID, the BNP’s youth wing leader Rashed Iqbal Khan is shown lying about his age in what the Tech Global Institute said was an effort to discredit him. D-ID did not respond to a request for comment.

A primary challenge in identifying such disinformation was the lack of reliable AI-detection tools, said Sabhanaz Rashid Diya, a Tech Global Institute founder and former Meta executive, with off-the-shelf products particularly ineffective at identifying non-English language content. She added that the solutions proposed by large tech platforms, which have focused on regulating AI in political adverts, will have limited effect in countries such as Bangladesh where ads are a smaller part of political communication. “The solutions that are coming out to address this onslaught of AI misinformation are very western-centric.” Tech platforms “are not taking this as seriously in other parts of the world”.

The problem is exacerbated by the lack of regulation or its selective enforcement by authorities. Bangladesh’s Cyber Security Act, for example, has been criticised for giving the government draconian powers to crack down on dissent online. Bangladesh’s internet regulator did not respond to a request for comment about what it is doing to control online misinformation.

A greater threat than the AI-generated content itself, Diya argued, was the prospect that politicians and others could use the mere possibility of deepfakes to discredit uncomfortable information.

In neighbouring India, for example, a politician responded to a leaked audio in which he allegedly discussed corruption in his party by alleging that it was fake — a claim subsequently dismissed by fact-checkers. “It’s easy for a politician to say, ‘This is deepfake’, or ‘This is AI-generated’, and sow a sense of confusion,” she said. “The challenge for the global south . . . is going to be how the idea of AI-generated content is being weaponised to erode what people believe to be true versus false.”

Additional reporting by Jyotsna Singh in New Delhi



nov23

Urgent general election warning as sinister deepfakes 'will play a part' in 2024 vote

EXCLUSIVE: The UK's former top civil servant has warned that Britain must be vigilant to the real threat of artificial intelligence powered technologies before next year's election.

https://www.express.co.uk/news/politics/1837015/deepfakes-uk-general-election-2024-simon-mcdonald-spt


nov23

Buenos Aires — In the final weeks of campaigning, Argentine president-elect Javier Milei published a fabricated image depicting his Peronist rival, Sergio Massa, as an old-fashioned communist in military garb, his hand raised aloft in salute.

The apparently AI-generated image drew about 3-million views when Milei posted it on a social media account, highlighting how the rival campaign teams used artificial intelligence (AI) technology to catch voters’ attention in a bid to sway the race.

“There were troubling signs of AI use” in the election, said Darrell West, a senior fellow at the Center for Technology Innovation at the Washington DC-based Brookings Institution.

“Campaigners used AI to deliver deceptive messages to voters, and this is a risk for any election process,” he said.

Right-wing libertarian Milei won Sunday’s run-off with 56% of the vote as he tapped into voter anger with the political mainstream, including Massa’s dominant Peronist party. But both sides turned to AI during the fractious election campaign.

Massa’s team distributed a series of stylised AI-generated images and videos through an unofficial Instagram account named “AI for the Homeland”.

In one, the centre-left economy minister was depicted as a Roman emperor. In others, he was shown as a boxer knocking out a rival, starring on a fake cover of New Yorker magazine and as a soldier in footage from the war film 1917.

Other AI-generated images set out to undermine and vilify Milei, portraying the wild-haired economist and his team as enraged zombies and pirates.

The use of increasingly accessible AI tech in political campaigning is a global trend, tech and rights specialists say, raising concerns about the potential implications for important upcoming elections in countries including the US, Indonesia and India in 2024.

nov23

Britain’s cybersecurity agency said Tuesday that artificial intelligence poses a threat to the country’s next national election, and cyberattacks by hostile countries and their proxies are proliferating and getting harder to track.

The National Cyber Security Center said “this year has seen the emergence of state-aligned actors as a new cyber threat to critical national infrastructure” such as power, water and internet networks.

The center — part of Britain’s cyberespionage agency, GCHQ — said in its annual review that the past year also has seen “the emergence of a new class of cyber adversary in the form of state-aligned actors, who are often sympathetic to Russia’s further invasion of Ukraine and are ideologically, rather than financially, motivated.”

https://apnews.com/article/uk-cyber-threats-ai-elections-b6482d3127ae524551e15887b3fdb01b




nov23

Meta, parent company of Instagram and Facebook, will require political advertisers around the world to disclose any use of artificial intelligence in their ads, starting next year, the company said Wednesday, as part of a broader move to limit so-called “deepfakes” and other digitally altered misleading content.

The rule is set to take effect next year, the company added, ahead of the 2024 US election and other future elections worldwide.

https://edition.cnn.com/2023/11/08/tech/meta-political-ads-ai-deepfakes/index.html



out23

Election Integrity in the Age of Generative AI: Fact vs. Fiction

  • Misinformation and falsehoods about election processes and integrity, spread before and since the 2020 election, have eroded trust in American democracy.
  • In the last year, great attention has been devoted to the ways that powerful new artificial intelligence tools might disrupt the economy and the wider society.
  • If misused, AI could add new dimensions to the challenges election officials already face. This article, the first of two, addresses specific examples. The second will discuss potential remedies.

https://www.governing.com/security/election-integrity-in-the-age-of-generative-ai-fact-vs-fiction

out23

Deepfake technology could be harnessed by hostile states to sway the forthcoming general election, the head of MI5 has warned.

Ken McCallum said he was concerned that the technology could be used to create ‘all kinds of dissension and chaos in our societies’.

Speaking at a security summit in California, the Director General warned that artificial intelligence was also being used to amplify terrorism, disseminate propaganda and teach people how to build bombs.

https://www.dailymail.co.uk/news/article-12645927/Deepfake-technology-UKs-enemies-election-chaos-MI5-chief.html


out23

The United Kingdom wants to lead the world on AI safety, but at home it is struggling with its most urgent threat.

Fears over the proliferation of AI-generated media, known as deepfakes, intensified this weekend as an audio clip appearing to show U.K. opposition leader Keir Starmer swearing at staffers went viral.

MPs from across the British political spectrum swiftly warned the clip was fake on Sunday. But by Monday afternoon it was still gathering views on X, formerly known as Twitter, and approaching nearly 1.5 million hits.

https://www.politico.eu/article/uk-keir-starmer-labour-party-deepfake-ai-politics-elections/


out23

Just two days before Slovakia’s elections, an audio recording was posted to Facebook. On it were two voices: allegedly, Michal Šimečka, who leads the liberal Progressive Slovakia party, and Monika Tódová from the daily newspaper Denník N. They appeared to be discussing how to rig the election, partly by buying votes from the country’s marginalized Roma minority.

Šimečka and Denník N immediately denounced the audio as fake. The fact-checking department of news agency AFP said the audio showed signs of being manipulated using AI. But the recording was posted during a 48-hour moratorium ahead of the polls opening, during which media outlets and politicians are supposed to stay silent. That meant, under Slovakia’s election rules, the post was difficult to widely debunk. And, because the post was audio, it exploited a loophole in Meta’s manipulated-media policy, which dictates only faked videos—where a person has been edited to say words they never said—go against its rules.

The election was a tight race between two frontrunners with opposing visions for Slovakia. On Sunday it was announced that the pro-NATO party, Progressive Slovakia, had lost to SMER, which campaigned to withdraw military support for its neighbor, Ukraine.

https://www.wired.co.uk/article/slovakia-election-deepfakes

https://www.bloomberg.com/news/newsletters/2023-10-04/deepfakes-in-slovakia-preview-how-ai-will-change-the-face-of-elections

    Wednesday, September 13, 2023

    WORK UPDATE sete23

    dec24

    Deepfakes Barely Impacted 2024 Elections Because They Aren’t Very Good, Research Finds

    AI is abundant, but people are good at recognizing when an image has been created using the technology.
    https://gizmodo.com/deepfakes-had-little-impact-on-2024-elections-because-they-arent-very-good-research-finds-2000543717

    dec24

    We argue that the current definition of deep fakes in the AI Act and the corresponding obligations are not sufficiently specified to tackle the challenges posed by deep fakes. By analyzing the life cycle of a digital photo from the camera sensor to the digital editing features, we find that: (1.) Deep fakes are ill-defined in the EU AI Act. The definition leaves too much scope for what a deep fake is. (2.) It is unclear how editing functions like Google’s “best take” feature can be considered as an exception to transparency obligations. (3.) The exception for substantially edited images raises questions about what constitutes substantial editing of content and whether or not this editing must be perceptible by a natural person.



    dec24

    A bipartisan bill to combat the spread of AI-generated, or deepfake, revenge pornography online unanimously passed the Senate Tuesday.

    The TAKE IT DOWN Act, co-authored by Sens. Ted Cruz, R-Texas, and Amy Klobuchar, D-Minn., would make it unlawful to knowingly publish non-consensual intimate imagery, including deepfake imagery, that depicts nude and explicit content in interstate commerce.

    https://broadbandbreakfast.com/bill-targeting-deepfake-porn-unanimously-passes-senate/


    dec24

    Georgia lawmakers are pondering a future in which AI bots must disclose their non-human status and deepfakes that sow confusion come with severe criminal penalties.

    Why it matters: The Georgia Senate Study Committee on Artificial Intelligence's report, released Tuesday, offers glimpses into lawmakers' views on the tech and how much (or little) they plan to regulate the coming wave of bots.

    Context: For the past seven months, the committee has heard testimony from academics, business leaders and policy experts about AI's effect on industries such as agriculture, entertainment, government and transparency.

    The big picture: The study committee stops short of proposing specific legislation, but generally recommends that future laws would "support AI regulation without stifling innovation."

    https://www.axios.com/local/atlanta/2024/12/04/georgia-ai-study-committee-deepfakes




    out24

    Keywords

    First Amendment; constitutional law; artificial intelligence; deepfakes; political campaigns; political advertisements; A.I.

    Abstract

    In recent years, artificial intelligence (AI) technology has developed rapidly. Accompanying this advancement in sophistication and accessibility are various societal benefits and risks. For example, political campaigns and political action committees have begun to use AI in advertisements to generate deepfakes of opposing candidates to influence voters. Deepfakes of political candidates interfere with voters’ ability to discern falsity from reality and make informed decisions at the ballot box. As a result, these deepfakes pose a threat to the integrity of elections and the existence of democracy. Despite the dangers of deepfakes, regulating false political speech raises significant First Amendment questions.

    This Note considers whether the Protect Elections from Deceptive AI Act, a proposed federal ban of AI-generated deepfakes portraying federal candidates in political advertisements, is constitutional. This Note concludes that the bill is constitutional under the First Amendment and that less speech restrictive alternatives fail to address the risks of deepfakes. Finally, this Note suggests revisions to narrow the bill’s application and ensure its apolitical enforcement.

    https://ir.lawnet.fordham.edu/flr/vol93/iss1/7/



    set24
    2024 Legis. Info. Bull. 38 (2024)
    Deepfakes and Artificial Intelligence: A New Legal Challenge at European and National Levels?

    https://heinonline.org/HOL/LandingPage?handle=hein.journals/lgveifn2024&div=16&id=&page=


    set24

    Half of U.S. states seek to crack down on AI in elections

    As the 2024 election cycle ramps up, at least 26 states have passed or are considering bills regulating the use of generative AI in election-related communications, a new analysis by Axios shows.

    Why it matters: The review lays bare a messy patchwork of rules around the use of genAI in politics, as experts increasingly sound the alarm on the evolving technology's power.
    https://www.axios.com/2024/09/22/ai-regulation-election-laws-map


    Sep 17, 2024

    Governor Newsom signs bills to combat deepfake election content

    https://www.gov.ca.gov/2024/09/17/governor-newsom-signs-bills-to-combat-deepfake-election-content/


    set24

    SACRAMENTO, Calif. (AP) — California lawmakers approved a host of proposals this week aiming to regulate the artificial intelligence industry, combat deepfakes and protect workers from exploitation by the rapidly evolving technology.

    The California Legislature, which is controlled by Democrats, is voting on hundreds of bills during its final week of the session to send to Gov. Gavin Newsom’s desk. Their deadline is Saturday.

    The Democratic governor has until Sept. 30 to sign the proposals, veto them or let them become law without his signature. Newsom signaled in July he will sign a proposal to crack down on election deepfakes but has not weighed in on other legislation.

    https://apnews.com/article/california-ai-election-deepfakes-safety-regulations-eb6bbc80e346744dbb250f931ebca9f3

    ago24

    Is a State AI Patchwork Next? AI Legislation at a State Level in 2024

    19 Aug 2024

    While Congress debates what, if any, actions are needed around artificial intelligence (AI), many states have passed or considered their own legislation. This did not start in 2024, but it certainly accelerated, with at least 40 states considering AI legislation. Such a trend is not unique to AI, but certain actions at a state level could be particularly disruptive to the development of this technology. In some cases, states could also show the many beneficial applications of the technology, well beyond popular services such as ChatGPT.

    An Overview of AI Legislation at a State Level in 2024

    As of August 2024, 31 states have passed some form of AI legislation. However, what AI legislation seeks to regulate varies widely among the states. For example, at least 22 have passed laws regulating the use of deepfake images, usually in the scope of sexual or election-related deepfakes, while 11 states have passed laws requiring that corporations disclose the use of AI or collection of data for AI model training in some contexts. States are also exploring how the government can use AI. Concerningly, Colorado has passed a significant regulatory regime for many aspects of AI, while California continues to consider such a regime.
    https://policycommons.net/artifacts/15470461/is-a-state-ai-patchwork-next-ai-legislation-at-a-state-level-in-2024/16363852/




    ago24 (FEDERAL)

    Legislation is finally starting to catch up to AI, with a new law allowing victims of non-consensual deepfake pornography to sue those responsible passing the US Senate in unanimous fashion.

    Deepfake technology has gotten a lot better since the boom in AI over the last few years. While some instances are fun and harmless, others have proven to be quite a problem, imitating celebrities to scam users or putting them in problematic situations.

    However, this new law could be a stepping stone to more AI and deepfake regulation, and all we can say is: it’s about time.
    The Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act is a piece of legislation in the US currently on the way to becoming a law. It states that, in the event of non-consensual deepfake pornography, the victim is able to sue the party responsible.
    https://tech.co/news/anti-deepfake-law-passes-us-senate

    ago24
    European Union Law Working Papers No. 95 The EU Regulatory Framework for Artificial Intelligence Karina Issina

    This thesis presents an analysis of the European Union's regulatory framework for artificial intelligence. As AI technologies become increasingly integral to various sectors, the necessity for a clear regulatory structure to address the associated ethical, legal, and societal challenges is becoming essential. The European Union has positioned itself as a global leader in AI governance, aiming to create a balanced environment that fosters innovation while safeguarding fundamental rights and ethical standards. The thesis examines the evolution of AI regulation within the EU, highlighting legislative instruments such as the General Data Protection Regulation and the recently adopted Artificial Intelligence Act. It explores the core principles underpinning the EU's regulatory approach, including transparency, accountability, human-centricity, and risk-based regulation, and how these principles are integrated in legislative measures. A thorough analysis of the regulatory texts and related policy documents is conducted to explore the EU's strategic objectives and regulatory mechanisms. The research identifies and discusses key regulatory themes, such as data protection, algorithmic transparency, bias mitigation, and the delineation of high-risk AI applications. Additionally, it investigates the implications of these regulations for AI development and deployment within the EU. Comparative analysis with non-EU regulatory frameworks is also incorporated to contextualize the EU's approach within the global AI governance ecosystem. Findings suggest that the EU's regulatory framework for AI is both comprehensive and pioneering, setting a high standard for ethical AI governance. However, the dynamic nature of AI technology necessitates ongoing regulatory adaptation and refinement.
The study concludes with recommendations for enhancing the regulatory framework to ensure it remains responsive to technological advancements and continues to uphold the EU's commitment to ethical and responsible AI.
    https://law.stanford.edu/wp-content/uploads/2024/07/EU-Law-WP-95-Issina.pdf

    ago24

    Senate Majority Leader Chuck Schumer (D-N.Y.) said in a recent interview he will continue to push for the regulation of artificial intelligence (AI) in elections.

    “Look, deepfakes are a serious, serious threat to this democracy. If people can no longer believe that the person they’re hearing speak is actually the person, this democracy has suffered — it will suffer — in ways that we have never seen before,” Schumer said last week in an interview with NBC News. “And if people just get turned off to democracy, Lord knows what will happen.”

    With fewer than 100 days left until the November election, Schumer spoke to the outlet about the impact AI could have on the election process. He said he hopes to bring more legislation about AI to the Senate floor in the coming months.

    His recent push for regulation of the emerging technology in elections comes after billionaire Elon Musk shared a fake video of Vice President Harris using AI to mimic her voice and spew insults about her campaign and President Biden — who dropped out of the 2024 race last month and subsequently endorsed the vice president.

    https://thehill.com/policy/technology/4813186-chuck-schumer-artificial-intelligence-regulation-congress-2024-election/

    ago24

    States are rapidly adopting laws to grapple with political deepfakes in lieu of comprehensive federal regulation of manipulated media related to elections, according to a new report from the Brennan Center for Justice.

    Nineteen states passed laws regulating deepfakes in elections, and 26 others considered related bills. But an NBC News review of the laws and a new analysis from the Brennan Center, a nonpartisan law and policy institute affiliated with New York University School of Law, finds that most states’ deepfake laws are so broad that they would face tough court challenges, while a few are so narrow that they leave plenty of options for bad actors to use the technology to deceive voters.

    “It’s actually quite incredible how many of these laws have passed,” said Larry Norden, vice president of the Brennan Center’s Elections and Government Program and the author of the analysis released Tuesday.

    The study found that states introduced 151 different bills this year that addressed deepfakes and other deceptive media meant to fool voters, about a quarter of all state AI laws introduced.

    “That’s not something you generally see, and I think it is a reflection of how quickly this technology has evolved and how concerned legislators are that it could impact political campaigns,” he said.


    https://www.nbcnews.com/tech/tech-news/states-are-rapidly-adopting-laws-regulating-political-deepfakes-rcna164578

    jul24

    A bipartisan group of U.S. senators has introduced legislation intended to counter the rise of deepfakes and protect creators from theft through generative artificial intelligence.

    "Artificial intelligence has given bad actors the ability to create deepfakes of every individual, including those

    The bill, called the Content Origin Protection and Integrity from Edited and Deepfaked Media Act, or COPIED Act, is co-sponsored by Blackburn, Maria Cantwell (D-Wash.), and Martin Heinrich (D-N.M.), who is also a member of the Senate AI Working Group.
    https://seekingalpha.com/news/4124026-us-senators-introduce-bipartisan-bill-to-counter-gen-ai-deepfakes

    jun24

    Federal legislation to combat deepfakes

    Currently, there is no comprehensive enacted federal legislation in the United States that bans or even regulates deepfakes. However, the Identifying Outputs of Generative Adversarial Networks Act requires the director of the National Science Foundation to support research for the development and measurement of standards needed to generate GAN outputs and any other comparable techniques developed in the future.

    Congress is considering additional legislation that, if passed, would regulate the creation, disclosure, and dissemination of deepfakes. Some of this legislation includes the Deepfake Report Act of 2019, which requires the Science and Technology directorate in the U.S. Department of Homeland Security to report at specified intervals on the state of digital content forgery technology; the DEEPFAKES Accountability Act, which aims to protect national security against the threats posed by deepfake technology and to provide legal recourse to victims of harmful deepfakes; the DEFIANCE Act of 2024, which would improve rights to relief for individuals affected by non-consensual activities involving intimate digital forgeries and for other purposes; and the Protecting Consumers from Deceptive AI Act, which requires the National Institute of Standards and Technology to establish task forces to facilitate and inform the development of technical standards and guidelines relating to the identification of content created by GenAI, to ensure that audio or visual content created or substantially modified by GenAI includes a disclosure acknowledging the GenAI origin of such content, and for other purposes.

    States pursue deepfake legislation

    In addition, several states have enacted legislation to regulate deepfakes, including:

    https://www.thomsonreuters.com/en-us/posts/government/deepfakes-federal-state-regulation/


    jun24

    MIDDLETON, Wis., June 27, 2024 /PRNewswire/ -- Deepfakes, an offshoot of Artificial Intelligence (AI), have become a pressing social and political issue that an increasing number of state lawmakers are trying to address through legislation. The number of bills in this space has grown from an average of 28 per year from 2019-2023, to 294 bills introduced to date in 2024.

    That's why Ballotpedia, the nation's premier source for unbiased information on elections, politics, and policy, has created and launched a comprehensive AI Deepfake Legislation Tracker and Ballotpedia's State of Deepfake Legislation 2024 Annual Report, available here.

    The goal of Ballotpedia's newest tracker is simple: to let people know what's happening—in real time—with deepfake legislation in all 50 states. The tracker provides historical context on deepfake legislation going back to 2019 and covers these topics:

    https://www.prnewswire.com/news-releases/deepfake-related-bills-have-increased-950-over-the-previous-five-year-average-302183982.html


    jun24

    New legislation in Michigan would penalize people for using technology to create and distribute deepfake pornography. 

    "It happens a lot to younger women, girls, students as a kind of bullying technique," said state Rep. Penelope Tsernoglou. "It causes a lot of mental distress; in some cases, financial issues, reputational harm, and even some really severe cases could lead to self-harm and suicide."

    In an instance of bipartisanship, the package of bills passed the House earlier this week with a wide margin of 108-2. The bills would make it illegal to create and distribute digitally altered pictures or videos that falsely show sexual activity. 
    https://www.cbsnews.com/detroit/news/deepfake-legislation-passes-michigan-house-by-wide-margin/

    apr24
    On the way to deep fake democracy? Deep fakes in election campaigns in 2023
    Open access
    Published: 26 April 2024
    https://link.springer.com/article/10.1057/s41304-024-00482-9

    apr24

    Two of the biggest deepfake pornography websites have now started blocking people trying to access them from the United Kingdom. The move comes days after the UK government announced plans for a new law that will make creating nonconsensual deepfakes a criminal offense.

    Nonconsensual deepfake pornography websites and apps that “strip” clothes off of photos have been growing at an alarming rate—causing untold harm to the thousands of women they are used to target.

    Clare McGlynn, a professor of law at Durham University, says the move is a “hugely significant moment” in the fight against deepfake abuse. “This ends the easy access and the normalization of deepfake sexual abuse material,” McGlynn tells WIRED.

    Since deepfake technology first emerged in December 2017, it has consistently been used to create nonconsensual sexual images of women—swapping their faces into pornographic videos or allowing new “nude” images to be generated. As the technology has improved and become easier to access, hundreds of websites and apps have been created. Most recently, schoolchildren have been caught creating nudes of classmates.

    https://www.wired.com/story/the-biggest-deepfake-porn-website-is-now-blocked-in-the-uk/


    ap24

    Indiana, Texas and Virginia in the past few years have enacted broad laws with penalties of up to a year in jail plus fines for anyone found guilty of sharing deepfake pornography. In Hawaii, the punishment is up to five years in prison.

    Many states are combatting deepfake porn by adding to existing laws. Several, including Indiana, New York and Virginia, have enacted laws that add deepfakes to existing prohibitions on so-called revenge porn, or the posting of sexual images of a former partner without their consent. Georgia and Hawaii have targeted deepfake porn by updating their privacy laws.

    Other states, such as Florida, South Dakota and Washington, have enacted laws that update the definition of child pornography to include deepfakes. Washington’s law, which was signed by Democratic Gov. Jay Inslee in March, makes it illegal to be in possession of a “fabricated depiction of an identifiable minor” engaging in a sexually explicit act — a crime punishable by up to a year in jail.

    https://missouriindependent.com/2024/04/16/states-race-to-restrict-deepfake-porn-as-it-becomes-easier-to-create/



    apr24 UNITED KINGDOM

    The creation of sexually explicit "deepfake" images is to be made a criminal offence in England and Wales under a new law, the government says.

    Under the legislation, anyone making explicit images of an adult without their consent will face a criminal record and unlimited fine.

    It will apply regardless of whether the creator of an image intended to share it, the Ministry of Justice (MoJ) said.

    And if the image is then shared more widely, they could face jail.

    A deepfake is an image or video that has been digitally altered with the help of Artificial Intelligence (AI) to replace the face of one person with the face of another.

    https://www.bbc.com/news/uk-68823042



    apr24 MISSOURI

    It seems like it took a while, but one of the Missouri bills criminalizing artificial intelligence deepfakes made it through the House and it’s on to the Senate. About time.

    I wrote about H.B. 2628 weeks ago, and questioned whether its financial and criminal penalties — its teeth — were strong enough.

    This bill focuses on those who create false political communication, like when a robocall on Election Day featured a fake President Joe Biden telling people to stay away from the New Hampshire polls. A second Missouri House bill, H.B. 2573, also tackles deepfakes, but focuses on creating intimate digital depictions of people — porn — without their consent.

    It is still winding its way through the House. That bill, called the Taylor Swift Act, likely will get even more attention because of its celebrity name and lurid photo and video clones.

    But H.B. 2628 may be even more important because digital fakers have the potential to change the course of an election and, dare I say, democracy itself. It passed, overwhelmingly but not unanimously — 133 to 5, with 24 absent or voting present. It was sent on to the Senate floor where it had its first reading.

    https://ca.finance.yahoo.com/news/missouri-anti-deepfake-legislation-good-100700062.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAHFVDjMw8auK-p7Cz-o9pfgphkACHGyxpSL2oH3LZFHknvARRstLWDkJ2cSZ4rXvelzrYQuSakE-3z7iQQjzkMmienY7LfR8AvCQkt5p_XMUd1MayU1yDy-1ukpkNuZZaeoKyMvoJkjhtG2AckFPousHDLjvLcBYdBoFvtmd82wZ

    MAR24 CANADA

    Proposed amendments to the Canada Elections Act

    Backgrounder

    In Canada, the strength and resilience of our democracy is enhanced by a long tradition of regular evaluation and improvements to the Canada Elections Act (CEA). The CEA is the fundamental legislative framework that regulates Canada’s electoral process. It is independently administered by the Chief Electoral Officer and Elections Canada, with compliance and enforcement by the Commissioner of Canada Elections. The CEA is renowned for trailblazing political financing rules, strict spending limits, and robust reporting requirements intended to further transparency, fairness, and participation in Canada’s federal elections.

    Recent experiences and lessons from the 2019 and 2021 general elections highlighted opportunities to further remove barriers to voting and encourage voter participation, protect personal information, and strengthen electoral safeguards. The amendments to the CEA would advance these key priorities, reinforcing trust in federal elections, its participants, and its results.

    https://www.canada.ca/en/democratic-institutions/news/2024/03/proposed-amendments-to-the-canada-elections-act.html


    mar24
    New Hampshire

    The New Hampshire state House advanced a bill Thursday that would require political ads that use deceptive artificial intelligence (AI) to disclose use of the technology, adding to growing momentum in states to add AI regulations for election protection.

    The bill passed without debate in the state House and will advance to the state Senate.

    The bill advanced after New Hampshire voters received robocalls in January, ahead of the state’s primary elections, that included an AI-generated voice depicting President Biden. Steve Kramer, a veteran Democratic operative, admitted to being behind the fake robocalls and said he did so to draw attention to the dangers of AI in politics, NBC News reported in February.

    https://thehill.com/policy/technology/4563917-new-hampshire-house-passes-ai-election-rules-after-biden-deepfake/


    mar24

    A new Washington state law will make it illegal to share fake pornography that appears to depict real people having sex.

    Why it matters: Advancements in artificial intelligence have made it easy to use a single photograph to impose someone's features on realistic-looking "deepfake" porn.

    • Before now, however, state law hasn't explicitly banned these kinds of digitally manipulated images.

    Zoom in: The new Washington law, which Gov. Jay Inslee signed last week, will make it a gross misdemeanor to knowingly share fabricated intimate images of people without their consent.

    • People who create and share deepfake pornographic images of minors can be charged with felonies. So can those who share deepfake porn of adults more than once.
    • Victims will also be able to file civil lawsuits seeking damages.
    What they're saying: "With this law, survivors of intimate and fabricated image-based violence have a path to justice," said state Rep. Tina Orwall (D-Des Moines), who sponsored the legislation, in a news release.

    https://www.axios.com/local/seattle/2024/03/19/new-washington-law-criminalizes-deepfake-porn

    mar24

    Washington, Indiana ban AI porn images of real people
    Indiana, Utah, and New Mexico targeting AI in elections


    Half of the US population is now covered under state bans on nonconsensual explicit images made with artificial intelligence as part of a broader effort against AI-enabled abuses amid congressional inaction.

    Washington Gov. Jay Inslee (D) on March 14 signed legislation (HB 1999) that allows adult victims to sue the creators of such content used with the emerging technology.

    That followed Indiana Gov. Eric Holcomb (R) signing into law a similar bill (HB 1047) on March 12 that includes penalties such as misdemeanor charges for a first offense. Adult victims do not have a private right of action under the measure.

    The laws join an emerging patchwork of state-level restrictions on the use of artificial intelligence as federal lawmakers continue mulling their own approach to potential abuses by the technology. Ten states had such laws in place at the beginning of 2024: California, Hawaii, Illinois, Minnesota, New York, South Dakota, Texas, Virginia, Florida, and Georgia.

    https://news.bgov.com/states-of-play/more-states-ban-ai-deepfakes-months-after-taylor-swift-uproar?source=newsletter&item=body-link&region=text-section

    mar24

    The European Union is enacting the most comprehensive guardrails on the fast-developing world of artificial intelligence after the bloc’s parliament passed the AI Act on Wednesday.

    The landmark set of rules, in the absence of any legislation from the US, could set the tone for how AI is governed in the Western world. But the legislation’s passage comes as companies worry the law goes too far and digital watchdogs say it doesn’t go far enough.

    “Europe is now a global standard-setter in trustworthy AI,” Internal Market Commissioner Thierry Breton said in a statement.

    Thierry Breton, internal market commissioner for the European Union
    Photographer: Angel Garcia/Bloomberg

    The AI Act becomes law after member states sign off, which is usually a formality, and once it’s published in the EU’s Official Journal.

    The new law is intended to address worries about bias, privacy and other risks from the rapidly evolving technology. The legislation would ban the use of AI for detecting emotions in workplaces and schools, as well as limit how it can be used in high-stakes situations like sorting job applications. It would also place the first restrictions on generative AI tools, which captured the world’s attention last year with the popularity of ChatGPT.

    However, the bill has sparked concerns in the three months since officials reached a breakthrough provisional agreement after a marathon negotiation session that lasted more than 35 hours.

    As talks reached the final stretch last year, the French and German governments pushed back against some of the strictest ideas for regulating generative AI, arguing that the rules will hurt European startups like France’s Mistral AI and Germany’s Aleph Alpha GmbH. Civil society groups like Corporate Europe Observatory (CEO) raised concerns about the influence that Big Tech and European companies had in shaping the final text.

    “This one-sided influence meant that ‘general purpose AI,’ was largely exempted from the rules and only required to comply with a few transparency obligations,” watchdogs including CEO and LobbyControl wrote in a statement, referring to AI systems capable of performing a wider range of tasks.

    A recent announcement that Mistral had partnered with Microsoft Corp. raised concerns from some lawmakers. Kai Zenner, a parliamentary assistant key in the writing of the act and now an adviser to the United Nations on AI policy, wrote that the move was strategically smart and “maybe even necessary” for the French startup, but said “the EU legislator got played again.”

    Brando Benifei, a lawmaker and leading author of the act, said the results speak for themselves. “The legislation is clearly defining the needs for safety of most powerful models with clear criteria, and so it’s clear that we stood on our feet,” he said Wednesday in a news conference.

    US and European companies have also raised concerns that the law will limit the bloc’s competitiveness.

    “With a limited digital tech industry and relatively low investment compared with industry giants like the United States and China, the EU’s ambitions of technological sovereignty and AI leadership face considerable hurdles,” wrote Raluca Csernatoni, a research fellow at the Carnegie Europe think tank.

    Lawmakers during Tuesday’s debate acknowledged that there is still significant work ahead. The EU is in the process of setting up its AI Office, an independent body within the European Commission. In practice, the office will be the key enforcer, with the ability to request information from companies developing generative AI and possibly ban a system from operating in the bloc.

    “The rules we have passed in this mandate to govern the digital domain — not just the AI Act — are truly historical, pioneering,” said Dragos Tudorache, a European Parliament member who was also one of the leading authors. “But making them all work in harmony with the desired eff

    https://news.bloomberglaw.com/artificial-intelligence/eu-embraces-new-ai-rules-despite-doubts-it-got-the-right-balance?source=breaking-news&item=headline&region=featured-story&login=blaw


    fev24
    GEORGIA

    The Georgia state House voted Thursday to crack down on deepfake artificial intelligence (AI) videos ahead of this year’s elections.

    The House voted 148-22 to approve the legislation, which attempts to stop the spread of misinformation from deceptive video impersonating candidates.

    The legislation, H.B. 986, would make it a felony to publish a deepfake within 90 days of an election with the intention of misleading or confusing voters about a candidate or their chance of being elected.

    The bill would allow the attorney general to have jurisdiction over the crimes and allow the state election board to publish the findings of investigations.

    One of the sponsors of the bill, state Rep. Brad Thomas (R), celebrated the vote on social media.

    “I am thrilled to inform you that House Bill 986 has successfully passed the House! This is a significant step towards upholding the integrity and impartiality of our electoral process by making the use of AI to interfere with elections a criminal offense,” Thomas posted on X, the platform formerly known as Twitter.
    https://thehill.com/homenews/state-watch/4485098-georgia-house-approves-crackdown-on-deepfake-ai-videos-before-elections/

    fev24
    Washington
    In Washington, a new bill, HB 1999, is making waves in the fight against deepfakes. The legislation, introduced to address the alarming issue of sexually explicit content involving minors, aims to close legal loopholes and provide legal recourse for victims of deepfake abuse.
    https://bnnbreaking.com/tech/new-bill-hb-1999-takes-a-stand-against-deepfakes-in-washington

    fev24
    NEW MEXICO

    A proposal to require public disclosure whenever a political campaign in the state uses false information generated by artificial intelligence in a campaign advertisement gained approval from the New Mexico House of Representatives on Monday night.

    After about an hour of debate, the House voted 38-28 to pass House Bill 182, which would amend the state’s Campaign Reporting Act to require political campaigns to disclose whenever they use artificial intelligence in their ads, and would make it a crime to use artificially-generated ads to intentionally deceive voters.
    https://sourcenm.com/2024/02/13/deepfake-disclosure-bill-passes-nm-house/


    fev24

    Large tech platforms including TikTok, X and Facebook will soon have to identify AI-generated content in order to protect the upcoming European election from disinformation.

    "We know that this electoral period that's opening up in the European Union is going to be targeted either via hybrid attacks or foreign interference of all kinds," Internal Market Commissioner Thierry Breton told European lawmakers in Strasbourg on Wednesday. "We can't have half-baked measures."

    Breton didn't say when exactly companies will be compelled to label manipulated content under the EU's content moderation law, the Digital Services Act (DSA). Breton oversees the Commission branch enforcing the DSA on the largest European social media and video platforms, including Facebook, Instagram and YouTube.
    https://www.politico.eu/article/eu-big-tech-help-deepfake-proof-election-2024/


    jan24

    A bipartisan group of three senators is looking to give victims of sexually explicit deepfake images a way to hold their creators and distributors responsible.

    Sens. Dick Durbin, D-Ill.; Lindsey Graham, R-S.C.; and Josh Hawley, R-Mo., plan to introduce the Disrupt Explicit Forged Images and Non-Consensual Edits Act on Tuesday, a day ahead of a Senate Judiciary Committee hearing on internet safety with CEOs from Meta, X, Snap and other companies. Durbin chairs the panel, while Graham is the committee’s top Republican.

    Victims would be able to sue people involved in the creation and distribution of such images if the person knew or recklessly disregarded that the victim did not consent to the material. The bill would classify such material as a “digital forgery” and create a 10-year statute of limitations. 

    https://www.nbcnews.com/tech/tech-news/deepfake-bill-open-door-victims-sue-creators-rcna136434


    jan24

    South Korea

    South Korea's special parliamentary committee on Tuesday (Jan 30) passed a revision to the Public Official Election Act which called for a ban on political campaign videos that use AI-generated deepfakes in the election season.

    https://www.wionews.com/world/south-korea-imposes-90-day-ban-on-deepfake-political-campaign-videos-685152


    jan24

    As artificial intelligence starts to reshape society in ways predictable and not, some of Colorado’s highest-profile federal lawmakers are trying to establish guardrails without shutting down the technology altogether.

    U.S. Rep. Ken Buck, a Windsor Republican, is cosponsoring legislation with California Democrat Ted Lieu to create a national commission focused on regulating the technology and another bill to keep AI from unilaterally firing nuclear weapons.

    Sen. Michael Bennet, a Democrat, has publicly urged the leader of his caucus, Majority Leader Chuck Schumer, to carefully consider the path forward on regulating AI — while warning about the lessons learned from social media’s organic development. Sen. John Hickenlooper, also a Democrat, chaired a subcommittee hearing last September on the matter, too.

    https://www.denverpost.com/2024/01/28/artificial-intelligence-congress-regulation-colorado-michael-bennet-ken-buck-elections-deepfakes/

    jan24

    US politicians have called for new laws to criminalise the creation of deepfake images, after explicit faked photos of Taylor Swift were viewed millions of times online.

    The images were posted on social media sites, including X and Telegram.

    US Representative Joe Morelle called the spread of the pictures "appalling".

    In a statement, X said it was "actively removing" the images and taking "appropriate actions" against the accounts involved in spreading them.

    It added: "We're closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed."

    While many of the images appear to have been removed at the time of publication, one photo of Swift was viewed a reported 47 million times before being taken down.

    The name "Taylor Swift" is no longer searchable on X, alongside terms such as "Taylor Swift AI" and "Taylor AI".

    https://www.bbc.com/news/technology-68110476



    jan24 Daily Mail

    The answer lies largely in the lack of laws to prosecute those who make such content.

    There is currently no federal legislation against the conduct and only six states – New York, Minnesota, Texas, Hawaii, Virginia and Georgia – have passed legislation which criminalizes it.

    In Texas, a bill was enacted in September 2023 which made it an offense to create or share deepfake images without permission which 'depict the person with the person's intimate parts exposed or engaged in sexual conduct'.

    The offense is a Class A misdemeanor and punishments include up to a year in prison and fines up to $4,000.

    In Minnesota, the crime can carry a three-year sentence and fines up to $5,000.

    Several of these laws were introduced following earlier legislation which outlawed the use of deepfakes to influence an election, such as through the creation of fake images or videos which portray a politician or public official.

    A handful of other states, including California and Illinois, don't have laws against the act but instead allow deepfake victims to sue perpetrators. Critics have said this doesn't go far enough and that, in many cases, the creator is unknown.

     

    At the federal level, Joe Biden signed an executive order in October which called for a ban on the use of generative AI to make child abuse images or nonconsensual 'intimate images' of real people. But this was purely symbolic and does not create a means to punish makers.

    The finding that 415,000 deepfake images were posted online last year was made by Genevieve Oh, a researcher who analyzed the top ten websites which host such content.

    Oh also found 143,000 deepfake videos were uploaded in 2023 – more than during the previous six years combined. The videos, published across 40 different websites which host fake videos, were viewed more than 4.2 billion times.

    Outside of states where laws which criminalize the conduct exist, victims and prosecutors must rely on existing legislation which can be used to charge offenders.

    These include laws around cyberbullying, extortion and harassment. Victims who are blackmailed or subject to repeated abuse can attempt to use these laws against perpetrators who weaponize deepfake images.

    But they do not prohibit the fundamental act of creating a hyper-realistic, explicit photo of a person, then sharing it with the world, without their consent.

    A 14-year-old girl from New Jersey who was depicted in a pornographic deepfake image created by one of her male classmates is now leading a campaign to have a federal law passed.

    Francesca Mani and her mom, Dorota Mani, recently met with members of Congress at Capitol Hill to push for laws targeting perpetrators.

    https://www.dailymail.co.uk/news/article-13007753/deepfake-porn-laws-internet.html



    jan24

    COLUMBUS, Ohio (WCMH) – As AI becomes more popular, there is a rising concern about “deepfakes.” A deepfake is a “convincing image, video or audio hoax,” created using AI that impersonates someone or makes up an event that never happened.

    At the Ohio Statehouse, Representatives Brett Hillyer (R-Uhrichsville) and Adam Mathews (R-Lebanon) just introduced House Bill 367 to address issues that may arise with the new technology.

    “In my day to day I see how important people’s name image and likeness and the copyright there is within it,” Mathews said.

    Mathews said the intent of the bill is to make sure everyone, not just high-profile people, is protected. Right now, Mathews said, functionally the only way one can go after someone for using their name, image or likeness (NIL) is if they are using it to claim you endorse a product or to defraud someone.

    “I wanted to put every single Ohioan at the same level as our most famous residents from Joe Burrow to Ryan Day,” Mathews said. “There are a lot of things people can do with your name, image and likeness that could be harmful to your psyche or reputation.”

    The bill makes it so any Ohioan can go after someone who uses their NIL for a deepfake, with fines as high as $15,000 for the creation of a malicious deepfake; the court can also order that a malicious deepfake be taken down.

    https://news.yahoo.com/bill-introduced-statehouse-protect-ohioans-230000251.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLnB0Lw&guce_referrer_sig=AQAAAENj6zcIxbhxJAo2pJ_AmmsXSVymCSWCxsInYs7yVnzFtgMmgXTbh3aCW4mrWfEnG8C_JQ_juc-EH5259FdOiw8tDTkLfe1UpxXxtl93u5IpvpnNv15CircmHtj6i1Rbz1b6mkqAkaYG6pZpGEMIVKs2KtScO62yGmLZOkvbJSmb


    jan24

    A pair of U.S. House of Representative members have introduced a bill intended to restrict unauthorized fraudulent digital replicas of people.

    The bulk of the motivation behind the legislation, based on the wording of the bill, is the protection of actors, people of notoriety and girls and women defamed through fraudulent porn made with their face template.

    Curiously, the very real threat of using deepfakes to defraud just about everyone else in the nation is not mentioned. Those risks are growing and could result in uncountable financial damages as organizations rely on voice and face biometrics for ID verification.

    While the representatives, María Elvira Salazar (R-Fla.) and Madeleine Dean (D-Penn.), do not mention the global singer/songwriter Taylor Swift in their press release, it cannot have escaped them that she’s been victimized, too.

    https://www.biometricupdate.com/202401/us-lawmakers-attack-categories-of-deepfake-but-miss-everyday-fraud


    jan24

    House lawmakers introduced legislation to try to curb the unauthorized use of deepfakes and voice clones.

    The legislation, the No AI Fraud Act, is sponsored by Rep. Maria Salazar (R-FL), Rep. Madeleine Dean (D-PA), Rep. Nathaniel Moran (R-TX), Rep. Joe Morelle (D-NY) and Rep. Rob Wittman (R-VA). The legislation would give individuals more control over the use of their identifying characteristics in digital replicas. It affirms that every person has a “property right in their own likeness and voice,” a right that does not expire upon a person’s death and can be transferred to heirs or designees for a period of 10 years after the individual’s death. It sets damages at $50,000 for each unauthorized violation by a personalized cloning service, or the actual damages suffered plus profits from the use. Damages are set at $5,000 per violation for unauthorized publication, performance, distribution or transmission of a digital voice replica or digital depiction, or the actual damages.

    https://deadline.com/2024/01/ai-legislation-deepfakes-house-of-representatives-1235708983/


    jan24

    Illinois

    Lawmakers this spring approved a new protection for victims of “deepfake porn.” Starting in 2024, people who are falsely depicted in sexually explicit images or videos will be able to sue the creator of that material.

    The law is an amendment to the state’s existing protections for victims of “revenge porn,” which went into effect in 2015.

    In recent years, deepfakes – images and videos that falsely depict someone – have become more sophisticated with the advent of more readily available artificial intelligence tools. Women are disproportionately the subject of deepfake porn.

    Some sponsors of the legislation, notably chief sponsor Rep. Jennifer Gong-Gershowitz, D-Glenview, have indicated interest in further regulating the use of artificial intelligence.

    https://chicagocrusader.com/more-than-300-statutes-became-law-in-the-new-year/

    dez23

    Prohibition on book bans, right to sue for ‘deepfake porn’ among new laws taking effect Jan. 1

    https://www.nprillinois.org/illinois/2023-12-26/prohibition-on-book-bans-right-to-sue-for-deepfake-porn-among-new-laws-taking-effect-jan-1

    out23 NEW YORK NEW LAW

    New York Bans Deepfake Revenge Porn Distribution as AI Use Grows

    New York Gov. Kathy Hochul (D) on Friday signed into law legislation banning the dissemination of pornographic images made with artificial intelligence without the consent of the subject.

    https://news.bloomberglaw.com/in-house-counsel/n-y-outlaws-unlawful-publication-of-deepfake-revenge-porn

    https://hudsonvalleyone.com/2023/10/15/deepfake-porn-in-new-york-state-means-jail-time/



    out23 PROPOSAL

    Bill would ban 'deepfake' pornography in Wisconsin


    https://eu.jsonline.com/story/news/politics/2023/10/02/proposed-legislation-targets-deepfake-pornography/71033726007/

    set23 NY Bill

    Assemblymember Amy Paulin’s (D-Scarsdale) legislation, which makes the nonconsensual use of “deepfake” images disseminated in online communities a criminal offense, has been signed into law by Governor Hochul.

    “Deepfakes” are fake or altered images or videos created through the use of artificial intelligence. Many of these images and videos map a face onto a pornographic image or video. Some create a pornographic image or video out of a still photograph. These pornographic images and films are sometimes posted online without the consent of those in them – often with devastating consequences to those portrayed in the images.

    https://talkofthesound.com/2023/09/25/amy-paulin-dissemination-of-deepfake-images-now-a-crime-in-new-york/

    Rep. Yvette Clarke (D-N.Y.) told ABC News that her DEEPFAKES Accountability Act would provide prosecutors, regulators and particularly victims with resources, like detection technology, that Clarke believes they need to stand up against the threat posed by nefarious deepfakes.

    https://abcnews.go.com/Politics/bill-criminalize-extremely-harmful-online-deepfakes/story?id=103286802



    set23

    Multimedia that have either been created (fully synthetic) or edited (partially synthetic) using some form of machine/deep learning (artificial intelligence) are referred to as deepfakes.

    'Contextualizing Deepfake Threats to Organizations' PDF (archived copy)

    https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF

    set23

    Virginia revenge porn law updated to include deepfakes


    https://eu.usatoday.com/videos/tech/2023/09/12/virginia-revenge-porn-law-updated-include-deepfakes/1637140001/

     


    set23

    Artificial intelligence’s ability to generate deepfake content that easily fools humans poses a genuine threat to financial markets, the head of the Securities and Exchange Commission warned.

    https://news.bloomberglaw.com/artificial-intelligence/deepfakes-pose-real-risk-to-financial-markets-secs-gensler?source=newsletter&item=body-link&region=text-section