Wednesday, September 13, 2023

WORK UPDATE sep23

dec24

Deepfakes Barely Impacted 2024 Elections Because They Aren’t Very Good, Research Finds

AI is abundant, but people are good at recognizing when an image has been created using the technology.
https://gizmodo.com/deepfakes-had-little-impact-on-2024-elections-because-they-arent-very-good-research-finds-2000543717

dec24

We argue that the current definition of deep fakes in the AI Act and the corresponding obligations are not sufficiently specified to tackle the challenges posed by deep fakes. By analyzing the life cycle of a digital photo from the camera sensor to the digital editing features, we find that: (1.) Deep fakes are ill-defined in the EU AI Act. The definition leaves too much scope for what a deep fake is. (2.) It is unclear how editing functions like Google’s “best take” feature can be considered as an exception to transparency obligations. (3.) The exception for substantially edited images raises questions about what constitutes substantial editing of content and whether or not this editing must be perceptible by a natural person.



dec24

A bipartisan bill to combat the spread of AI-generated, or deepfake, revenge pornography online unanimously passed the Senate Tuesday.

The TAKE IT DOWN Act, co-authored by Sens. Ted Cruz, R-Texas, and Amy Klobuchar, D-Minn., would make it unlawful to knowingly publish non-consensual intimate imagery, including deepfake imagery, that depicts nude and explicit content in interstate commerce.

https://broadbandbreakfast.com/bill-targeting-deepfake-porn-unanimously-passes-senate/


dec24

Georgia lawmakers are pondering a future in which AI bots must disclose their non-human status and deepfakes that sow confusion come with severe criminal penalties.

Why it matters: The Georgia Senate Study Committee on Artificial Intelligence's report, released Tuesday, offers glimpses into lawmakers' views on the tech and how much (or little) they plan to regulate the coming wave of bots.

Context: For the past seven months, the committee has heard testimony from academics, business leaders and policy experts about AI's effect on industries such as agriculture, entertainment, government and transparency.

The big picture: The study committee stops short of proposing specific legislation, but generally recommends that future laws would "support AI regulation without stifling innovation."

https://www.axios.com/local/atlanta/2024/12/04/georgia-ai-study-committee-deepfakes




oct24

Keywords

First Amendment; constitutional law; artificial intelligence; deepfakes; political campaigns; political advertisements; A.I.

Abstract

In recent years, artificial intelligence (AI) technology has developed rapidly. Accompanying this advancement in sophistication and accessibility are various societal benefits and risks. For example, political campaigns and political action committees have begun to use AI in advertisements to generate deepfakes of opposing candidates to influence voters. Deepfakes of political candidates interfere with voters’ ability to discern falsity from reality and make informed decisions at the ballot box. As a result, these deepfakes pose a threat to the integrity of elections and the existence of democracy. Despite the dangers of deepfakes, regulating false political speech raises significant First Amendment questions.

This Note considers whether the Protect Elections from Deceptive AI Act, a proposed federal ban of AI-generated deepfakes portraying federal candidates in political advertisements, is constitutional. This Note concludes that the bill is constitutional under the First Amendment and that less speech restrictive alternatives fail to address the risks of deepfakes. Finally, this Note suggests revisions to narrow the bill’s application and ensure its apolitical enforcement.

https://ir.lawnet.fordham.edu/flr/vol93/iss1/7/



sep24
2024 Legis. Info. Bull. 38 (2024)
Deepfakes and Artificial Intelligence: A New Legal Challenge at European and National Levels?

https://heinonline.org/HOL/LandingPage?handle=hein.journals/lgveifn2024&div=16&id=&page=


sep24

Half of U.S. states seek to crack down on AI in elections

As the 2024 election cycle ramps up, at least 26 states have passed or are considering bills regulating the use of generative AI in election-related communications, a new analysis by Axios shows.

Why it matters: The review lays bare a messy patchwork of rules around the use of genAI in politics, as experts increasingly sound the alarm on the evolving technology's power
https://www.axios.com/2024/09/22/ai-regulation-election-laws-map


Sep 17, 2024

Governor Newsom signs bills to combat deepfake election content

https://www.gov.ca.gov/2024/09/17/governor-newsom-signs-bills-to-combat-deepfake-election-content/


sep24

SACRAMENTO, Calif. (AP) — California lawmakers approved a host of proposals this week aiming to regulate the artificial intelligence industry, combat deepfakes and protect workers from exploitation by the rapidly evolving technology.

The California Legislature, which is controlled by Democrats, is voting on hundreds of bills during its final week of the session to send to Gov. Gavin Newsom’s desk. Their deadline is Saturday.

The Democratic governor has until Sept. 30 to sign the proposals, veto them or let them become law without his signature. Newsom signaled in July he will sign a proposal to crack down on election deepfakes but has not weighed in on other legislation.

https://apnews.com/article/california-ai-election-deepfakes-safety-regulations-eb6bbc80e346744dbb250f931ebca9f3

aug24

Is a State AI Patchwork Next? AI Legislation at a State Level in 2024

19 Aug 2024

While Congress debates what, if any, actions are needed around artificial intelligence (AI), many states have passed or considered their own legislation. This did not start in 2024, but it certainly accelerated, with at least 40 states considering AI legislation. Such a trend is not unique to AI, but certain actions at a state level could be particularly disruptive to the development of this technology. In some cases, states could also show the many beneficial applications of the technology, well beyond popular services such as ChatGPT.

An Overview of AI Legislation at a State Level in 2024

As of August 2024, 31 states have passed some form of AI legislation. However, what AI legislation seeks to regulate varies widely among the states. For example, at least 22 have passed laws regulating the use of deepfake images, usually in the scope of sexual or election-related deepfakes, while 11 states have passed laws requiring that corporations disclose the use of AI or collection of data for AI model training in some contexts. States are also exploring how the government can use AI. Concerningly, Colorado has passed a significant regulatory regime for many aspects of AI, while California continues to consider such a regime.
https://policycommons.net/artifacts/15470461/is-a-state-ai-patchwork-next-ai-legislation-at-a-state-level-in-2024/16363852/




aug24 (FEDERAL)

Legislation is finally starting to catch up to AI, with a new bill allowing victims of non-consensual deepfake pornography to sue those responsible passing the US Senate unanimously.

Deepfake technology has gotten a lot better since the boom in AI over the last few years. While some instances are fun and harmless, others have proven to be quite a problem, imitating celebrities to scam users or putting them in problematic situations.

However, this new law could be a stepping stone to more AI and deepfake regulation, and all we can say is: it’s about time.
The Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act is a piece of legislation in the US currently on the way to becoming law. It states that, in the event of non-consensual deepfake pornography, the victim is able to sue the party responsible.
https://tech.co/news/anti-deepfake-law-passes-us-senate

aug24
European Union Law Working Papers No. 95 The EU Regulatory Framework for Artificial Intelligence Karina Issina

This thesis presents an analysis of the European Union's regulatory framework for artificial intelligence. As AI technologies become increasingly integral to various sectors, the necessity for a clear regulatory structure to address the associated ethical, legal, and societal challenges is becoming essential. The European Union has positioned itself as a global leader in AI governance, aiming to create a balanced environment that fosters innovation while safeguarding fundamental rights and ethical standards. The thesis examines the evolution of AI regulation within the EU, highlighting legislative instruments such as the General Data Protection Regulation and the recently adopted Artificial Intelligence Act. It explores the core principles underpinning the EU's regulatory approach, including transparency, accountability, human-centricity, and risk-based regulation, and how these principles are integrated in legislative measures. A thorough analysis of the regulatory texts and related policy documents is conducted to explore the EU's strategic objectives and regulatory mechanisms. The research identifies and discusses key regulatory themes, such as data protection, algorithmic transparency, bias mitigation, and the delineation of high-risk AI applications. Additionally, it investigates the implications of these regulations for AI development and deployment within the EU. Comparative analysis with non-EU regulatory frameworks is also incorporated to contextualize the EU's approach within the global AI governance ecosystem. Findings suggest that the EU's regulatory framework for AI is both comprehensive and pioneering, setting a high standard for ethical AI governance. However, the dynamic nature of AI technology necessitates ongoing regulatory adaptation and refinement.
The study concludes with recommendations for enhancing the regulatory framework to ensure it remains responsive to technological advancements and continues to uphold the EU's commitment to ethical and responsible AI.
https://law.stanford.edu/wp-content/uploads/2024/07/EU-Law-WP-95-Issina.pdf

aug24

Senate Majority Leader Chuck Schumer (D-N.Y.) said in a recent interview he will continue to push for the regulation of artificial intelligence (AI) in elections.

“Look, deepfakes are a serious, serious threat to this democracy. If people can no longer believe that the person they’re hearing speak is actually the person, this democracy has suffered — it will suffer — in ways that we have never seen before,” Schumer said last week in an interview with NBC News. “And if people just get turned off to democracy, Lord knows what will happen.”

With fewer than 100 days left until the November election, Schumer spoke to the outlet about the impact AI could have on the election process. He said he hopes to bring more legislation about AI to the Senate floor in the coming months.

His recent push for regulation of the emerging technology in elections comes after billionaire Elon Musk shared a fake video of Vice President Harris using AI to mimic her voice and spew insults about her campaign and President Biden — who dropped out of the 2024 race last month and subsequently endorsed the vice president.

https://thehill.com/policy/technology/4813186-chuck-schumer-artificial-intelligence-regulation-congress-2024-election/

aug24

States are rapidly adopting laws to grapple with political deepfakes in lieu of comprehensive federal regulation of manipulated media related to elections, according to a new report from the Brennan Center for Justice.

Nineteen states passed laws regulating deepfakes in elections, and 26 others considered related bills. But an NBC News review of the laws and a new analysis from the Brennan Center, a nonpartisan law and policy institute affiliated with New York University School of Law, finds that most states’ deepfake laws are so broad that they would face tough court challenges, while a few are so narrow that they leave plenty of options for bad actors to use the technology to deceive voters.

“It’s actually quite incredible how many of these laws have passed,” said Larry Norden, vice president of the Brennan Center’s Elections and Government Program and the author of the analysis released Tuesday.

The study found that states introduced 151 different bills this year that addressed deepfakes and other deceptive media meant to fool voters, about a quarter of all state AI laws introduced.

“That’s not something you generally see, and I think it is a reflection of how quickly this technology has evolved and how concerned legislators are that it could impact political campaigns,” he said.


https://www.nbcnews.com/tech/tech-news/states-are-rapidly-adopting-laws-regulating-political-deepfakes-rcna164578

jul24

A bipartisan group of U.S. senators has introduced legislation intended to counter the rise of deepfakes and protect creators from theft through generative artificial intelligence.

"Artificial intelligence has given bad actors the ability to create deepfakes of every individual, including those…"

The bill, called the Content Origin Protection and Integrity from Edited and Deepfaked Media Act, or COPIED Act, is co-sponsored by Sens. Marsha Blackburn (R-Tenn.), Maria Cantwell (D-Wash.), and Martin Heinrich (D-N.M.), who is also a member of the Senate AI Working Group.
https://seekingalpha.com/news/4124026-us-senators-introduce-bipartisan-bill-to-counter-gen-ai-deepfakes

jun24

Federal legislation to combat deepfakes

Currently, there is no comprehensive enacted federal legislation in the United States that bans or even regulates deepfakes. However, the Identifying Outputs of Generative Adversarial Networks Act requires the director of the National Science Foundation to support research for the development of measurements and standards needed to identify outputs generated by GANs and any other comparable techniques developed in the future.

Congress is considering additional legislation that, if passed, would regulate the creation, disclosure, and dissemination of deepfakes. Some of this legislation includes the Deepfake Report Act of 2019, which requires the Science and Technology directorate in the U.S. Department of Homeland Security to report at specified intervals on the state of digital content forgery technology; the DEEPFAKES Accountability Act, which aims to protect national security against the threats posed by deepfake technology and to provide legal recourse to victims of harmful deepfakes; the DEFIANCE Act of 2024, which would improve rights to relief for individuals affected by non-consensual activities involving intimate digital forgeries and for other purposes; and the Protecting Consumers from Deceptive AI Act, which requires the National Institute of Standards and Technology to establish task forces to facilitate and inform the development of technical standards and guidelines relating to the identification of content created by GenAI, to ensure that audio or visual content created or substantially modified by GenAI includes a disclosure acknowledging the GenAI origin of such content, and for other purposes.

States pursue deepfake legislation

In addition, several states have enacted legislation to regulate deepfakes, including:

https://www.thomsonreuters.com/en-us/posts/government/deepfakes-federal-state-regulation/


jun24

MIDDLETON, Wis., June 27, 2024 /PRNewswire/ -- Deepfakes, an offshoot of Artificial Intelligence (AI), have become a pressing social and political issue that an increasing number of state lawmakers are trying to address through legislation. The number of bills in this space has grown from an average of 28 per year from 2019-2023, to 294 bills introduced to date in 2024.

That's why Ballotpedia, the nation's premier source for unbiased information on elections, politics, and policy, has created and launched a comprehensive AI Deepfake Legislation Tracker and Ballotpedia's State of Deepfake Legislation 2024 Annual Report, available here.

The goal of Ballotpedia's newest tracker is simple: to let people know what's happening—in real time—with deepfake legislation in all 50 states. The tracker provides historical context on deepfake legislation going back to 2019 and covers these topics:

https://www.prnewswire.com/news-releases/deepfake-related-bills-have-increased-950-over-the-previous-five-year-average-302183982.html
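The headline's "950%" figure follows directly from the two numbers quoted in the release above; a minimal sanity check, assuming the 28-bill yearly average (2019-2023) and the 294 bills introduced in 2024:

```python
# Figures quoted in the Ballotpedia release above.
prior_avg = 28      # average deepfake bills per year, 2019-2023
bills_2024 = 294    # deepfake bills introduced to date in 2024

# Percentage increase over the prior five-year average.
increase_pct = (bills_2024 - prior_avg) / prior_avg * 100
print(f"{increase_pct:.0f}% increase")  # prints "950% increase"
```

So the release's "increased 950%" claim is consistent with its own counts.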


jun24

New legislation in Michigan would penalize people for using technology to create and distribute deepfake pornography. 

"It happens a lot to younger women, girls, students as a kind of bullying technique," said state Rep. Penelope Tsernoglou. "It causes a lot of mental distress; in some cases, financial issues, reputational harm, and even some really severe cases could lead to self-harm and suicide."

In an instance of bipartisanship, the package of bills passed the House earlier this week with a wide margin of 108-2. The bills would make it illegal to create and distribute digitally altered pictures or videos that falsely show sexual activity. 
https://www.cbsnews.com/detroit/news/deepfake-legislation-passes-michigan-house-by-wide-margin/

apr24
On the way to deep fake democracy? Deep fakes in election campaigns in 2023
Open access | Published: 26 April 2024
https://link.springer.com/article/10.1057/s41304-024-00482-9

apr24

Two of the biggest deepfake pornography websites have now started blocking people trying to access them from the United Kingdom. The move comes days after the UK government announced plans for a new law that will make creating nonconsensual deepfakes a criminal offense.

Nonconsensual deepfake pornography websites and apps that “strip” clothes off of photos have been growing at an alarming rate—causing untold harm to the thousands of women they are used to target.

Clare McGlynn, a professor of law at Durham University, says the move is a “hugely significant moment” in the fight against deepfake abuse. “This ends the easy access and the normalization of deepfake sexual abuse material,” McGlynn tells WIRED.

Since deepfake technology first emerged in December 2017, it has consistently been used to create nonconsensual sexual images of women—swapping their faces into pornographic videos or allowing new “nude” images to be generated. As the technology has improved and become easier to access, hundreds of websites and apps have been created. Most recently, schoolchildren have been caught creating nudes of classmates.

https://www.wired.com/story/the-biggest-deepfake-porn-website-is-now-blocked-in-the-uk/


apr24

Indiana, Texas and Virginia in the past few years have enacted broad laws with penalties of up to a year in jail plus fines for anyone found guilty of sharing deepfake pornography. In Hawaii, the punishment is up to five years in prison.

Many states are combatting deepfake porn by adding to existing laws. Several, including Indiana, New York and Virginia, have enacted laws that add deepfakes to existing prohibitions on so-called revenge porn, or the posting of sexual images of a former partner without their consent. Georgia and Hawaii have targeted deepfake porn by updating their privacy laws.

Other states, such as Florida, South Dakota and Washington, have enacted laws that update the definition of child pornography to include deepfakes. Washington’s law, which was signed by Democratic Gov. Jay Inslee in March, makes it illegal to be in possession of a “fabricated depiction of an identifiable minor” engaging in a sexually explicit act — a crime punishable by up to a year in jail.

https://missouriindependent.com/2024/04/16/states-race-to-restrict-deepfake-porn-as-it-becomes-easier-to-create/



apr24 UNITED KINGDOM

The creation of sexually explicit "deepfake" images is to be made a criminal offence in England and Wales under a new law, the government says.

Under the legislation, anyone making explicit images of an adult without their consent will face a criminal record and unlimited fine.

It will apply regardless of whether the creator of an image intended to share it, the Ministry of Justice (MoJ) said.

And if the image is then shared more widely, they could face jail.

A deepfake is an image or video that has been digitally altered with the help of Artificial Intelligence (AI) to replace the face of one person with the face of another.

https://www.bbc.com/news/uk-68823042



apr24 MISSOURI

It seems like it took a while, but one of the Missouri bills criminalizing artificial intelligence deepfakes made it through the House and it’s on to the Senate. About time.

I wrote about H.B. 2628 weeks ago, and questioned whether its financial and criminal penalties — its teeth — were strong enough.

This bill focuses on those who create false political communication, like when a robocall on Election Day featured a fake President Joe Biden telling people to stay away from the New Hampshire polls. A second Missouri House bill, H.B. 2573, also tackles deepfakes, but focuses on creating intimate digital depictions of people — porn — without their consent.

It is still winding its way through the House. That bill, called the Taylor Swift Act, likely will get even more attention because of its celebrity name and lurid photo and video clones.

But H.B. 2628 may be even more important because digital fakers have the potential to change the course of an election and, dare I say, democracy itself. It passed, overwhelmingly but not unanimously — 133 to 5, with 24 absent or voting present. It was sent on to the Senate floor where it had its first reading.

https://ca.finance.yahoo.com/news/missouri-anti-deepfake-legislation-good-100700062.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAHFVDjMw8auK-p7Cz-o9pfgphkACHGyxpSL2oH3LZFHknvARRstLWDkJ2cSZ4rXvelzrYQuSakE-3z7iQQjzkMmienY7LfR8AvCQkt5p_XMUd1MayU1yDy-1ukpkNuZZaeoKyMvoJkjhtG2AckFPousHDLjvLcBYdBoFvtmd82wZ

mar24 CANADA

Proposed amendments to the Canada Elections Act

Backgrounder

In Canada, the strength and resilience of our democracy is enhanced by a long tradition of regular evaluation and improvements to the Canada Elections Act (CEA). The CEA is the fundamental legislative framework that regulates Canada’s electoral process. It is independently administered by the Chief Electoral Officer and Elections Canada, with compliance and enforcement by the Commissioner of Canada Elections. The CEA is renowned for trailblazing political financing rules, strict spending limits, and robust reporting requirements intended to further transparency, fairness, and participation in Canada’s federal elections.

Recent experiences and lessons from the 2019 and 2021 general elections highlighted opportunities to further remove barriers to voting and encourage voter participation, protect personal information, and strengthen electoral safeguards. The amendments to the CEA would advance these key priorities, reinforcing trust in federal elections, its participants, and its results.

https://www.canada.ca/en/democratic-institutions/news/2024/03/proposed-amendments-to-the-canada-elections-act.html


mar24
New Hampshire

The New Hampshire state House advanced a bill Thursday that would require political ads that use deceptive artificial intelligence (AI) to disclose use of the technology, adding to growing momentum in states to add AI regulations for election protection.

The bill passed without debate in the state House and will advance to the state Senate.

The bill advanced after New Hampshire voters received robocalls in January, ahead of the state’s primary elections, that included an AI-generated voice depicting President Biden. Steve Kramer, a veteran Democratic operative, admitted to being behind the fake robocalls and said he did so to draw attention to the dangers of AI in politics, NBC News reported in February.

https://thehill.com/policy/technology/4563917-new-hampshire-house-passes-ai-election-rules-after-biden-deepfake/


mar24

A new Washington state law will make it illegal to share fake pornography that appears to depict real people having sex.

Why it matters: Advancements in artificial intelligence have made it easy to use a single photograph to impose someone's features on realistic-looking "deepfake" porn.

  • Before now, however, state law hasn't explicitly banned these kinds of digitally manipulated images.

Zoom in: The new Washington law, which Gov. Jay Inslee signed last week, will make it a gross misdemeanor to knowingly share fabricated intimate images of people without their consent.

  • People who create and share deepfake pornographic images of minors can be charged with felonies. So can those who share deepfake porn of adults more than once.
  • Victims will also be able to file civil lawsuits seeking damages.
What they're saying: "With this law, survivors of intimate and fabricated image-based violence have a path to justice," said state Rep. Tina Orwall (D-Des Moines), who sponsored the legislation, in a news release.

https://www.axios.com/local/seattle/2024/03/19/new-washington-law-criminalizes-deepfake-porn

mar24
Washington, Indiana ban AI porn images of real people
Indiana, Utah, and New Mexico targeting AI in elections


Half of the US population is now covered under state bans on nonconsensual explicit images made with artificial intelligence as part of a broader effort against AI-enabled abuses amid congressional inaction.

Washington Gov. Jay Inslee (D) on March 14 signed legislation (HB 1999) that allows adult victims to sue the creators of such content used with the emerging technology.

That followed Indiana Gov. Eric Holcomb (R) signing into law a similar bill (HB 1047) on March 12 that includes penalties such as misdemeanor charges for a first offense. Adult victims do not have a private right to action under the measure.

The laws join an emerging patchwork of state-level restrictions on the use of artificial intelligence as federal lawmakers continue mulling their own approach to potential abuses by the technology. Ten states had such laws in place at the beginning of 2024: California, Hawaii, Illinois, Minnesota, New York, South Dakota, Texas, Virginia, Florida, and Georgia.

https://news.bgov.com/states-of-play/more-states-ban-ai-deepfakes-months-after-taylor-swift-uproar?source=newsletter&item=body-link&region=text-section

mar24

The European Union is enacting the most comprehensive guardrails on the fast-developing world of artificial intelligence after the bloc’s parliament passed the AI Act on Wednesday.

The landmark set of rules, in the absence of any legislation from the US, could set the tone for how AI is governed in the Western world. But the legislation’s passage comes as companies worry the law goes too far and digital watchdogs say it doesn’t go far enough.

“Europe is now a global standard-setter in trustworthy AI,” Internal Market Commissioner Thierry Breton said in a statement.


The AI Act becomes law after member states sign off, which is usually a formality, and once it’s published in the EU’s Official Journal.

The new law is intended to address worries about bias, privacy and other risks from the rapidly evolving technology. The legislation would ban the use of AI for detecting emotions in workplaces and schools, as well as limit how it can be used in high-stakes situations like sorting job applications. It would also place the first restrictions on generative AI tools, which captured the world’s attention last year with the popularity of ChatGPT.

However, the bill has sparked concerns in the three months since officials reached a breakthrough provisional agreement after a marathon negotiation session that lasted more than 35 hours.

As talks reached the final stretch last year, the French and German governments pushed back against some of the strictest ideas for regulating generative AI, arguing that the rules will hurt European startups like France’s Mistral AI and Germany’s Aleph Alpha GmbH. Civil society groups like Corporate Europe Observatory (CEO) raised concerns about the influence that Big Tech and European companies had in shaping the final text.

“This one-sided influence meant that ‘general purpose AI’ was largely exempted from the rules and only required to comply with a few transparency obligations,” watchdogs including CEO and LobbyControl wrote in a statement, referring to AI systems capable of performing a wider range of tasks.

A recent announcement that Mistral had partnered with Microsoft Corp. raised concerns from some lawmakers. Kai Zenner, a parliamentary assistant key in the writing of the act and now an adviser to the United Nations on AI policy, wrote that the move was strategically smart and “maybe even necessary” for the French startup, but said “the EU legislator got played again.”

Brando Benifei, a lawmaker and leading author of the act, said the results speak for themselves. “The legislation is clearly defining the needs for safety of most powerful models with clear criteria, and so it’s clear that we stood on our feet,” he said Wednesday in a news conference.

US and European companies have also raised concerns that the law will limit the bloc’s competitiveness.

“With a limited digital tech industry and relatively low investment compared with industry giants like the United States and China, the EU’s ambitions of technological sovereignty and AI leadership face considerable hurdles,” wrote Raluca Csernatoni, a research fellow at the Carnegie Europe think tank.

Lawmakers during Tuesday’s debate acknowledged that there is still significant work ahead. The EU is in the process of setting up its AI Office, an independent body within the European Commission. In practice, the office will be the key enforcer, with the ability to request information from companies developing generative AI and possibly ban a system from operating in the bloc.

“The rules we have passed in this mandate to govern the digital domain — not just the AI Act — are truly historical, pioneering,” said Dragos Tudorache, a European Parliament member who was also one of the leading authors. “But making them all work in harmony with the desired eff

https://news.bloomberglaw.com/artificial-intelligence/eu-embraces-new-ai-rules-despite-doubts-it-got-the-right-balance?source=breaking-news&item=headline&region=featured-story&login=blaw


feb24
GEORGIA

The Georgia state House voted Thursday to crack down on deepfake artificial intelligence (AI) videos ahead of this year’s elections.

The House voted 148-22 to approve the legislation, which attempts to stop the spread of misinformation from deceptive videos impersonating candidates.

The legislation, H.B. 986, would make it a felony to publish a deepfake within 90 days of an election with the intention of misleading or confusing voters about a candidate or their chance of being elected.

The bill would allow the attorney general to have jurisdiction over the crimes and allow the state election board to publish the findings of investigations.

One of the sponsors of the bill, state Rep. Brad Thomas (R), celebrated the vote on social media.

“I am thrilled to inform you that House Bill 986 has successfully passed the House! This is a significant step towards upholding the integrity and impartiality of our electoral process by making the use of AI to interfere with elections a criminal offense,” Thomas posted on X, the platform formerly known as Twitter.
https://thehill.com/homenews/state-watch/4485098-georgia-house-approves-crackdown-on-deepfake-ai-videos-before-elections/

feb24
Washington
In Washington, a new bill, HB 1999, is making waves in the fight against deepfakes. The legislation, introduced to address the alarming issue of sexually explicit content involving minors, aims to close legal loopholes and provide legal recourse for victims of deepfake abuse.
https://bnnbreaking.com/tech/new-bill-hb-1999-takes-a-stand-against-deepfakes-in-washington

feb24
NEW MEXICO

A proposal to require public disclosure whenever a political campaign in the state uses false information generated by artificial intelligence in a campaign advertisement gained approval from the New Mexico House of Representatives on Monday night.

After about an hour of debate, the House voted 38-28 to pass House Bill 182, which would amend the state’s Campaign Reporting Act to require political campaigns to disclose whenever they use artificial intelligence in their ads, and would make it a crime to use artificially-generated ads to intentionally deceive voters.
https://sourcenm.com/2024/02/13/deepfake-disclosure-bill-passes-nm-house/


feb24

Large tech platforms including TikTok, X and Facebook will soon have to identify AI-generated content in order to protect the upcoming European election from disinformation.

"We know that this electoral period that's opening up in the European Union is going to be targeted either via hybrid attacks or foreign interference of all kinds," Internal Market Commissioner Thierry Breton told European lawmakers in Strasbourg on Wednesday. "We can't have half-baked measures."

Breton didn't say when exactly companies will be compelled to label manipulated content under the EU's content moderation law, the Digital Services Act (DSA). Breton oversees the Commission branch enforcing the DSA on the largest European social media and video platforms, including Facebook, Instagram and YouTube.
https://www.politico.eu/article/eu-big-tech-help-deepfake-proof-election-2024/


jan24

A bipartisan group of three senators is looking to give victims of sexually explicit deepfake images a way to hold their creators and distributors responsible.

Sens. Dick Durbin, D-Ill.; Lindsey Graham, R-S.C.; and Josh Hawley, R-Mo., plan to introduce the Disrupt Explicit Forged Images and Non-Consensual Edits Act on Tuesday, a day ahead of a Senate Judiciary Committee hearing on internet safety with CEOs from Meta, X, Snap and other companies. Durbin chairs the panel, while Graham is the committee’s top Republican.

Victims would be able to sue people involved in the creation and distribution of such images if the person knew or recklessly disregarded that the victim did not consent to the material. The bill would classify such material as a “digital forgery” and create a 10-year statute of limitations. 

https://www.nbcnews.com/tech/tech-news/deepfake-bill-open-door-victims-sue-creators-rcna136434


jan24

South Korea

South Korea's special parliamentary committee on Tuesday (Jan 30) passed a revision to the Public Official Election Act which called for a ban on political campaign videos that use AI-generated deepfakes in the election season.

https://www.wionews.com/world/south-korea-imposes-90-day-ban-on-deepfake-political-campaign-videos-685152


jan24

As artificial intelligence starts to reshape society in ways predictable and not, some of Colorado’s highest-profile federal lawmakers are trying to establish guardrails without shutting down the technology altogether.

U.S. Rep. Ken Buck, a Windsor Republican, is cosponsoring legislation with California Democrat Ted Lieu to create a national commission focused on regulating the technology and another bill to keep AI from unilaterally firing nuclear weapons.

Sen. Michael Bennet, a Democrat, has publicly urged the leader of his caucus, Majority Leader Chuck Schumer, to carefully consider the path forward on regulating AI — while warning about the lessons learned from social media’s organic development. Sen. John Hickenlooper, also a Democrat, chaired a subcommittee hearing last September on the matter, too.

https://www.denverpost.com/2024/01/28/artificial-intelligence-congress-regulation-colorado-michael-bennet-ken-buck-elections-deepfakes/

jan24

US politicians have called for new laws to criminalise the creation of deepfake images, after explicit faked photos of Taylor Swift were viewed millions of times online.

The images were posted on social media sites, including X and Telegram.

US Representative Joe Morelle called the spread of the pictures "appalling".

In a statement, X said it was "actively removing" the images and taking "appropriate actions" against the accounts involved in spreading them.

It added: "We're closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed."

While many of the images appear to have been removed at the time of publication, one photo of Swift was viewed a reported 47 million times before being taken down.

The name "Taylor Swift" is no longer searchable on X, alongside terms such as "Taylor Swift AI" and "Taylor AI".

https://www.bbc.com/news/technology-68110476



jan24 Daily Mail

The answer lies largely in the lack of laws to prosecute those who make such content.

There is currently no federal legislation against the conduct and only six states – New York, Minnesota, Texas, Hawaii, Virginia and Georgia – have passed legislation which criminalizes it.

In Texas, a bill was enacted in September 2023 which made it an offense to create or share deepfake images without permission which 'depict the person with the person's intimate parts exposed or engaged in sexual conduct'.

The offense is a Class A misdemeanor and punishments include up to a year in prison and fines up to $4,000.

In Minnesota, the crime can carry a three-year sentence and fines up to $5,000.

Several of these laws were introduced following earlier legislation which outlawed the use of deepfakes to influence an election, such as through the creation of fake images or videos which portray a politician or public official.

A handful of other states, including California and Illinois, don't have laws against the act but instead allow deepfake victims to sue perpetrators. Critics have said this doesn't go far enough and that, in many cases, the creator is unknown.

 

At the federal level, Joe Biden signed an executive order in October which called for a ban on the use of generative AI to make child abuse images or nonconsensual 'intimate images' of real people. But this was purely symbolic and does not create a means to punish makers.

The finding that 415,000 deepfake images were posted online last year was made by Genevieve Oh, a researcher who analyzed the top ten websites which host such content.

Oh also found 143,000 deepfake videos were uploaded in 2023 – more than during the previous six years combined. The videos, published across 40 different websites which host fake videos, were viewed more than 4.2 billion times.

Outside of states where laws which criminalize the conduct exist, victims and prosecutors must rely on existing legislation which can be used to charge offenders.

These include laws around cyberbullying, extortion and harassment. Victims who are blackmailed or subject to repeated abuse can attempt to use these laws against perpetrators who weaponize deepfake images.

But they do not prohibit the fundamental act of creating a hyper-realistic, explicit photo of a person, then sharing it with the world, without their consent.

A 14-year-old girl from New Jersey who was depicted in a pornographic deepfake image created by one of her male classmates is now leading a campaign to have a federal law passed.

Francesca Mani and her mom, Dorota Mani, recently met with members of Congress at Capitol Hill to push for laws targeting perpetrators.

https://www.dailymail.co.uk/news/article-13007753/deepfake-porn-laws-internet.html



jan24

COLUMBUS, Ohio (WCMH) – As AI becomes more popular, there is a rising concern about “deepfakes.” A deepfake is a “convincing image, video or audio hoax,” created using AI that impersonates someone or makes up an event that never happened.

At the Ohio Statehouse, Representatives Brett Hillyer (R-Uhrichsville) and Adam Mathews (R-Lebanon) just introduced House Bill 367 to address issues that may arise with the new technology.

“In my day to day I see how important people’s name image and likeness and the copyright there is within it,” Mathews said.

Mathews said the intent of the bill is to make sure everyone, not just high-profile people, is protected. Right now, Mathews said, functionally the only way one can go after someone for using their name, image or likeness (NIL) is if they’re using it to claim an endorsement of a product or to defraud someone.

“I wanted to put every single Ohioan at the same level as our most famous residents from Joe Burrow to Ryan Day,” Mathews said. “There are a lot of things people can do with your name, image and likeness that could be harmful to your psyche or reputation.”

The bill would allow any Ohioan to go after someone who uses their NIL in a deepfake, with fines as high as $15,000 for the creation of a malicious deepfake; the court can also order that a malicious deepfake be taken down.

https://news.yahoo.com/bill-introduced-statehouse-protect-ohioans-230000251.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLnB0Lw&guce_referrer_sig=AQAAAENj6zcIxbhxJAo2pJ_AmmsXSVymCSWCxsInYs7yVnzFtgMmgXTbh3aCW4mrWfEnG8C_JQ_juc-EH5259FdOiw8tDTkLfe1UpxXxtl93u5IpvpnNv15CircmHtj6i1Rbz1b6mkqAkaYG6pZpGEMIVKs2KtScO62yGmLZOkvbJSmb


jan24

A pair of U.S. House of Representative members have introduced a bill intended to restrict unauthorized fraudulent digital replicas of people.

The bulk of the motivation behind the legislation, based on the wording of the bill, is the protection of actors, other notable figures, and girls and women defamed through fraudulent pornography made with their likeness.

Curiously, the very real threat of using deepfakes to defraud just about everyone else in the nation is not mentioned. Those risks are growing and could result in incalculable financial damages as organizations rely on voice and face biometrics for ID verification.

While the representatives, María Elvira Salazar (R-Fla.) and Madeleine Dean (D-Pa.), do not mention the global singer-songwriter Taylor Swift in their press release, it cannot have escaped them that she has been victimized, too.

https://www.biometricupdate.com/202401/us-lawmakers-attack-categories-of-deepfake-but-miss-everyday-fraud


jan24

House lawmakers introduced legislation to try to curb the unauthorized use of deepfakes and voice clones.

The legislation, the No AI Fraud Act, is sponsored by Rep. Maria Salazar (R-FL), Rep. Madeleine Dean (D-PA), Rep. Nathaniel Moran (R-TX), Rep. Joe Morelle (D-NY) and Rep. Rob Wittman (R-VA). It would give individuals more control over the use of their identifying characteristics in digital replicas, affirming that every person has a “property right in their own likeness and voice,” rights that do not expire upon a person’s death and that can be transferred to heirs or designees for a period of 10 years after the individual’s death. It sets damages at $50,000 for each unauthorized violation by a personalized cloning service, or the actual damages suffered plus profits from the use. Damages are set at $5,000 per violation for unauthorized publication, performance, distribution or transmission of a digital voice replica or digital depiction, or the actual damages.

https://deadline.com/2024/01/ai-legislation-deepfakes-house-of-representatives-1235708983/


jan24

Illinois

Lawmakers this spring approved a new protection for victims of “deepfake porn.” Starting in 2024, people who are falsely depicted in sexually explicit images or videos will be able to sue the creator of that material.

The law is an amendment to the state’s existing protections for victims of “revenge porn,” which went into effect in 2015.

In recent years, deepfakes – images and videos that falsely depict someone – have become more sophisticated with the advent of more readily available artificial intelligence tools. Women are disproportionately the subject of deepfake porn.

Some sponsors of the legislation, notably chief sponsor Rep. Jennifer Gong-Gershowitz, D-Glenview, have indicated interest in further regulating the use of artificial intelligence.

https://chicagocrusader.com/more-than-300-statutes-became-law-in-the-new-year/

dec23

Prohibition on book bans, right to sue for ‘deepfake porn’ among new laws taking effect Jan. 1

https://www.nprillinois.org/illinois/2023-12-26/prohibition-on-book-bans-right-to-sue-for-deepfake-porn-among-new-laws-taking-effect-jan-1

oct23 NEW YORK NEW LAW

New York Bans Deepfake Revenge Porn Distribution as AI Use Grows

New York Gov. Kathy Hochul (D) on Friday signed into law legislation banning the dissemination of pornographic images made with artificial intelligence without the consent of the subject.

https://news.bloomberglaw.com/in-house-counsel/n-y-outlaws-unlawful-publication-of-deepfake-revenge-porn

https://hudsonvalleyone.com/2023/10/15/deepfake-porn-in-new-york-state-means-jail-time/



oct23 PROPOSAL

Bill would ban 'deepfake' pornography in Wisconsin


https://eu.jsonline.com/story/news/politics/2023/10/02/proposed-legislation-targets-deepfake-pornography/71033726007/

sep23 NY Bill

Assemblymember Amy Paulin’s (D-Scarsdale) legislation, which makes the nonconsensual use of “deepfake” images disseminated in online communities a criminal offense, has been signed into law by Governor Hochul.

“Deepfakes” are fake or altered images or videos created through the use of artificial intelligence. Many of these images and videos map a face onto a pornographic image or video. Some create a pornographic image or video out of a still photograph. These pornographic images and films are sometimes posted online without the consent of those in them – often with devastating consequences to those portrayed in the images.

https://talkofthesound.com/2023/09/25/amy-paulin-dissemination-of-deepfake-images-now-a-crime-in-new-york/

Clarke told ABC News that her DEEPFAKES Accountability Act would provide prosecutors, regulators and particularly victims with resources, like detection technology, that Clarke believes they need to stand up against the threat posed by nefarious deepfakes.

https://abcnews.go.com/Politics/bill-criminalize-extremely-harmful-online-deepfakes/story?id=103286802



sep23

Multimedia that have either been created (fully synthetic) or edited (partially synthetic) using some form of machine/deep learning (artificial intelligence) are referred to as deepfakes.

'Contextualizing Deepfake Threats to Organizations' PDF (archived)

https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF

sep23

Virginia revenge porn law updated to include deepfakes


https://eu.usatoday.com/videos/tech/2023/09/12/virginia-revenge-porn-law-updated-include-deepfakes/1637140001/

 

================

sep23

Artificial intelligence’s ability to generate deepfake content that easily fools humans poses a genuine threat to financial markets, the head of the Securities and Exchange Commission warned.

https://news.bloomberglaw.com/artificial-intelligence/deepfakes-pose-real-risk-to-financial-markets-secs-gensler?source=newsletter&item=body-link&region=text-section