Deepfakes Barely Impacted 2024 Elections Because They Aren’t Very Good, Research Finds
What constitutes a Deep Fake? The blurry line between legitimate processing and manipulation under the EU AI Act
A bipartisan bill to combat the spread of AI-generated, or deepfake, revenge pornography online unanimously passed the Senate Tuesday.
The TAKE IT DOWN Act, co-authored by Sens. Ted Cruz, R-Texas, and Amy Klobuchar, D-Minn., would make it unlawful to knowingly publish non-consensual intimate imagery, including deepfake imagery, that depicts nude and explicit content in interstate commerce.
Georgia lawmakers are pondering a future in which AI bots must disclose their non-human status and deepfakes that sow confusion come with severe criminal penalties.
Why it matters: The Georgia Senate Study Committee on Artificial Intelligence's report, released Tuesday, offers glimpses into lawmakers' views on the tech and how much (or little) they plan to regulate the coming wave of bots.
Context: For the past seven months, the committee has heard testimony from academics, business leaders and policy experts about AI's effect on industries such as agriculture, entertainment, government and transparency.
The big picture: The study committee stops short of proposing specific legislation, but generally recommends that future laws would "support AI regulation without stifling innovation."
Keywords
First Amendment; constitutional law; artificial intelligence; deepfakes; political campaigns; political advertisements; A.I.
Abstract
In recent years, artificial intelligence (AI) technology has developed rapidly. Accompanying this advancement in sophistication and accessibility are various societal benefits and risks. For example, political campaigns and political action committees have begun to use AI in advertisements to generate deepfakes of opposing candidates to influence voters. Deepfakes of political candidates interfere with voters’ ability to discern falsity from reality and make informed decisions at the ballot box. As a result, these deepfakes pose a threat to the integrity of elections and the existence of democracy. Despite the dangers of deepfakes, regulating false political speech raises significant First Amendment questions.
This Note considers whether the Protect Elections from Deceptive AI Act, a proposed federal ban of AI-generated deepfakes portraying federal candidates in political advertisements, is constitutional. This Note concludes that the bill is constitutional under the First Amendment and that less speech restrictive alternatives fail to address the risks of deepfakes. Finally, this Note suggests revisions to narrow the bill’s application and ensure its apolitical enforcement.
https://ir.lawnet.fordham.edu/flr/vol93/iss1/7/
Deepfakes and Artificial Intelligence: A New Legal Challenge at European and National Levels?
https://heinonline.org/HOL/LandingPage?handle=hein.journals/lgveifn2024&div=16&id=&page=
Sep 2024
Half of U.S. states seek to crack down on AI in elections
As the 2024 election cycle ramps up, at least 26 states have passed or are considering bills.
Why it matters: The review lays bare a messy patchwork of rules around the use of genAI in politics.
Governor Newsom signs bills to combat deepfake election content
SACRAMENTO, Calif. (AP) — California lawmakers approved a host of proposals this week aiming to regulate the artificial intelligence industry, combat deepfakes and protect workers from exploitation by the rapidly evolving technology.
The California Legislature, which is controlled by Democrats, is voting on hundreds of bills during its final week of the session to send to Gov. Gavin Newsom’s desk. Their deadline is Saturday.
Is a State AI Patchwork Next? AI Legislation at a State Level in 2024
19 Aug 2024
Legislation is finally starting to catch up to AI: a new bill allowing victims of non-consensual deepfake pornography to sue those responsible has passed the US Senate unanimously.
Deepfake technology has gotten a lot better since the boom in AI over the last few years. While some instances are fun and harmless, others have proven to be quite a problem, imitating celebrities to scam users or putting them in problematic situations.
Senate Majority Leader Chuck Schumer (D-N.Y.) said in a recent interview he will continue to push for the regulation of artificial intelligence (AI) in elections.
“Look, deepfakes are a serious, serious threat to this democracy. If people can no longer believe that the person they’re hearing speak is actually the person, this democracy has suffered — it will suffer — in ways that we have never seen before,” Schumer said last week in an interview with NBC News. “And if people just get turned off to democracy, Lord knows what will happen.”
With fewer than 100 days left until the November election, Schumer spoke to the outlet about the impact AI could have on the election process. He said he hopes to bring more legislation about AI to the Senate floor in the coming months.
States are rapidly adopting laws to grapple with political deepfakes in the absence of comprehensive federal regulation of manipulated media related to elections, according to a new report from the Brennan Center for Justice.
Nineteen states passed laws regulating deepfakes in elections, and 26 others considered related bills. But an NBC News review of the laws and a new analysis from the Brennan Center, a nonpartisan law and policy institute affiliated with New York University School of Law, finds that most states’ deepfake laws are so broad that they would face tough court challenges, while a few are so narrow that they leave plenty of options for bad actors to use the technology to deceive voters.
“It’s actually quite incredible how many of these laws have passed,” said Larry Norden, vice president of the Brennan Center’s Elections and Government Program and the author of the analysis released Tuesday.
The study found that states introduced 151 different bills this year that addressed deepfakes and other deceptive media meant to fool voters, about a quarter of all state AI laws introduced.
“That’s not something you generally see, and I think it is a reflection of how quickly this technology has evolved and how concerned legislators are that it could impact political campaigns,” he said.
A bipartisan group of U.S. senators has introduced legislation intended to counter the rise of deepfakes and protect creators from theft through generative artificial intelligence.
"Artificial intelligence has given bad actors the ability to create deepfakes of every individual, including those in the creative community, to imitate their likeness without their consent and profit off of counterfeit content," said U.S. Senator Marsha Blackburn (R.-Tenn.).
Federal legislation to combat deepfakes
Currently, there is no comprehensive enacted federal legislation in the United States that bans or even regulates deepfakes. However, the Identifying Outputs of Generative Adversarial Networks Act requires the director of the National Science Foundation to support research for the development and measurement of standards needed to generate GAN outputs and any other comparable techniques developed in the future.
Congress is considering additional legislation that, if passed, would regulate the creation, disclosure, and dissemination of deepfakes. Some of this legislation includes:
- the Deepfake Report Act of 2019, which requires the Science and Technology Directorate in the U.S. Department of Homeland Security to report at specified intervals on the state of digital content forgery technology;
- the DEEPFAKES Accountability Act, which aims to protect national security against the threats posed by deepfake technology and to provide legal recourse to victims of harmful deepfakes;
- the DEFIANCE Act of 2024, which would improve rights to relief for individuals affected by non-consensual activities involving intimate digital forgeries and for other purposes; and
- the Protecting Consumers from Deceptive AI Act, which requires the National Institute of Standards and Technology to establish task forces to facilitate and inform the development of technical standards and guidelines relating to the identification of content created by GenAI, and to ensure that audio or visual content created or substantially modified by GenAI includes a disclosure acknowledging its GenAI origin.
States pursue deepfake legislation
MIDDLETON, Wis., June 27, 2024 /PRNewswire/ -- Deepfakes, an offshoot of Artificial Intelligence (AI), have become a pressing social and political issue that an increasing number of state lawmakers are trying to address through legislation. The number of bills in this space has grown from an average of 28 per year from 2019-2023, to 294 bills introduced to date in 2024.
That's why Ballotpedia, the nation's premier source for unbiased information on elections, politics, and policy, has created and launched a comprehensive AI Deepfake Legislation Tracker and Ballotpedia's State of Deepfake Legislation 2024 Annual Report, available here.
New legislation in Michigan would penalize people for using technology to create and distribute deepfake pornography.
"It happens a lot to younger women, girls, students as a kind of bullying technique," said state Rep. Penelope Tsernoglou. "It causes a lot of mental distress; in some cases, financial issues, reputational harm, and even some really severe cases could lead to self-harm and suicide."
On the way to deep fake democracy? Deep fakes in election campaigns in 2023
Published: 26 April 2024
Two of the biggest deepfake pornography websites have now started blocking people trying to access them from the United Kingdom. The move comes days after the UK government announced plans for a new law that will make creating nonconsensual deepfakes a criminal offense.
Nonconsensual deepfake pornography websites and apps that “strip” clothes off of photos have been growing at an alarming rate—causing untold harm to the thousands of women they are used to target.
Clare McGlynn, a professor of law at Durham University, says the move is a “hugely significant moment” in the fight against deepfake abuse. “This ends the easy access and the normalization of deepfake sexual abuse material,” McGlynn tells WIRED.
Since deepfake technology first emerged in December 2017, it has consistently been used to create nonconsensual sexual images of women—swapping their faces into pornographic videos or allowing new “nude” images to be generated. As the technology has improved and become easier to access, hundreds of websites and apps have been created. Most recently, schoolchildren have been caught creating nudes of classmates.
https://www.wired.com/story/the-biggest-deepfake-porn-website-is-now-blocked-in-the-uk/
Indiana, Texas and Virginia in the past few years have enacted broad laws with penalties of up to a year in jail plus fines for anyone found guilty of sharing deepfake pornography. In Hawaii, the punishment is up to five years in prison.
Many states are combatting deepfake porn by adding to existing laws. Several, including Indiana, New York and Virginia, have enacted laws that add deepfakes to existing prohibitions on so-called revenge porn, or the posting of sexual images of a former partner without their consent. Georgia and Hawaii have targeted deepfake porn by updating their privacy laws.
Other states, such as Florida, South Dakota and Washington, have enacted laws that update the definition of child pornography to include deepfakes. Washington’s law, which was signed by Democratic Gov. Jay Inslee in March, makes it illegal to be in possession of a “fabricated depiction of an identifiable minor” engaging in a sexually explicit act — a crime punishable by up to a year in jail.
https://missouriindependent.com/2024/04/16/states-race-to-restrict-deepfake-porn-as-it-becomes-easier-to-create/
The creation of sexually explicit "deepfake" images is to be made a criminal offence in England and Wales under a new law, the government says.
Under the legislation, anyone making explicit images of an adult without their consent will face a criminal record and unlimited fine.
It will apply regardless of whether the creator of an image intended to share it, the Ministry of Justice (MoJ) said.
And if the image is then shared more widely, they could face jail.
A deepfake is an image or video that has been digitally altered with the help of Artificial Intelligence (AI) to replace the face of one person with the face of another.
https://www.bbc.com/news/uk-68823042
It seems like it took a while, but one of the Missouri bills criminalizing artificial intelligence deepfakes made it through the House and it’s on to the Senate. About time.
I wrote about H.B. 2628 weeks ago, and questioned whether its financial and criminal penalties — its teeth — were strong enough.
This bill focuses on those who create false political communication, like when a robocall on Election Day featured a fake President Joe Biden telling people to stay away from the New Hampshire polls. A second Missouri House bill, H.B. 2573, also tackles deepfakes, but focuses on creating intimate digital depictions of people — porn — without their consent.
It is still winding its way through the House. That bill, called the Taylor Swift Act, likely will get even more attention because of its celebrity name and lurid photo and video clones.
But H.B. 2628 may be even more important because digital fakers have the potential to change the course of an election and, dare I say, democracy itself. It passed, overwhelmingly but not unanimously — 133 to 5, with 24 absent or voting present. It was sent on to the Senate floor where it had its first reading.
https://ca.finance.yahoo.com/news/missouri-anti-deepfake-legislation-good-100700062.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAHFVDjMw8auK-p7Cz-o9pfgphkACHGyxpSL2oH3LZFHknvARRstLWDkJ2cSZ4rXvelzrYQuSakE-3z7iQQjzkMmienY7LfR8AvCQkt5p_XMUd1MayU1yDy-1ukpkNuZZaeoKyMvoJkjhtG2AckFPousHDLjvLcBYdBoFvtmd82wZ
Mar 2024 (Canada)
Proposed amendments to the Canada Elections Act
Backgrounder
In Canada, the strength and resilience of our democracy is enhanced by a long tradition of regular evaluation and improvements to the Canada Elections Act (CEA). The CEA is the fundamental legislative framework that regulates Canada’s electoral process. It is independently administered by the Chief Electoral Officer and Elections Canada, with compliance and enforcement by the Commissioner of Canada Elections. The CEA is renowned for trailblazing political financing rules, strict spending limits, and robust reporting requirements intended to further transparency, fairness, and participation in Canada’s federal elections.
Recent experiences and lessons from the 2019 and 2021 general elections highlighted opportunities to further remove barriers to voting and encourage voter participation, protect personal information, and strengthen electoral safeguards. The amendments to the CEA would advance these key priorities, reinforcing trust in federal elections, their participants, and their results.
https://www.canada.ca/en/democratic-institutions/news/2024/03/proposed-amendments-to-the-canada-elections-act.html
The New Hampshire state House advanced a bill Thursday that would require political ads that use deceptive artificial intelligence (AI) to disclose use of the technology, adding to growing momentum in states to add AI regulations for election protection.
The bill passed without debate in the state House and will advance to the state Senate.
The bill advanced after New Hampshire voters received robocalls in January, ahead of the state’s primary elections, that included an AI-generated voice depicting President Biden. Steve Kramer, a veteran Democratic operative, admitted to being behind the fake robocalls and said he did so to draw attention to the dangers of AI in politics, NBC News reported in February.
https://thehill.com/policy/technology/4563917-new-hampshire-house-passes-ai-election-rules-after-biden-deepfake/
A new Washington state law will make it illegal to share fake pornography that appears to depict real people having sex.
Why it matters: Advancements in artificial intelligence have made it easy to use a single photograph to impose someone's features on realistic-looking "deepfake" porn.
- Before now, however, state law hasn't explicitly banned these kinds of digitally manipulated images.
Zoom in: The new Washington law, which Gov. Jay Inslee signed last week, will make it a gross misdemeanor to knowingly share fabricated intimate images of people without their consent.
- People who create and share deepfake pornographic images of minors can be charged with felonies. So can those who share deepfake porn of adults more than once.
- Victims will also be able to file civil lawsuits seeking damages.
Indiana, Utah, and New Mexico targeting AI in elections
Half of the US population is now covered under state bans on nonconsensual explicit images made with artificial intelligence as part of a broader effort against AI-enabled abuses amid congressional inaction.
Washington Gov. Jay Inslee (D) on March 14 signed legislation (HB 1999) that allows adult victims to sue the creators of such content used with the emerging technology.
That followed Indiana Gov. Eric Holcomb (R) signing into law a similar bill (HB 1047) on March 12 that includes penalties such as misdemeanor charges for a first offense. Adult victims do not have a private right of action under the measure.
The laws join an emerging patchwork of state-level restrictions on the use of artificial intelligence as federal lawmakers continue mulling their own approach to potential abuses by the technology. Ten states had such laws in place at the beginning of 2024: California, Hawaii, Illinois, Minnesota, New York, South Dakota, Texas, Virginia, Florida, and Georgia.
The Georgia state House voted Thursday to crack down on deepfake artificial intelligence (AI) videos ahead of this year’s elections.
The House voted 148-22 to approve the legislation, which attempts to stop the spread of misinformation from deceptive video impersonating candidates.
The legislation, H.B. 986, would make it a felony to publish a deepfake within 90 days of an election with the intention of misleading or confusing voters about a candidate or their chance of being elected.
The bill would allow the attorney general to have jurisdiction over the crimes and allow the state election board to publish the findings of investigations.
One of the sponsors of the bill, state Rep. Brad Thomas (R), celebrated the vote on social media.
“I am thrilled to inform you that House Bill 986 has successfully passed the House! This is a significant step towards upholding the integrity and impartiality of our electoral process by making the use of AI to interfere with elections a criminal offense,” Thomas posted on X, the platform formerly known as Twitter.

A proposal to require public disclosure whenever a political campaign in the state uses false information generated by artificial intelligence in a campaign advertisement gained approval from the New Mexico House of Representatives on Monday night.
Large tech platforms including TikTok, X and Facebook will soon have to identify AI-generated content in order to protect the upcoming European election from disinformation.
"We know that this electoral period that's opening up in the European Union is going to be targeted either via hybrid attacks or foreign interference of all kinds," Internal Market Commissioner Thierry Breton told European lawmakers in Strasbourg on Wednesday. "We can't have half-baked measures."
Jan 2024
A bipartisan group of three senators is looking to give victims of sexually explicit deepfake images a way to hold their creators and distributors responsible.
Sens. Dick Durbin, D-Ill.; Lindsey Graham, R-S.C.; and Josh Hawley, R-Mo., plan to introduce the Disrupt Explicit Forged Images and Non-Consensual Edits Act on Tuesday, a day ahead of a Senate Judiciary Committee hearing on internet safety with CEOs from Meta, X, Snap and other companies. Durbin chairs the panel, while Graham is the committee’s top Republican.
Victims would be able to sue people involved in the creation and distribution of such images if the person knew or recklessly disregarded that the victim did not consent to the material. The bill would classify such material as a “digital forgery” and create a 10-year statute of limitations.
https://www.nbcnews.com/tech/tech-news/deepfake-bill-open-door-victims-sue-creators-rcna136434
Jan 2024
South Korea
South Korea's special parliamentary committee on Tuesday (Jan 30) passed a revision to the Public Official Election Act which called for a ban on political campaign videos that use AI-generated deepfakes in the election season.
https://www.wionews.com/world/south-korea-imposes-90-day-ban-on-deepfake-political-campaign-videos-685152
Jan 2024
As artificial intelligence starts to reshape society in ways predictable and not, some of Colorado’s highest-profile federal lawmakers are trying to establish guardrails without shutting down the technology altogether.
U.S. Rep. Ken Buck, a Windsor Republican, is cosponsoring legislation with California Democrat Ted Lieu to create a national commission focused on regulating the technology and another bill to keep AI from unilaterally firing nuclear weapons.
Sen. Michael Bennet, a Democrat, has publicly urged the leader of his caucus, Majority Leader Chuck Schumer, to carefully consider the path forward on regulating AI — while warning about the lessons learned from social media’s organic development. Sen. John Hickenlooper, also a Democrat, chaired a subcommittee hearing last September on the matter, too.
https://www.denverpost.com/2024/01/28/artificial-intelligence-congress-regulation-colorado-michael-bennet-ken-buck-elections-deepfakes/
Jan 2024
US politicians have called for new laws to criminalise the creation of deepfake images, after explicit faked photos of Taylor Swift were viewed millions of times online.
The images were posted on social media sites, including X and Telegram.
US Representative Joe Morelle called the spread of the pictures "appalling".
In a statement, X said it was "actively removing" the images and taking "appropriate actions" against the accounts involved in spreading them.
It added: "We're closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed."
While many of the images appear to have been removed at the time of publication, one photo of Swift was viewed a reported 47 million times before being taken down.
The name "Taylor Swift" is no longer searchable on X, alongside terms such as "Taylor Swift AI" and "Taylor AI".
https://www.bbc.com/news/technology-68110476
Jan 2024 (Daily Mail)
The answer lies largely in the lack of laws to prosecute those who make such content. There is currently no federal legislation against the conduct, and only six states – New York, Minnesota, Texas, Hawaii, Virginia and Georgia – have passed legislation which criminalizes it.

In Texas, a bill was enacted in September 2023 which made it an offense to create or share deepfake images without permission which 'depict the person with the person's intimate parts exposed or engaged in sexual conduct'.

The offense is a Class A misdemeanor and punishments include up to a year in prison and fines up to $4,000.

In Minnesota, the crime can carry a three-year sentence and fines up to $5,000.

Several of these laws were introduced following earlier legislation which outlawed the use of deepfakes to influence an election, such as through the creation of fake images or videos which portray a politician or public official.

A handful of other states, including California and Illinois, don't have laws against the act but instead allow deepfake victims to sue perpetrators. Critics have said this doesn't go far enough and that, in many cases, the creator is unknown.

At the federal level, Joe Biden signed an executive order in October which called for a ban on the use of generative AI to make child abuse images or nonconsensual 'intimate images' of real people. But this was purely symbolic and does not create a means to punish makers.

The finding that 415,000 deepfake images were posted online last year was made by Genevieve Oh, a researcher who analyzed the top ten websites which host such content.

Oh also found 143,000 deepfake videos were uploaded in 2023 – more than during the previous six years combined. The videos, published across 40 different websites which host fake videos, were viewed more than 4.2 billion times.

Outside of states where laws which criminalize the conduct exist, victims and prosecutors must rely on existing legislation which can be used to charge offenders.

These include laws around cyberbullying, extortion and harassment. Victims who are blackmailed or subject to repeated abuse can attempt to use these laws against perpetrators who weaponize deepfake images.

But they do not prohibit the fundamental act of creating a hyper-realistic, explicit photo of a person, then sharing it with the world, without their consent.

A 14-year-old girl from New Jersey who was depicted in a pornographic deepfake image created by one of her male classmates is now leading a campaign to have a federal law passed.

Francesca Mani and her mom, Dorota Mani, recently met with members of Congress at Capitol Hill to push for laws targeting perpetrators.
https://www.dailymail.co.uk/news/article-13007753/deepfake-porn-laws-internet.html
Jan 2024
COLUMBUS, Ohio (WCMH) – As AI becomes more popular, there is a rising concern about “deepfakes.” A deepfake is a “convincing image, video or audio hoax,” created using AI that impersonates someone or makes up an event that never happened.
At the Ohio Statehouse, Representatives Brett Hillyer (R-Uhrichsville) and Adam Mathews (R-Lebanon) just introduced House Bill 367 to address issues that may arise with the new technology.
“In my day to day I see how important people’s name image and likeness and the copyright there is within it,” Mathews said.
Mathews said the intent of the bill is to make sure everyone, not just high-profile people, is protected. Right now, Mathews said, functionally the only way one can go after someone for using their name, image or likeness (NIL) is if they're using it to claim you endorse a product or to defraud someone.
“I wanted to put every single Ohioan at the same level as our most famous residents from Joe Burrow to Ryan Day,” Mathews said. “There are a lot of things people can do with your name, image and likeness that could be harmful to your psyche or reputation.”
The bill makes it so any Ohioan can go after someone who uses their NIL for a deepfake, with fines as high as $15,000 for the creation of a malicious deepfake; the court can also order that a malicious deepfake be taken down.
https://news.yahoo.com/bill-introduced-statehouse-protect-ohioans-230000251.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLnB0Lw&guce_referrer_sig=AQAAAENj6zcIxbhxJAo2pJ_AmmsXSVymCSWCxsInYs7yVnzFtgMmgXTbh3aCW4mrWfEnG8C_JQ_juc-EH5259FdOiw8tDTkLfe1UpxXxtl93u5IpvpnNv15CircmHtj6i1Rbz1b6mkqAkaYG6pZpGEMIVKs2KtScO62yGmLZOkvbJSmb
Jan 2024
A pair of U.S. House of Representatives members have introduced a bill intended to restrict unauthorized fraudulent digital replicas of people.
The bulk of the motivation behind the legislation, based on the wording of the bill, is the protection of actors, people of notoriety, and girls and women defamed through fraudulent porn made with their face template.
Curiously, the very real threat of using deepfakes to defraud just about everyone else in the nation is not mentioned. Those risks are growing and could result in uncountable financial damages as organizations rely on voice and face biometrics for ID verification.
While the representatives, María Elvira Salazar (R-Fla.) and Madeleine Dean (D-Penn.), do not mention the global singer/songwriter Taylor Swift in their press release, it cannot have escaped them that she has been victimized, too.
https://www.biometricupdate.com/202401/us-lawmakers-attack-categories-of-deepfake-but-miss-everyday-fraud
Jan 2024
House lawmakers introduced legislation to try to curb the unauthorized use of deepfakes and voice clones.
The legislation, the No AI Fraud Act, is sponsored by Rep. Maria Salazar (R-FL), Rep. Madeleine Dean (D-PA), Rep. Nathaniel Moran (R-TX), Rep. Joe Morelle (D-NY) and Rep. Rob Wittman (R-VA). The legislation would give individuals more control over the use of their identifying characteristics in digital replicas. It affirms that every person has a “property right in their own likeness and voice,” and that these rights do not expire upon a person’s death; they can be transferred to heirs or designees for a period of 10 years after the individual’s death. It sets damages at $50,000 for each unauthorized violation by a personalized cloning service, or the actual damages suffered plus profits from the use. Damages are set at $5,000 per violation for unauthorized publication, performance, distribution or transmission of a digital voice replica or digital depiction, or the actual damages.
https://deadline.com/2024/01/ai-legislation-deepfakes-house-of-representatives-1235708983/
Jan 2024
Illinois
Lawmakers this spring approved a new protection for victims of “deepfake porn.” Starting in 2024, people who are falsely depicted in sexually explicit images or videos will be able to sue the creator of that material.
The law is an amendment to the state’s existing protections for victims of “revenge porn,” which went into effect in 2015.
In recent years, deepfakes – images and videos that falsely depict someone – have become more sophisticated with the advent of more readily available artificial intelligence tools. Women are disproportionately the subject of deepfake porn.
Some sponsors of the legislation, notably chief sponsor Rep. Jennifer Gong-Gershowitz, D-Glenview, have indicated interest in further regulating the use of artificial intelligence.
https://chicagocrusader.com/more-than-300-statutes-became-law-in-the-new-year/
Dec 2023
Prohibition on book bans, right to sue for ‘deepfake porn’ among new laws taking effect Jan. 1
https://www.nprillinois.org/illinois/2023-12-26/prohibition-on-book-bans-right-to-sue-for-deepfake-porn-among-new-laws-taking-effect-jan-1
Oct 2023 (New York, new law)
New York Bans Deepfake Revenge Porn Distribution as AI Use Grows
New York Gov. Kathy Hochul (D) on Friday signed into law legislation banning the dissemination of pornographic images made with artificial intelligence without the consent of the subject.
https://news.bloomberglaw.com/in-house-counsel/n-y-outlaws-unlawful-publication-of-deepfake-revenge-porn
https://hudsonvalleyone.com/2023/10/15/deepfake-porn-in-new-york-state-means-jail-time/
Oct 2023 (proposal)
Bill would ban 'deepfake' pornography in Wisconsin
Sep 2023 (NY bill)
Assemblymember Amy Paulin’s (D-Scarsdale) legislation, which makes the nonconsensual use of “deepfake” images disseminated in online communities a criminal offense, has been signed into law by Governor Hochul.
“Deepfakes” are fake or altered images or videos created through the use of artificial intelligence. Many of these images and videos map a face onto a pornographic image or video. Some create a pornographic image or video out of a still photograph. These pornographic images and films are sometimes posted online without the consent of those in them – often with devastating consequences to those portrayed in the images.
https://talkofthesound.com/2023/09/25/amy-paulin-dissemination-of-deepfake-images-now-a-crime-in-new-york/
Clarke told ABC News that her DEEPFAKES Accountability Act would provide prosecutors, regulators and particularly victims with resources, like detection technology, that Clarke believes they need to stand up against the threat posed by nefarious deepfakes.
https://abcnews.go.com/Politics/bill-criminalize-extremely-harmful-online-deepfakes/story?id=103286802
Sep 2023
Multimedia that have either been created (fully synthetic) or edited (partially synthetic) using some form of machine/deep learning (artificial intelligence) are referred to as deepfakes.
'Contextualizing Deepfake Threats to Organizations' PDF (file)
Sep 2023
Virginia revenge porn law updated to include deepfakes
Sep 2023
Artificial intelligence’s ability to generate deepfake content that easily fools humans poses a genuine threat to financial markets, the head of the U.S. Securities and Exchange Commission has warned.
https://news.bloomberglaw.com/artificial-intelligence/deepfakes-pose-real-risk-to-financial-markets-secs-gensler?source=newsletter&item=body-link&region=text-section