Wednesday, September 13, 2023

WORK UPDATE sep23

apr24
On the way to deep fake democracy? Deep fakes in election campaigns in 2023
Open-access research article, published 26 April 2024
https://link.springer.com/article/10.1057/s41304-024-00482-9

apr24

Two of the biggest deepfake pornography websites have now started blocking people trying to access them from the United Kingdom. The move comes days after the UK government announced plans for a new law that will make creating nonconsensual deepfakes a criminal offense.

Nonconsensual deepfake pornography websites and apps that “strip” clothes off of photos have been growing at an alarming rate—causing untold harm to the thousands of women they are used to target.

Clare McGlynn, a professor of law at Durham University, says the move is a “hugely significant moment” in the fight against deepfake abuse. “This ends the easy access and the normalization of deepfake sexual abuse material,” McGlynn tells WIRED.

Since deepfake technology first emerged in December 2017, it has consistently been used to create nonconsensual sexual images of women—swapping their faces into pornographic videos or allowing new “nude” images to be generated. As the technology has improved and become easier to access, hundreds of websites and apps have been created. Most recently, schoolchildren have been caught creating nudes of classmates.

https://www.wired.com/story/the-biggest-deepfake-porn-website-is-now-blocked-in-the-uk/


apr24

Indiana, Texas and Virginia in the past few years have enacted broad laws with penalties of up to a year in jail plus fines for anyone found guilty of sharing deepfake pornography. In Hawaii, the punishment is up to five years in prison.

Many states are combatting deepfake porn by adding to existing laws. Several, including Indiana, New York and Virginia, have enacted laws that add deepfakes to existing prohibitions on so-called revenge porn, or the posting of sexual images of a former partner without their consent. Georgia and Hawaii have targeted deepfake porn by updating their privacy laws.

Other states, such as Florida, South Dakota and Washington, have enacted laws that update the definition of child pornography to include deepfakes. Washington’s law, which was signed by Democratic Gov. Jay Inslee in March, makes it illegal to be in possession of a “fabricated depiction of an identifiable minor” engaging in a sexually explicit act — a crime punishable by up to a year in jail.

https://missouriindependent.com/2024/04/16/states-race-to-restrict-deepfake-porn-as-it-becomes-easier-to-create/



apr24 UNITED KINGDOM

The creation of sexually explicit "deepfake" images is to be made a criminal offence in England and Wales under a new law, the government says.

Under the legislation, anyone making explicit images of an adult without their consent will face a criminal record and unlimited fine.

It will apply regardless of whether the creator of an image intended to share it, the Ministry of Justice (MoJ) said.

And if the image is then shared more widely, they could face jail.

A deepfake is an image or video that has been digitally altered with the help of Artificial Intelligence (AI) to replace the face of one person with the face of another.

https://www.bbc.com/news/uk-68823042



apr24 MISSOURI

It seems like it took a while, but one of the Missouri bills criminalizing artificial intelligence deepfakes made it through the House and it’s on to the Senate. About time.

I wrote about H.B. 2628 weeks ago, and questioned whether its financial and criminal penalties — its teeth — were strong enough.

This bill focuses on those who create false political communication, like when a robocall on Election Day featured a fake President Joe Biden telling people to stay away from the New Hampshire polls. A second Missouri House bill, H.B. 2573, also tackles deepfakes, but focuses on creating intimate digital depictions of people — porn — without their consent.

It is still winding its way through the House. That bill, called the Taylor Swift Act, likely will get even more attention because of its celebrity name and lurid photo and video clones.

But H.B. 2628 may be even more important because digital fakers have the potential to change the course of an election and, dare I say, democracy itself. It passed, overwhelmingly but not unanimously — 133 to 5, with 24 absent or voting present. It was sent on to the Senate floor where it had its first reading.

https://ca.finance.yahoo.com/news/missouri-anti-deepfake-legislation-good-100700062.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAHFVDjMw8auK-p7Cz-o9pfgphkACHGyxpSL2oH3LZFHknvARRstLWDkJ2cSZ4rXvelzrYQuSakE-3z7iQQjzkMmienY7LfR8AvCQkt5p_XMUd1MayU1yDy-1ukpkNuZZaeoKyMvoJkjhtG2AckFPousHDLjvLcBYdBoFvtmd82wZ

MAR24 CANADA

Proposed amendments to the Canada Elections Act

Backgrounder

In Canada, the strength and resilience of our democracy is enhanced by a long tradition of regular evaluation and improvements to the Canada Elections Act (CEA). The CEA is the fundamental legislative framework that regulates Canada’s electoral process. It is independently administered by the Chief Electoral Officer and Elections Canada, with compliance and enforcement by the Commissioner of Canada Elections. The CEA is renowned for trailblazing political financing rules, strict spending limits, and robust reporting requirements intended to further transparency, fairness, and participation in Canada’s federal elections.

Recent experiences and lessons from the 2019 and 2021 general elections highlighted opportunities to further remove barriers to voting and encourage voter participation, protect personal information, and strengthen electoral safeguards. The amendments to the CEA would advance these key priorities, reinforcing trust in federal elections, their participants, and their results.

https://www.canada.ca/en/democratic-institutions/news/2024/03/proposed-amendments-to-the-canada-elections-act.html


mar24
New Hampshire

The New Hampshire state House advanced a bill Thursday that would require that political ads using deceptive artificial intelligence (AI) disclose use of the technology, adding to growing momentum in states to add AI regulations for election protection.

The bill passed without debate in the state House and will advance to the state Senate.

The bill advanced after New Hampshire voters received robocalls in January, ahead of the state’s primary elections, that included an AI-generated voice depicting President Biden. Steve Kramer, a veteran Democratic operative, admitted to being behind the fake robocalls and said he did so to draw attention to the dangers of AI in politics, NBC News reported in February.

https://thehill.com/policy/technology/4563917-new-hampshire-house-passes-ai-election-rules-after-biden-deepfake/


mar24

A new Washington state law will make it illegal to share fake pornography that appears to depict real people having sex.

Why it matters: Advancements in artificial intelligence have made it easy to use a single photograph to impose someone's features on realistic-looking "deepfake" porn.

  • Before now, however, state law hadn’t explicitly banned these kinds of digitally manipulated images.

Zoom in: The new Washington law, which Gov. Jay Inslee signed last week, will make it a gross misdemeanor to knowingly share fabricated intimate images of people without their consent.

  • People who create and share deepfake pornographic images of minors can be charged with felonies. So can those who share deepfake porn of adults more than once.
  • Victims will also be able to file civil lawsuits seeking damages.
What they're saying: "With this law, survivors of intimate and fabricated image-based violence have a path to justice," said state Rep. Tina Orwall (D-Des Moines), who sponsored the legislation, in a news release.

https://www.axios.com/local/seattle/2024/03/19/new-washington-law-criminalizes-deepfake-porn

mar24 Washington, Indiana ban AI porn images of real people
Indiana, Utah, and New Mexico targeting AI in elections


Half of the US population is now covered under state bans on nonconsensual explicit images made with artificial intelligence as part of a broader effort against AI-enabled abuses amid congressional inaction.

Washington Gov. Jay Inslee (D) on March 14 signed legislation (HB 1999) that allows adult victims to sue the creators of such content used with the emerging technology.

That followed Indiana Gov. Eric Holcomb (R) signing into law a similar bill (HB 1047) on March 12 that includes penalties such as misdemeanor charges for a first offense. Adult victims do not have a private right of action under the measure.

The laws join an emerging patchwork of state-level restrictions on the use of artificial intelligence as federal lawmakers continue mulling their own approach to potential abuses by the technology. Ten states had such laws in place at the beginning of 2024: California, Hawaii, Illinois, Minnesota, New York, South Dakota, Texas, Virginia, Florida, and Georgia.

https://news.bgov.com/states-of-play/more-states-ban-ai-deepfakes-months-after-taylor-swift-uproar?source=newsletter&item=body-link&region=text-section

mar24

The European Union is enacting the most comprehensive guardrails on the fast-developing world of artificial intelligence after the bloc’s parliament passed the AI Act on Wednesday.

The landmark set of rules, in the absence of any legislation from the US, could set the tone for how AI is governed in the Western world. But the legislation’s passage comes as companies worry the law goes too far and digital watchdogs say it doesn’t go far enough.

“Europe is now a global standard-setter in trustworthy AI,” Internal Market Commissioner Thierry Breton said in a statement.


The AI Act becomes law after member states sign off, which is usually a formality, and once it’s published in the EU’s Official Journal.

The new law is intended to address worries about bias, privacy and other risks from the rapidly evolving technology. The legislation would ban the use of AI for detecting emotions in workplaces and schools, as well as limit how it can be used in high-stakes situations like sorting job applications. It would also place the first restrictions on generative AI tools, which captured the world’s attention last year with the popularity of ChatGPT.

However, the bill has sparked concerns in the three months since officials reached a breakthrough provisional agreement after a marathon negotiation session that lasted more than 35 hours.

As talks reached the final stretch last year, the French and German governments pushed back against some of the strictest ideas for regulating generative AI, arguing that the rules will hurt European startups like France’s Mistral AI and Germany’s Aleph Alpha GmbH. Civil society groups like Corporate Europe Observatory (CEO) raised concerns about the influence that Big Tech and European companies had in shaping the final text.

“This one-sided influence meant that ‘general purpose AI,’ was largely exempted from the rules and only required to comply with a few transparency obligations,” watchdogs including CEO and LobbyControl wrote in a statement, referring to AI systems capable of performing a wider range of tasks.

A recent announcement that Mistral had partnered with Microsoft Corp. raised concerns from some lawmakers. Kai Zenner, a parliamentary assistant who was key in the writing of the act and is now an adviser to the United Nations on AI policy, wrote that the move was strategically smart and “maybe even necessary” for the French startup, but said “the EU legislator got played again.”

Brando Benifei, a lawmaker and leading author of the act, said the results speak for themselves. “The legislation is clearly defining the needs for safety of most powerful models with clear criteria, and so it’s clear that we stood on our feet,” he said Wednesday in a news conference.

US and European companies have also raised concerns that the law will limit the bloc’s competitiveness.

“With a limited digital tech industry and relatively low investment compared with industry giants like the United States and China, the EU’s ambitions of technological sovereignty and AI leadership face considerable hurdles,” wrote Raluca Csernatoni, a research fellow at the Carnegie Europe think tank.

Lawmakers during Tuesday’s debate acknowledged that there is still significant work ahead. The EU is in the process of setting up its AI Office, an independent body within the European Commission. In practice, the office will be the key enforcer, with the ability to request information from companies developing generative AI and possibly ban a system from operating in the bloc.

“The rules we have passed in this mandate to govern the digital domain — not just the AI Act — are truly historical, pioneering,” said Dragos Tudorache, a European Parliament member who was also one of the leading authors. “But making them all work in harmony with the desired eff…”

https://news.bloomberglaw.com/artificial-intelligence/eu-embraces-new-ai-rules-despite-doubts-it-got-the-right-balance?source=breaking-news&item=headline&region=featured-story&login=blaw


feb24
GEORGIA

The Georgia state House voted Thursday to crack down on deepfake artificial intelligence (AI) videos ahead of this year’s elections.

The House voted 148-22 to approve the legislation, which attempts to stop the spread of misinformation from deceptive video impersonating candidates.

The legislation, H.B. 986, would make it a felony to publish a deepfake within 90 days of an election with the intention of misleading or confusing voters about a candidate or their chance of being elected.

The bill would allow the attorney general to have jurisdiction over the crimes and allow the state election board to publish the findings of investigations.

One of the sponsors of the bill, state Rep. Brad Thomas (R), celebrated the vote on social media.

“I am thrilled to inform you that House Bill 986 has successfully passed the House! This is a significant step towards upholding the integrity and impartiality of our electoral process by making the use of AI to interfere with elections a criminal offense,” Thomas posted on X, the platform formerly known as Twitter.
https://thehill.com/homenews/state-watch/4485098-georgia-house-approves-crackdown-on-deepfake-ai-videos-before-elections/

feb24
Washington
In Washington, a new bill, HB 1999, is making waves in the fight against deepfakes. The legislation, introduced to address the alarming issue of sexually explicit content involving minors, aims to close legal loopholes and provide legal recourse for victims of deepfake abuse.
https://bnnbreaking.com/tech/new-bill-hb-1999-takes-a-stand-against-deepfakes-in-washington

feb24
NEW MEXICO

A proposal to require public disclosure whenever a political campaign in the state uses false information generated by artificial intelligence in a campaign advertisement gained approval from the New Mexico House of Representatives on Monday night.

After about an hour of debate, the House voted 38-28 to pass House Bill 182, which would amend the state’s Campaign Reporting Act to require political campaigns to disclose whenever they use artificial intelligence in their ads, and would make it a crime to use artificially-generated ads to intentionally deceive voters.
https://sourcenm.com/2024/02/13/deepfake-disclosure-bill-passes-nm-house/


feb24

Large tech platforms including TikTok, X and Facebook will soon have to identify AI-generated content in order to protect the upcoming European election from disinformation.

"We know that this electoral period that's opening up in the European Union is going to be targeted either via hybrid attacks or foreign interference of all kinds," Internal Market Commissioner Thierry Breton told European lawmakers in Strasbourg on Wednesday. "We can't have half-baked measures."

Breton didn't say when exactly companies will be compelled to label manipulated content under the EU's content moderation law, the Digital Services Act (DSA). Breton oversees the Commission branch enforcing the DSA on the largest European social media and video platforms, including Facebook, Instagram and YouTube.
https://www.politico.eu/article/eu-big-tech-help-deepfake-proof-election-2024/


jan24

A bipartisan group of three senators is looking to give victims of sexually explicit deepfake images a way to hold their creators and distributors responsible.

Sens. Dick Durbin, D-Ill.; Lindsey Graham, R-S.C.; and Josh Hawley, R-Mo., plan to introduce the Disrupt Explicit Forged Images and Non-Consensual Edits Act on Tuesday, a day ahead of a Senate Judiciary Committee hearing on internet safety with CEOs from Meta, X, Snap and other companies. Durbin chairs the panel, while Graham is the committee’s top Republican.

Victims would be able to sue people involved in the creation and distribution of such images if the person knew or recklessly disregarded that the victim did not consent to the material. The bill would classify such material as a “digital forgery” and create a 10-year statute of limitations. 

https://www.nbcnews.com/tech/tech-news/deepfake-bill-open-door-victims-sue-creators-rcna136434


jan24

South Korea

South Korea's special parliamentary committee on Tuesday (Jan 30) passed a revision to the Public Official Election Act which called for a ban on political campaign videos that use AI-generated deepfakes in the election season.

https://www.wionews.com/world/south-korea-imposes-90-day-ban-on-deepfake-political-campaign-videos-685152


jan24

As artificial intelligence starts to reshape society in ways predictable and not, some of Colorado’s highest-profile federal lawmakers are trying to establish guardrails without shutting down the technology altogether.

U.S. Rep. Ken Buck, a Windsor Republican, is cosponsoring legislation with California Democrat Ted Lieu to create a national commission focused on regulating the technology and another bill to keep AI from unilaterally firing nuclear weapons.

Sen. Michael Bennet, a Democrat, has publicly urged the leader of his caucus, Majority Leader Chuck Schumer, to carefully consider the path forward on regulating AI — while warning about the lessons learned from social media’s organic development. Sen. John Hickenlooper, also a Democrat, chaired a subcommittee hearing last September on the matter, too.

https://www.denverpost.com/2024/01/28/artificial-intelligence-congress-regulation-colorado-michael-bennet-ken-buck-elections-deepfakes/

jan24

US politicians have called for new laws to criminalise the creation of deepfake images, after explicit faked photos of Taylor Swift were viewed millions of times online.

The images were posted on social media sites, including X and Telegram.

US Representative Joe Morelle called the spread of the pictures "appalling".

In a statement, X said it was "actively removing" the images and taking "appropriate actions" against the accounts involved in spreading them.

It added: "We're closely monitoring the situation to ensure that any further violations are immediately addressed, and the content is removed."

While many of the images appear to have been removed at the time of publication, one photo of Swift was viewed a reported 47 million times before being taken down.

The name "Taylor Swift" is no longer searchable on X, alongside terms such as "Taylor Swift AI" and "Taylor AI".

https://www.bbc.com/news/technology-68110476



jan24 Daily Mail

The answer lies largely in the lack of laws to prosecute those who make such content.

There is currently no federal legislation against the conduct and only six states – New York, Minnesota, Texas, Hawaii, Virginia and Georgia – have passed legislation which criminalizes it.

In Texas, a bill was enacted in September 2023 which made it an offense to create or share deepfake images without permission which 'depict the person with the person's intimate parts exposed or engaged in sexual conduct'.

The offense is a Class A misdemeanor and punishments include up to a year in prison and fines up to $4,000.

In Minnesota, the crime can carry a three-year sentence and fines up to $5,000.

Several of these laws were introduced following earlier legislation which outlawed the use of deepfakes to influence an election, such as through the creation of fake images or videos which portray a politician or public official.

A handful of other states, including California and Illinois, don't have laws against the act but instead allow deepfake victims to sue perpetrators. Critics have said this doesn't go far enough and that, in many cases, the creator is unknown.

 

At the federal level, Joe Biden signed an executive order in October which called for a ban on the use of generative AI to make child abuse images or nonconsensual 'intimate images' of real people. But this was purely symbolic and does not create a means to punish makers.

The finding that 415,000 deepfake images were posted online last year was made by Genevieve Oh, a researcher who analyzed the top ten websites which host such content.

Oh also found 143,000 deepfake videos were uploaded in 2023 – more than during the previous six years combined. The videos, published across 40 different websites which host fake videos, were viewed more than 4.2 billion times.

Outside of states where laws which criminalize the conduct exist, victims and prosecutors must rely on existing legislation which can be used to charge offenders.

These include laws around cyberbullying, extortion and harassment. Victims who are blackmailed or subject to repeated abuse can attempt to use these laws against perpetrators who weaponize deepfake images.

But they do not prohibit the fundamental act of creating a hyper-realistic, explicit photo of a person, then sharing it with the world, without their consent.

A 14-year-old girl from New Jersey who was depicted in a pornographic deepfake image created by one of her male classmates is now leading a campaign to have a federal law passed.

Francesca Mani and her mom, Dorota Mani, recently met with members of Congress at Capitol Hill to push for laws targeting perpetrators.

https://www.dailymail.co.uk/news/article-13007753/deepfake-porn-laws-internet.html



jan24

COLUMBUS, Ohio (WCMH) – As AI becomes more popular, there is a rising concern about “deepfakes.” A deepfake is a “convincing image, video or audio hoax,” created using AI that impersonates someone or makes up an event that never happened.

At the Ohio Statehouse, Representatives Brett Hillyer (R-Uhrichsville) and Adam Mathews (R-Lebanon) just introduced House Bill 367 to address issues that may arise with the new technology.


“In my day to day I see how important people’s name image and likeness and the copyright there is within it,” Mathews said.

Mathews said the intent of the bill is to make sure everyone, not just high-profile people, is protected. Right now, Mathews said, functionally the only way one can go after someone for using their name, image or likeness (NIL) is if they’re using it to say you endorse a product or to defraud someone.

“I wanted to put every single Ohioan at the same level as our most famous residents from Joe Burrow to Ryan Day,” Mathews said. “There are a lot of things people can do with your name, image and likeness that could be harmful to your psyche or reputation.”

The bill makes it so any Ohioan can go after someone who uses their NIL for a deepfake, with fines as high as $15,000 for the creation of a malicious deepfake; the court can also order that a malicious deepfake be taken down.

https://news.yahoo.com/bill-introduced-statehouse-protect-ohioans-230000251.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLnB0Lw&guce_referrer_sig=AQAAAENj6zcIxbhxJAo2pJ_AmmsXSVymCSWCxsInYs7yVnzFtgMmgXTbh3aCW4mrWfEnG8C_JQ_juc-EH5259FdOiw8tDTkLfe1UpxXxtl93u5IpvpnNv15CircmHtj6i1Rbz1b6mkqAkaYG6pZpGEMIVKs2KtScO62yGmLZOkvbJSmb


jan24

A pair of U.S. House of Representatives members have introduced a bill intended to restrict unauthorized fraudulent digital replicas of people.

The bulk of the motivation behind the legislation, based on the wording of the bill, is the protection of actors, other public figures, and the girls and women defamed through fraudulent porn made with their face template.

Curiously, the very real threat of using deepfakes to defraud just about everyone else in the nation is not mentioned. Those risks are growing and could result in uncountable financial damages as organizations rely on voice and face biometrics for ID verification.

The representatives, María Elvira Salazar (R-Fla.) and Madeleine Dean (D-Penn.), do not mention the singer-songwriter Taylor Swift in their press release, but it cannot have escaped them that she has been victimized, too.

https://www.biometricupdate.com/202401/us-lawmakers-attack-categories-of-deepfake-but-miss-everyday-fraud


jan24

House lawmakers introduced legislation to try to curb the unauthorized use of deepfakes and voice clones.

The legislation, the No AI Fraud Act, is sponsored by Rep. Maria Salazar (R-FL), Rep. Madeleine Dean (D-PA), Rep. Nathaniel Moran (R-TX), Rep. Joe Morelle (D-NY) and Rep. Rob Wittman (R-VA). The legislation would give individuals more control over the use of their identifying characteristics in digital replicas. It affirms that every person has a “property right in their own likeness and voice,” a right that does not expire upon the person’s death and can be transferred to heirs or designees for a period of 10 years after the individual’s death. It sets damages at $50,000 for each unauthorized violation by a personalized cloning service, or the actual damages suffered plus profits from the use. Damages are set at $5,000 per violation for unauthorized publication, performance, distribution or transmission of a digital voice replica or digital depiction, or the actual damages.

https://deadline.com/2024/01/ai-legislation-deepfakes-house-of-representatives-1235708983/


jan24

Illinois

Lawmakers this spring approved a new protection for victims of “deepfake porn.” Starting in 2024, people who are falsely depicted in sexually explicit images or videos will be able to sue the creator of that material.

The law is an amendment to the state’s existing protections for victims of “revenge porn,” which went into effect in 2015.

In recent years, deepfakes – images and videos that falsely depict someone – have become more sophisticated with the advent of more readily available artificial intelligence tools. Women are disproportionately the subject of deepfake porn.

Some sponsors of the legislation, notably chief sponsor Rep. Jennifer Gong-Gershowitz, D-Glenview, have indicated interest in further regulating the use of artificial intelligence.

https://chicagocrusader.com/more-than-300-statutes-became-law-in-the-new-year/

dec23

Prohibition on book bans, right to sue for ‘deepfake porn’ among new laws taking effect Jan. 1


https://www.nprillinois.org/illinois/2023-12-26/prohibition-on-book-bans-right-to-sue-for-deepfake-porn-among-new-laws-taking-effect-jan-1

oct23 NEW YORK NEW LAW

New York Bans Deepfake Revenge Porn Distribution as AI Use Grows

New York Gov. Kathy Hochul (D) on Friday signed into law legislation banning the dissemination of pornographic images made with artificial intelligence without the consent of the subject.

https://news.bloomberglaw.com/in-house-counsel/n-y-outlaws-unlawful-publication-of-deepfake-revenge-porn

https://hudsonvalleyone.com/2023/10/15/deepfake-porn-in-new-york-state-means-jail-time/



oct23 PROPOSAL

Bill would ban 'deepfake' pornography in Wisconsin


https://eu.jsonline.com/story/news/politics/2023/10/02/proposed-legislation-targets-deepfake-pornography/71033726007/

sep23 NY Bill

Assemblymember Amy Paulin’s (D-Scarsdale) legislation, which makes the nonconsensual use of “deepfake” images disseminated in online communities a criminal offense, has been signed into law by Governor Hochul.

“Deepfakes” are fake or altered images or videos created through the use of artificial intelligence. Many of these images and videos map a face onto a pornographic image or video. Some create a pornographic image or video out of a still photograph. These pornographic images and films are sometimes posted online without the consent of those in them – often with devastating consequences to those portrayed in the images.

https://talkofthesound.com/2023/09/25/amy-paulin-dissemination-of-deepfake-images-now-a-crime-in-new-york/

Rep. Yvette Clarke (D-NY) told ABC News that her DEEPFAKES Accountability Act would provide prosecutors, regulators and particularly victims with resources, like detection technology, that Clarke believes they need to stand up against the threat posed by nefarious deepfakes.

https://abcnews.go.com/Politics/bill-criminalize-extremely-harmful-online-deepfakes/story?id=103286802



sep23

Multimedia that have either been created (fully synthetic) or edited (partially synthetic) using some form of machine/deep learning (artificial intelligence) are referred to as deepfakes.

'Contextualizing Deepfake Threats to Organizations' PDF (archived file)

https://media.defense.gov/2023/Sep/12/2003298925/-1/-1/0/CSI-DEEPFAKE-THREATS.PDF

sep23

Virginia revenge porn law updated to include deepfakes


https://eu.usatoday.com/videos/tech/2023/09/12/virginia-revenge-porn-law-updated-include-deepfakes/1637140001/

 

================

sep23

Artificial intelligence’s ability to generate deepfake content that easily fools humans poses a genuine threat to financial markets, the head of the Securities and Exchange Commission warned.

https://news.bloomberglaw.com/artificial-intelligence/deepfakes-pose-real-risk-to-financial-markets-secs-gensler?source=newsletter&item=body-link&region=text-section