Tuesday, July 18, 2023

Atrioc helps remove 200k deepfakes after paying for images of female streamers

 jul23

Back in January 2023, Atrioc issued a tearful on-stream apology after he accidentally revealed that he had paid for deepfake images of female streamers.

He swiftly stepped down from his position in OFFBRAND, the agency he started alongside Ludwig, and announced a break from streaming.

Atrioc uploaded an update video on July 17, 2023, detailing the progress he has made toward removing the controversial content.

Atrioc removes deepfakes of Amouranth & more

In the video, Atrioc revealed that he has been working with a company to automatically issue DMCA takedown notices to remove deepfake content across the internet.

With an initial budget of $100,000 and a goal of removing 100k pieces of deepfake content, Ewing revealed that he and his team blew past both figures.

“We didn’t hit that number, we smashed it. I’m super proud to say that as of July 9, 2023, we got 193,000 things taken down,” he said. “We did it with $122,000.”
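The Dexerto piece does not say how the takedown service works under the hood. Purely as a hypothetical illustration of what bulk DMCA issuance can involve, the Python sketch below turns a list of infringing URLs into individual takedown-notice files; every name in it (the URL list, the notice template, the output directory) is an assumption made for the example, not a description of the actual system Atrioc used.

```python
"""Hypothetical sketch of bulk DMCA notice generation.

This is NOT the system described in the article; it only illustrates the
general idea of turning a list of infringing URLs into takedown notices.
All names (template fields, sample URLs, output directory) are assumptions.
"""

from datetime import date
from pathlib import Path

# Simple DMCA takedown notice template; placeholders are filled in per URL.
NOTICE_TEMPLATE = """\
To whom it may concern,

I am writing on behalf of {rights_holder} to report content that infringes
their rights, hosted at:

    {infringing_url}

I have a good-faith belief that the use of this material is not authorized
by the rights holder, their agent, or the law. The information in this
notice is accurate, and I request that the content be removed.

Date: {today}
Signed: {agent_name}
"""


def generate_notices(urls: list[str], rights_holder: str, agent_name: str,
                     out_dir: Path) -> list[Path]:
    """Write one notice file per infringing URL and return the file paths."""
    out_dir.mkdir(parents=True, exist_ok=True)
    written = []
    for i, url in enumerate(urls, start=1):
        notice = NOTICE_TEMPLATE.format(
            rights_holder=rights_holder,
            infringing_url=url,
            today=date.today().isoformat(),
            agent_name=agent_name,
        )
        path = out_dir / f"notice_{i:06d}.txt"
        path.write_text(notice, encoding="utf-8")
        written.append(path)
    return written


if __name__ == "__main__":
    # Example run with placeholder data.
    sample_urls = [
        "https://example.com/deepfake-clip-1",
        "https://example.com/deepfake-clip-2",
    ]
    files = generate_notices(sample_urls, rights_holder="Affected Streamer",
                             agent_name="Takedown Agent", out_dir=Path("notices"))
    print(f"Generated {len(files)} notices")
```

A real pipeline would also have to discover the infringing URLs in the first place and deliver each notice to the host's designated DMCA agent, which this sketch deliberately leaves out.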

https://www.dexerto.com/entertainment/atrioc-helps-remove-200k-deepfakes-after-paying-for-images-of-female-streamers-2214777/

Thursday, July 6, 2023

Freedom of expression

dez23

Imran Khan—Pakistan’s Jailed Ex-Leader—Uses AI Deepfake To Address Online Election Rally
Siladitya Ray
Forbes Staff

Dec 18, 2023, 07:50am EST
Updated Dec 18, 2023, 07:50am EST

Former Pakistani Prime Minister Imran Khan, who is serving a three-year prison sentence, used AI-generated voice and video in a clip to campaign for his party ahead of the country’s upcoming general election, spotlighting the potential use of AI and deepfakes as major polls are scheduled in the U.S., India, the European Union, Russia, Taiwan and beyond in 2024.

Khan’s party, Pakistan Tehreek-e-Insaf (PTI), held an online campaign rally featuring an AI-generated video of the former leader addressing his supporters and urging them to vote in large numbers for his party.

The video clip, which is about four minutes long, features an AI-generated voice resembling Khan’s delivering the speech and briefly includes an AI-generated deepfake video of the jailed leader sitting in front of a Pakistani flag.

In the video, Khan tells his supporters that his party has been barred from holding public rallies and talks about his party members being targeted, kidnapped and harassed.

PTI social media lead Jibran Ilyas said the content of the video is based on notes provided by Khan from prison, adding that the AI-generated voice feels like a “65-70%” match of the former Prime Minister.

The nearly five-hour livestream of the virtual rally has clocked up more than 1.5 million views on YouTube, although Khan’s AI-generated clip is only around four and a half minutes long.

https://www.forbes.com/sites/siladityaray/2023/12/18/imran-khan-pakistans-jailed-ex-leader-uses-ai-deepfake-to-address-online-election-rally/?sh=6e5c48e55903


jul23

Not everything disgusting violates rights, and only things that violate rights should be treated as crimes — or even actionable torts.

A 1996 Joe Klein novel and 1998 film, “Primary Colors,” featured characters who were, recognizably, Bill and Hillary Clinton and members of the Clinton inner circle. They’re portrayed as engaging in actions that may or may not have actually happened in real life, some of which arguably, to borrow a phrase from Supreme Court obscenity rulings, “appeal to a prurient interest.”

Librarian Daria Carter-Clark, who had good reason to believe that one of the characters portrayed as having engaged in a sexual fling with the Bill Clinton character was based on her, sued for libel. She lost. Romans à clef — works in which real-life people and events are given fictional treatment — enjoy the same constitutional protections as other fiction.

And that’s exactly how it should be.

https://restorationnewsmedia.com/articles/columns-butnercreedmoor/deepfake-porn-is-creepy-disgusting-and-protected-speech/

 ab22


Despite the immense threat of deepfakes, there are many, many limitations to legislating them, especially informational deepfakes. Such limitations include Section 230 of the Communications Decency Act, the First Amendment, copyright law’s fair use doctrine, and nonconsensual pornography laws.

Section 230 is the reason why we can communicate freely on the internet, ranging from silly memes about storming Area 51, to actual plans about storming other important governmental buildings. It states that “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” Basically, Twitter, for example, cannot get sued because of what people are saying on Twitter. Individual users can potentially be punished for libel, slander, and more, but the platform itself is not at risk. This also means that the government cannot force websites to ban specific forms of speech or expression, leaving the regulation of speech on the platform up to the owners of the platform. This is great for free speech and enables all kinds of wonderful expression on the internet. 

However, this does present some issues when it comes to preventing the spread of nonconsensual pornography, deepfakes, and general misinformation. Even though the original poster can be punished for what they post (which is quite difficult and exceedingly rare), it is impossible to prevent others from reposting the same thing, enabling it to spread like wildfire. Despite the law’s pitfalls, amending Section 230 would not only restrict free expression on the internet but would also fail to be a complete solution to this problem. After all, posting deepfakes or misinformation is not itself illegal.

Much like Section 230, the First Amendment is an essential part of American democracy that enables and protects free speech and expression. But it also protects those who post pornographic or informational deepfakes. In some interpretations, the creation of deepfakes is an act of expression. In Hustler Magazine, Inc. v. Falwell, the Supreme Court rejected an emotional distress claim over a parody accusing a minister of incest, holding that proof of falsity and actual malice was necessary to give adequate “breathing space” to the freedoms protected by the First Amendment when the speech involves public figures or matters of public concern. Once again, this court precedent is beneficial in that it protects free speech, but it makes it difficult to legislate deepfakes, particularly informational deepfakes.

Another common defense of informational deepfakes is that they are not defamatory but are instead parodies or satire, and are thus protected under the First Amendment. And in many instances, this is true; what else could a “Donald Trump And Barack Obama Singing Barbie Girl By Aqua” video be labeled? In a world full of memes in every different form about everything, how can we differentiate?

https://www.davispoliticalreview.com/article/deepfakes-and-american-law

jul23

Los Angeles-based video editor and political satirist Justin T. Brown has found himself at the center of a contentious debate thanks to his AI-generated images that portray prominent politicians such as Donald Trump, Barack Obama, and Joe Biden engaged in fictional infidelities.

His provocative series, christened “AI will revolutionize the blackmail industry,” quickly came under fire, leading to Brown’s banishment from the Midjourney AI art platform, which he used to generate the pictures.

Brown said the images were envisioned as a stark warning about the potential misuse of artificial intelligence.

https://finance.yahoo.com/news/political-satirist-slammed-creating-deepfakes-120103536.html