Monday, February 6, 2023

chatbox



May 2023

Next year’s elections in Britain and the US could be marked by a wave of AI-powered disinformation, experts have warned, as generated images, text and deepfake videos go viral at the behest of swarms of AI-powered propaganda bots.

Sam Altman, CEO of the ChatGPT creator, OpenAI, told a congressional hearing in Washington this week that the models behind the latest generation of AI technology could manipulate users.

“The general ability of these models to manipulate and persuade, to provide one-on-one interactive disinformation is a significant area of concern,” he said.

“Regulation would be quite wise: people need to know if they’re talking to an AI, or if content that they’re looking at is generated or not. The ability to really model … to predict humans, I think is going to require a combination of companies doing the right thing, regulation and public education.”

The prime minister, Rishi Sunak, said on Thursday the UK would lead on limiting the dangers of AI. Concerns over the technology have soared after breakthroughs in generative AI, where tools like ChatGPT and Midjourney produce convincing text, images and even voice on command.

https://www.theguardian.com/technology/2023/may/20/elections-in-uk-and-us-at-risk-from-ai-driven-disinformation-say-experts



May 2023

As generative AI developers such as ChatGPT, DALL-E 2, and AlphaCode barrel ahead at a breakneck pace, keeping the technology from hallucinating and spewing erroneous or offensive responses is nearly impossible.

Especially as AI tools get better by the day at mimicking natural language, it will soon be impossible to discern fake results from real ones, prompting companies to set up “guardrails” against the worst outcomes, whether they be accidental or intentional efforts by bad actors.

AI industry experts speaking at the MIT Technology Review's EmTech Digital conference this week weighed in on how generative AI companies are dealing with a variety of ethical and practical hurdles, even as they push ahead on developing the next generation of the technology.
https://www.computerworld.com/article/3695508/ai-deep-fakes-mistakes-and-biases-may-be-unavoidable-but-controllable.html

Apr 2023

Even worse, chatbots like ChatGPT are starting to generate realistic scripts with adaptive real-time responses. By combining these technologies with voice generation, a deepfake goes from being a static recording to a live, lifelike avatar that can convincingly have a phone conversation.

CLONING A VOICE

Crafting a compelling, high-quality deepfake, whether video or audio, is not the easiest thing to do. It requires a wealth of artistic and technical skill, powerful hardware and a fairly hefty sample of the target voice.

There are a growing number of services offering to produce moderate- to high-quality voice clones for a fee, and some voice deepfake tools need a sample only a minute long, or even just a few seconds, to produce a voice clone that could be convincing enough to fool someone. However, to convince a loved one – for example, to use in an impersonation scam – it would likely take a significantly larger sample.
https://businessmirror.com.ph/2023/04/12/voice-deepfakes-are-calling-heres-what-they-are-and-how-to-avoid-getting-scammed/

Apr 2023

ChatGPT has opened a new front in the fake news wars

Search engines with the latest ‘generative AI’ obscure the sources for their responses. The result is a breeding ground for disinformation, writes Jessica Cecil.

https://www.chathamhouse.org/publications/the-world-today/2023-04/chatgpt-has-opened-new-front-fake-news-wars


Apr 2023

ChatGPT invented a sexual harassment scandal and named a real law prof as the accused
The AI chatbot can misrepresent key facts with great flourish, even citing a fake Washington Post article as evidence


One night last week, the law professor Jonathan Turley got a troubling email. As part of a research study, a fellow lawyer in California had asked the AI chatbot ChatGPT to generate a list of legal scholars who had sexually harassed someone. Turley’s name was on the list.
The chatbot, created by OpenAI, said Turley had made sexually suggestive comments and attempted to touch a student while on a class trip to Alaska, citing a March 2018 article in The Washington Post as the source of the information. The problem: No such article existed. There had never been a class trip to Alaska. And Turley said he’d never been accused of harassing a student.

A regular commentator in the media, Turley had sometimes asked for corrections in news stories. But this time, there was no journalist or editor to call — and no way to correct the record.
https://www.washingtonpost.com/technology/2023/04/05/chatgpt-lies/

Apr 2023

The Generative AI News (GAIN) rundown for April 6, 2023, focused on regulators and OpenAI, ChatGPT’s popularity compared to the iPhone, deepfake disclosure, authentication and ownership, monetizing those generative AI models, what’s Meta doing, and more.

Bret Kinsella (that’s me) hosts this week with guests Nina Schick, the author of the 2020 book Deepfakes, and Eric Schwartz, head writer at Voicebot.ai. The top stories in generative AI land this week include:

CHATGPT GETS BANNED

https://voicebot.ai/2023/04/08/generative-ai-news-rundown-chatgpt-gets-banned-deepfakes-get-provenance-bing-chat-gets-ads-meta-canva-more-voicebot-podcast-ep-313/


Feb 2023

GPT-powered deepfakes are a ‘powder keg’

Dozens of startups are using generative AI to make shiny happy virtual people for fun and profit. Large language models like GPT add a complicated new dimension.

https://www.fastcompany.com/90853542/deepfakes-getting-smarter-thanks-to-gpt

Feb 2023

ChatGPT, An Artificial Intelligence Chatbot, Is Impacting Medical Literature

https://www.arthroscopyjournal.org/article/S0749-8063(23)00033-6/fulltext
