Friday, March 20, 2020

Consequences - crime

set22
Deepfakes represent both the best and the worst of what technology can do. The technique has pushed machine learning to previously unforeseen heights, but it has also become a tool of misinformation and propaganda.

There has been some progress in distinguishing deepfakes from genuine recordings, but Microsoft’s Chief Science Officer, Eric Horvitz, argues that new and greater threats are on the horizon. He recently published a research paper that highlights two separate classes of deepfake threats, namely interactive and compositional deepfakes.

Interactive deepfakes are relatively accurate representations of individuals that you can hold a conversation with; many people would struggle to tell whether the person on the other end was real or a simulation. Compositional deepfakes are even more dangerous: they could allow threat actors to create entire artificial histories that lend their deepfakes more credence and make them harder to dispute.

Horvitz has frequently commented on the threats that deepfakes pose, but advances in the technology are taking those threats to a whole other level. He is not just a pearl-clutcher in that respect, however. Horvitz has tried to come up with adequate solutions that private and government enterprises can use to fight deepfakes.

https://www.digitalinformationworld.com/2022/09/the-threat-of-deepfakes-is-about-to-get.html

ago22
cybersecurity

Deepfakes are increasingly being used in cyberattacks, a new report said, as the threat of the technology moves from hypothetical harms to real ones.

Reports of attacks using the face- and voice-altering technology jumped 13% last year, according to VMware's annual Global Incident Response Threat Report, which was released Monday. In addition, 66% of the cybersecurity professionals surveyed for this year's report said they had spotted one in the past year.

https://www.cnet.com/tech/services-and-software/deepfakes-pose-a-growing-danger-new-research-says/


mai22
Cryptocurrency scammers

Cryptocurrency scammers are using deepfake videos of Elon Musk and other prominent cryptocurrency advocates to promote a BitVex trading platform scam that steals deposited currency.

This fake BitVex cryptocurrency trading platform claims to be owned by Elon Musk, who created the site to allow everyone to earn up to 30% returns on their crypto deposits.
https://www.bleepingcomputer.com/news/security/elon-musk-deep-fakes-promote-new-bitvex-cryptocurrency-scam/


ma22
SCIENTIFIC PUBLICATIONS
There is an increasing risk of people using advanced artificial intelligence, particularly generative adversarial networks (GANs), to manipulate scientific images for publication. We demonstrate this possibility by using a GAN to fabricate several different types of biomedical images and discuss possible ways to detect and prevent such scientific misconduct in research communities.
https://www.sciencedirect.com/science/article/pii/S2666389922001015



ab22

Deepfake technology is set to be used extensively in organized crime over the coming years, according to new research by Europol.

Deepfakes involve the application of artificial intelligence to audio and audio-visual content “that convincingly shows people saying or doing things they never did, or create personas that never existed in the first place.”
https://www.infosecurity-magazine.com/news/europol-deepfakes-organized-crime/


abr22

(Thailand) Scam call centres are now using ‘deepfake’ videos to make their scams look more realistic. In a ‘deepfake’ clip circulating online, a policeman from Chon Buri appears to be talking, but his strange mouth movements give away that the video is not real. The clip shows a video call between a ‘police officer’ and an intended victim. The victim didn’t fall for the trick: they could tell the scammer was using deepfake technology to pose as a police officer because of his mouth movements, and when the scammer laughed, it created unnatural-looking movements on the officer’s face. The citizen played along with the scam, recorded the call, and later posted the clip on TikTok to warn other members of the public not to fall for it. https://thethaiger.com/news/national/call-centre-scammers-make-deepfake-video-of-chon-buri-police-officer-to-lure-in-victims + https://www.sanook.com/news/8542654/?fbclid=IwAR3GwXszaGk3hpwTI8y2r4wPBiLvkwRps00AaJNNCxT70EFm2EZqnAxm7L4



mar22
Move Over Global Disinformation Campaigns, Deepfakes Have a New Role: Corporate Spamming.
Stanford researchers turned up over 1,000 LinkedIn accounts using images that they say appear to have been created by a computer
https://gizmodo.com/move-over-global-disinformation-campaigns-deepfakes-ha-1848716481


A new report says fake accounts with computer-generated faces are now a thing on the professional networking website, LinkedIn. Renée DiResta of the Stanford Internet Observatory, who had made a name for herself in the past analyzing Russian disinformation campaigns online, said she began the research after she came across a profile on the website that had tried to connect with her. The person asked her, “Quick question — have you ever considered or looked into a unified approach to message, video, and phone on any device, anywhere?” The bio in this person’s profile included an undergraduate business degree from New York University. Some of the interests included CNN and Melinda French Gates. https://siliconangle.com/2022/03/28/researchers-say-deepfake-accounts-making-rounds-linkedin/


mar22
THAILAND

Deputy police spokesman Pol Colonel Siriwat Deephor said on Friday that so-called “call centre gangs” have adopted the “Deepfake” technology to convince their potential victims that they are getting a video call from a police officer.

In a recent incident, scammers used a publicly available video clip of a police officer giving an interview and superimposed the lower part of his face to look like the policeman was speaking to a potential victim.

https://www.nationthailand.com/in-focus/40013287


jan22
Romance fraud uses the guise of a genuine relationship to deceive a victim for a financial gain. Each year, millions of individuals globally lose money to these approaches. Current prevention messaging focuses heavily on promoting the use of internet searches (specifically reverse image searches) to verify or refute the identity/scenario that one is being presented with. For those who choose to do this, it can be successful and avoid initial financial losses or reduce the overall amount of money lost to an offender. However, as technology evolves, it is likely offenders will alter their methods to deceive victims. This is already evident through the rapid progression and improvements of artificial intelligence and deepfakes to create unique images.
https://link.springer.com/article/10.1057/s41300-021-00134-w
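The reverse image searches mentioned above work, at their core, by comparing compact fingerprints of images rather than raw pixels, which is exactly what freshly synthesized faces defeat: a unique GAN-generated image has no near-duplicate anywhere to match against. As a toy sketch of one such fingerprint, here is a minimal average hash in Python; the 8×8 grid and the 8-bit Hamming threshold are illustrative assumptions, not how any particular search engine works:

```python
def average_hash(pixels):
    """Compute a 64-bit average hash from an 8x8 grayscale grid.

    pixels: 8 rows of 8 brightness values (0-255). Each output bit is 1
    if the corresponding pixel is brighter than the grid's mean.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of bits that differ between two hashes."""
    return bin(a ^ b).count("1")

# A slightly brightened copy hashes (almost) identically; an unrelated
# image lands far away in Hamming distance.
img = [[10 * (r + c) % 256 for c in range(8)] for r in range(8)]
tweaked = [[min(255, p + 3) for p in row] for row in img]
other = [[(r * c * 37) % 256 for c in range(8)] for r in range(8)]

assert hamming(average_hash(img), average_hash(tweaked)) <= 8
assert hamming(average_hash(img), average_hash(other)) > 8
```

A re-uploaded, lightly edited photo keeps nearly the same hash, so a lookup can flag it; a one-of-a-kind synthetic face has no stored counterpart to compare against, which is why the article expects offenders to shift toward deepfake images.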

dez21

Last March, the Chinese government’s facial recognition service was hacked, and more than $76 million was stolen through fake tax invoices. The hackers manipulated personal data and high-definition photos purchased on the black market and hijacked the camera of a mobile phone to fool the facial authentication step. The fraudsters fed deepfake videos into the system to complete the certification. Xinhua Daily Telegraph’s investigation found the cost of hacking facial authentication systems for illegal gain is very low. Image manipulation apps like Huo Zhaopian, Fangsong Huanlian and Ni Wo Dang Nian are available for download on the app store. Apps like Zao use AI to replace faces in film or TV clips with images of anyone the user uploads. “This application places the tools of creating deep-fake videos in the smartphones and mobile devices of millions of users,” claims Zao. https://analyticsindiamag.com/how-to-fool-facial-recognition-systems/


dez21
FRAUD???

We have also seen how a deepfake managed to damage the reputation of a soft-drink company in Spain. The company did not notice until the marketing director logged on to Twitter and saw there were many mentions of the brand.

Those mentions were very negative and referred to a video. That video was none other than Fernando, the company’s own CEO, speaking badly of the product he sells and criticizing the customers who consume it.

The firm’s marketing director told INCIBE (Spain’s National Cybersecurity Institute) that the person in the video had the face, the voice and even the gestures of the company’s CEO. At first they had no explanation, until cybersecurity experts provided one: a cybercriminal had created a deepfake of Fernando, using his face and voice to damage the company’s reputation. “A deepfake is a technique that superimposes one person’s face onto another’s in a video, adding their voice and gestures so that they appear to be those of the impersonated person,” INCIBE notes.

https://www.genbeta.com/actualidad/historia-como-deepfake-consiguio-danar-reputacion-empresa-refrescos-espana-asi-nuevos-ciberdelitos


nov21
COPYRIGHT

A heart-warming commercial starring one of the most beloved actors of Turkish cinema appeared on Turkish television at the beginning of 2021. Thanks to deepfake technology, Kemal Sunal was able to star in a recent Ziraat Bank commercial more than 20 years after his passing. While this put a smile on the faces of many spectators, it has also given rise to a series of copyright questions.

There is global debate over whether copyright should subsist in works generated by artificial intelligence (“AI”) systems or whether a sui generis right should be granted to AI-generated works. Another heated topic of discussion is whether AI should be granted legal personality, which would enable the AI to be considered the author of the work. In this article, we will focus on the more specific question of who holds the copyright on deepfakes generated by AI with human intervention under the current laws. https://www.lexology.com/library/detail.aspx?g=ef6e81a2-422b-44ee-b640-d536b8080044


out21
A bank in Dubai, in the United Arab Emirates, was robbed by criminals who used AI voice cloning to divert US$35 million. Court documents uncovered by Forbes revealed that the fraudsters reproduced the voice of an executive at a large company to trick the manager of the financial institution into transferring the amount to them. According to the magazine, the supposed executive said his company was about to make an acquisition and needed money for it. The Dubai bank manager recognized the executive’s voice, having worked with him before. Believing everything was legitimate, he authorized the transfer, straight into the criminals’ accounts. https://olhardigital.com.br/2021/10/20/ciencia-e-espaco/em-dubai-criminosos-usam-clonagem-de-voz-por-ia-para-roubar-banco/ + https://interestingengineering.com/fraudsters-pulled-off-a-35-million-bank-heist-by-cloning-the-directors-voice  See also "Deepfakes: sua “voz” vai roubar você" at https://neofeed.com.br/blog/home/deepfakes-sua-voz-vai-roubar-voce/

In early 2020, a bank manager in Hong Kong received a call from a man whose voice he recognized—a director at a company with whom he’d spoken before. The director had good news: His company was about to make an acquisition, so he needed the bank to authorize some transfers to the tune of $35 million. A lawyer named Martin Zelner had been hired to coordinate the procedures, and the bank manager could see in his inbox emails from the director and Zelner confirming what money needed to move where. The bank manager, believing everything appeared legitimate, began making the transfers. What he didn’t know was that he’d been duped as part of an elaborate swindle, one in which fraudsters had used “deep voice” technology to clone the director’s speech, according to a court document unearthed by Forbes in which the U.A.E. sought American investigators’ help in tracing $400,000 of stolen funds that went into U.S.-based accounts held by Centennial Bank. The U.A.E., which is investigating the heist as it affected entities within the country, believes it was an elaborate scheme involving at least 17 individuals, which sent the pilfered money to bank accounts across the globe. Little more detail was given in the document, with none of the victims’ names provided. The Dubai Public Prosecution Office, which is leading the investigation, hadn’t responded to requests for comment at the time of publication. Martin Zelner, a U.S.-based lawyer, had also been contacted for comment but had not responded at the time of publication. https://www.forbes.com/sites/thomasbrewster/2021/10/14/huge-bank-fraud-uses-deep-fake-voice-tech-to-steal-millions/?sh=3db350bf7559

========= CLASS 21 =======
set21
Security risks are, I would argue, the single most important near-term danger associated with the rise of artificial intelligence. It is critical that we form an effective coalition between government and the commercial sector to develop appropriate regulations and safeguards before critical vulnerabilities are introduced. https://www.marketwatch.com/story/deepfakes-a-dark-side-of-artificial-intelligence-will-make-fiction-nearly-indistinguishable-from-truth-11632834440

set21
As technology becomes more and more available, fraudsters have started to use it in different spheres of life. On Wednesday, the Times of India reported on a new type of cybercrime in India: sex extortion involving deepfake porn videos not of the victim but of the fraudster, at least at first. Usually, a scammer (using a profile photo of a woman, almost certainly a fake one) sends a friend request to the target on social media. The scammer then exchanges text messages with the victim to build some trust and finally makes a video call. The fraudster uses a computer-generated video of a woman to entice the victim to masturbate, which the scammer records and uses afterward to blackmail the person. At least two men in the western Indian city of Ahmedabad reported such extortion calls to the police. One of them complained that fraudsters demanded about $3,000. The men themselves reportedly didn’t realize the woman on video was a deepfake; that was revealed only after the police investigation. (One of the men said he “did not indulge in any obscenity,” so it’s unclear what he was being blackmailed with.) https://slate.com/technology/2021/09/deepfake-video-scams.html

ago21
Recorded Future, an incident-response firm, noted that threat actors have turned to the dark web to offer customized services and tutorials that incorporate visual and audio deepfake technologies designed to bypass and defeat security measures. Just as ransomware evolved into ransomware-as-a-service (RaaS) models, we’re seeing deepfakes do the same. This intel from Recorded Future demonstrates how attackers are taking it one step further than the deepfake-fueled influence operations that the FBI warned about earlier this year. The new goal is to use synthetic audio and video to actually evade security controls. Furthermore, threat actors are using the dark web, as well as many clearnet sources such as forums and messengers, to share tools and best practices for deepfake techniques and technologies for the purpose of compromising organizations. https://venturebeat.com/2021/08/28/deepfakes-in-cyberattacks-arent-coming-theyre-already-here/


jul21
Sceptics might ask: why should cybercriminals take the trouble of creating elaborate deepfake audio or even video material to attack businesses, when simple phishing has proven effective enough to compromise company networks with ransomware? This is a valid point, but there are four aspects that lead to the inflexion point scenario nonetheless: first, cybercriminals are known to quickly adopt the latest technology. Criminals have used fake e-mails to defraud businesses for decades (business e-mail compromise, BEC) – now they have a way to move from simple e-mails to powerful audio and video media, so they will definitely utilise it to expand their toolset. https://gadget.co.za/tackling-the-new-deepfake-threat-how-to-fight-an-evil-genie/

jul21

Voice biometrics: the new defense against deepfakes

https://www.techradar.com/news/voice-biometrics-the-new-defense-against-deepfakes


jun21
A recent survey published by researchers at Microsoft, Purdue, and Ben-Gurion University, among others, explores the threat of this “offensive AI” on organizations. It identifies different capabilities that adversaries can use to bolster their attacks and ranks each by severity, providing insights on the adversaries. The survey, which looked at both existing research on offensive AI and responses from organizations including IBM, Airbus, and Huawei, identifies three primary motivations for an adversary to use AI: coverage, speed, and success. AI enables attackers to “poison” machine learning models by corrupting their training data, as well as to steal credentials through side-channel analysis. And it can be used to weaponize AI methods for vulnerability detection, penetration testing, and credential leakage detection. https://venturebeat.com/2021/07/02/attackers-use-offensive-ai-to-create-deepfakes-for-phishing-campaigns/


jun21
Health passes and deepfakes: PayPal account details are widely available online, and the list of illegal goods on offer also includes distributed denial of service (DDoS) attack services, digital health passes and deepfakes. “Malicious actors are also talking about ways to obtain COVID passports or vaccination certificates,” Pascu observes, “but this activity is currently flagged as illegal on most marketplaces ‘for ethical purposes,’ as stated on a group we monitor.” Deepfakes are also gaining momentum and popularity on the dark web, with criminal deepfake activity developing into an economic niche. Deepfake services were found offered on a hack forum for $20 per minute of fraudulent video last June. “In addition to posts about deepfakes impersonating celebrities in adult videos, there are forums listing different schemes and tools to create your own for use in identity verification, and there is some interest in methods to make money with deepfakes,” comments Pascu.
https://www.biometricupdate.com/202106/biometric-selfies-and-forged-passports-identities-for-sale-on-the-dark-web

jun21
"Once you've broken the process, you can quickly generate a large number of accounts," he said, adding that this would make money laundering easier, and help criminals carry out fraud on online platforms.
The threat goes beyond deepfakes. Malicious uses of artificial intelligence can range from AI-powered malware, AI-powered fake social media accounts farming, distributed denial-of-service attacks aided by AI, deep generative models to create fake data and AI-supported password cracking, according to a report by the EU's cybersecurity agency published in December.
They also found cheap software offerings that can mislead platforms like streaming services and social media networks in order to create smart bot accounts. In France, a group of independent music labels, collecting societies and producers are complaining to the government about “fake streams,” whereby tracks are shown to be played by bots, or real people hired to artificially boost views, benefiting the artist whose tracks are played.
https://www.politico.eu/article/artificial-intelligence-criminals/ 

mai21

Deepfakes could be the next big security threat to businesses

https://www.techradar.com/news/deepfakes-could-be-the-next-big-security-threat-to-businesses


mai21
CYBERCRIME:
https://www.thomsonreuters.com/en-us/posts/investigation-fraud-and-risk/acams-deepfake-cyber-crimes/

Mai21
FRAUD
Can the Zoom cat filter open up a portal for deepfakes? The potential identity fraud and security risks that video calls pose. https://www.techradar.com/news/can-the-zoom-cat-filter-open-up-a-portal-for-deepfakes

Mai21
INSURANCE
Insurance coverage for deepfakes is in its nascent stages. According to a 2020 report prepared by Marsh, deepfakes “are outpacing the law,” and insurers are assessing the potential risks to businesses. https://www.propertycasualty360.com/2021/05/12/deepfakes-and-insurance-coverage-414-202184/?slreturn=20210412094328 

abr21

Threat intelligence company Recorded Future pointed to a recent surge in such activities and a burgeoning underground marketplace that could spell trouble for individuals and companies that use tools like facial identification technology as part of multi-factor authentication. The report mirrors similar conclusions from an FBI alert last month warning that nation-backed hackers would themselves begin using deepfakes more frequently for cyber operations as well as misinformation and disinformation. https://www.cyberscoop.com/deepfakes-doctored-video-audio-future/


ab21
BLACKMAIL
In Canada, sextortionists are preying on internet users with the help of “deepfakes”
https://www.courrierinternational.com/article/le-mot-du-jour-au-canada-des-sextorqueurs-sevissent-sur-internet-laide-de-deepfakes


ab21
Satellite image falsification
A growing problem of 'deepfake geography': How AI falsifies satellite images
https://www.eurekalert.org/pub_releases/2021-04/uow-agp042121.php
https://www.tandfonline.com/doi/full/10.1080/15230406.2021.1910075 

Ab21
FRAUD
As with any new fraud trend, organizations don’t want to wait until they’ve been breached to react. As appetite grows and the technical barrier falls, it’s critical that organizations consider whether their identity verification solutions are up to the task of identifying deepfakes.
https://www.techradar.com/news/how-concerned-should-you-be-about-deepfake-fraud

In a video interview with Information Security Media Group, Gupta discusses:

  • The challenge of deepfake video and audio detection;
  • How fraudsters are using deepfakes for banking and payments fraud;
  • The need for financial services industry collaboration in mitigating digital onboarding fraud.
https://www.bankinfosecurity.com/countering-deepfake-fraud-in-digital-onboarding-a-16337

ABR21
HEALTH

There haven't been documented cases of malicious use of deepfakes in healthcare to date, and many of the most popular deepfakes—such as the viral Tom Cruise videos—took weeks of work to create and still have glitches that tip off a close watcher. But the technology is steadily getting more advanced.

Researchers have increasingly been watching this space to try to "anticipate the worst implications" for the technology, said Rema Padman, Trustees professor of management science and healthcare informatics at Carnegie Mellon University's Heinz College of Information Systems and Public Policy in Pittsburgh. That way, the industry can get ahead of it by raising awareness and figuring out methods to detect such altered content. "We are starting to think about all of these issues that might come up," Padman said. "It could really become a serious concern and offer new opportunities for research." Industry experts suggested five possible ways deepfakes could infiltrate healthcare.

https://www.modernhealthcare.com/cybersecurity/5-ways-deepfakes-could-infiltrate-healthcare


mar21
A 50-year-old man used deepfakes to pass himself off as a female influencer in Japan. https://olhardigital.com.br/2021/03/22/internet-e-redes-sociais/homem-se-passa-por-jovem-usando-deep-fakes/

mar21

(were these really deepfakes?) A woman from Chalfont, in the US state of Pennsylvania, was arrested on charges of creating “deepfakes” to harm her daughter’s rivals on the school cheerleading team. Raffaela Spone, 50, allegedly created fake images of the young women nude, smoking or drinking and sent them to the team’s coaches in an attempt to get them expelled. She also sent the images to the victims themselves, encouraging them to kill themselves. https://olhardigital.com.br/2021/03/15/seguranca/mulher-e-presa-apos-criar-deepfakes-para-prejudicar-rivais-da-filha/

Pennsylvania cops who charged a woman with creating incriminating 'deepfake' photos and videos of her teenage daughter's cheerleading rivals have now admitted that they aren't sure if the images were actually manipulated. https://www.dailymail.co.uk/news/article-9592117/Cops-accused-woman-creating-deepfake-images-never-evidence.html

Raffaela was eventually arrested on six counts of misdemeanor harassment and cyber harassment of a child. But that’s not what spun the case into an international scandal. In the criminal complaint, Hilltown Township police officer Matthew Reiss declared that the video of Madi vaping had the hallmarks of a “deepfake,” “where a still image can be mapped onto an existing video and alter the appearance of the person in the video to show the likeness of the victim’s image instead.” In his telling, he’d arrested a middle-aged suburban mom for wielding the power of advanced AI technology against her daughter’s competition, creating a fake video so uncannily convincing that it could have gotten Madi kicked off the team. (...) According to him, there are only a handful of people in the U.S. capable of properly vetting a deepfake, “using specific computational forensic techniques, going through it frame by frame to comb for clues to be able to say with authority if it is real or not.” When the Daily Dot eventually asked Reiss, the police officer, whether he had run a metadata analysis on the video, he admitted that he’d simply made a “naked eye” judgment call. https://www.cosmopolitan.com/lifestyle/a37377027/deep-fake-cheer-scandal/

mar21
First, the study applies the situational crime prevention approach in the context of moderating online platforms. Second, results from the study shed light on current practices in online content moderation from the perspective of criminological theory, as well as inform specific actions that can be taken to decrease the presence of community-harming phenomena and improve the enforcement of sitewide policy rules in general. Finally, by adapting the original 25 techniques of situational crime prevention to online content moderation, the study suggests a tentative roadmap for similar research in the future. https://www.emerald.com/insight/content/doi/10.1108/S2050-206020210000020008/full/html


mar21
According to a report published last year by University College London (UCL), deepfakes rank as the most serious AI crime threat.
“As the capabilities of AI-based technologies expand, so too has their potential for criminal exploitation. To adequately prepare for possible AI threats, we need to identify what these threats might be, and how they may impact our lives,” author Lewis Griffin stated in the report.
https://www.arabnews.com/node/1820691/media 

fev21
While digital ID verification is gaining global momentum, it is also attracting new threats in the form of advanced deepfake software found on the dark web, writes cyber-intelligence firm Gemini Advisory.
https://www.biometricupdate.com/202102/digital-id-verification-increasingly-targeted-by-ai-deepfakes-advisory-warns 

Jan21
Cybercrime

The use of deep fake video and audio technologies could become a major cyber threat to businesses within the next two years, cyber analytics specialist CyberCube has predicted. In its new report, Social Engineering: Blurring reality and fake, CyberCube says the ability to create realistic audio and video fakes using AI and machine learning has grown steadily. https://www.tahawultech.com/news/deep-fake-losses-could-be-major-cybercube-warns/ 


dez20
copyright

Currently, the law does not stop the work of performers and others from being copied by new AI technology that can create imitations of people. Legal reforms are needed to protect people from having their image copied by “deepfake” or AI technology, an expert has warned LINK


dez20
Cybersecurity experts at Avast, foresee more Covid-19 vaccination scams, abuse of weak home office infrastructures, enterprise VPN infrastructure and providers, and ransomware attacks in 2021.
In a press release here, Avast said it also expects deepfake disinformation campaigns and other malicious AI-generated campaigns to gain more traction. Specifically, on the Android platform, Avast experts predict further adware attacks, fleeceware scams, and stalkerware usage.
https://newstodaynet.com/index.php/2020/12/11/2021-to-see-increased-cyber-scams-avast/


Nov20
FRAUD INVOLVING POLITICS!

One of the big ones is imposters. There are a lot of people who go in and impersonate a political candidate; we've caught a few with Bernie Sanders, for example, and a couple with President Trump. Those imposters go around and try to spread a slightly different message, or they try to collect donations from people and then obviously disappear with the donations. Similar to that, there are also scams where they're not necessarily trying to impersonate the political candidate, but they are trying to get money or personally identifiable information so they could perhaps later try to break into a person's accounts. link


MAKING MONEY
jul20

The tracking service highlighted a few of the more successful operations, including “the Giveaway,” which features a celebrity, such as Elon Musk, and can net around $300,000. “The change in method and the increase in quality and scale suggests that entire professional teams are now behind some of the most successful ones and it is just a matter of time before they start using deepfakes, a technique that will surely revolutionize the scam market,” Whale Alert claimed. https://nypost.com/2020/07/13/bitcoin-scammers-have-reportedly-swiped-24m-this-year/


nov20
New Report Finds that Criminals Leverage AI for Malicious Use - And It's Not Just Deep Fakes
For example, AI could be used to support:
Convincing social engineering attacks at scale
Document-scraping malware to make attacks more efficient
Evasion of image recognition and voice biometrics
Ransomware attacks, through intelligent targeting and evasion
Data pollution, by identifying blind spots in detection rules
https://news.yahoo.com/report-finds-criminals-leverage-ai-140000319.html 

nov20
Deepfakes have the potential to disrupt financial markets, not just fake your bank ID, experts say. Experts have been sounding the alarm about weak biometric data security for years. The problem has looked especially pernicious in China, where facial recognition is now a ubiquitous form of identification. Now the rise of deepfakes could be creating new problems, experts say. Despite widespread concern about the technology, which allows a person’s likeness to be imitated through audio and video, there are few real-world examples of hackers successfully exploiting it for monetary gain. https://www.thestar.com.my/tech/tech-news/2020/11/17/deepfakes-have-the-potential-to-disrupt-financial-markets-not-just-fake-your-bank-id-experts-say

nov20
Deepfakes have the potential to disrupt financial markets, not just fake your bank ID, experts say. Experts say deepfakes have the potential to disrupt financial markets and aid fraud, but there remain few real-world examples so far. People in China are growing increasingly concerned about deepfakes and biometric data leaks as the use of facial recognition grows.
https://www.scmp.com/tech/innovation/article/3109565/deepfakes-have-potential-disrupt-financial-markets-not-just-fake


nov20
(threats to businesses)

But deepfakes don’t just threaten the security of elections. A survey by Tessian also revealed that 74% of IT leaders think deepfakes are a threat to their organization's security. So how could deepfakes compromise your company’s security? “Hacking humans” is a tried and tested method of attack used by cybercriminals to breach companies’ security, access valuable information and systems, and steal large sums of money. And hackers are getting better at it by using more advanced and sophisticated techniques. Social engineering scams and targeted spear phishing attacks, for example, are fast becoming a persistent threat for businesses. Hackers are successfully impersonating senior executives, third-party suppliers, or other trusted authorities in emails, building rapport over time and deceiving their victims. In fact, last year alone, scammers made nearly $1.8 billion through Business Email Compromise attacks. These types of spear phishing attacks are much more effective, and have a much higher ROI, than the “spray and pray” phishing campaigns criminals previously relied on. https://www.securitymagazine.com/articles/93476-deepfakes-could-compromise-your-companys-security


nov 20
Remote work is putting companies at greater risk of deepfake phishing attacks, executives at Technologent warned during a cybersecurity webinar last week. In a deepfake attack, criminals use synthetic audio to mimic the tone, inflection and idiosyncrasies of an executive or other employee. Then, they ask for a money transfer or access to sensitive data. https://builtin.com/cybersecurity/deepfake-phishing-attacks


NOV20
avatar rights? Not only are fans taking an interest in the new SM Entertainment girl group aespa, but legal circles are also interested in the members' avatars. The reason is that many legal experts want to see how far the law will be able to protect the avatars from digital sexual crimes (deepfakes) once the group promotes together. On October 28th, SM Entertainment introduced a new concept with the girl group. Aespa is an international group with Korean members Winter and Karina, Japanese member Giselle, and Chinese member Ningning. The group's main concept is the collaboration between the real world and the virtual world. The group's name aespa holds the meaning 'Avatar and experience', including the English word 'Aspect.' https://www.allkpop.com/article/2020/11/will-the-avatars-of-sms-new-girl-group-aespa-be-legally-protected-from-deepfake-pornography-crimes 

Nov20

SCAMS Elderly women are being fooled into thinking celebrities are in love with them in a sophisticated scam on the Chinese version of TikTok. One 61-year-old was ecstatic when an account purporting to belong to TV star Jin Dong followed her on popular video sharing app Douyin. https://www.dailystar.co.uk/news/world-news/elderly-women-chinese-tiktok-scammed-22971322

================================== class TCM20/21

out20
A California widow was scammed out of at least $287,000 by an unidentified overseas con man who romanced her online using “deepfake” video to pose as the superintendent of the U.S. Naval Academy, according to federal prosecutors. https://www.thedailybeast.com/romance-scammer-used-deepfakes-to-impersonate-a-navy-admiral-and-bilk-widow-out-of-nearly-dollar300000


set20
Since businesses rely on technology for communication, deepfakes—or synthetic media of false images and/or sound—pose a growing threat to their future strength, growth, security, and bottom line. That's the belief and warning from Global IT Solutions Provider Technologent. "Businesses around the globe have already lost money, reputations, and hard-won brand strength due to deepfakes," said Technologent Global Chief Information Security Officer Jon Mendoza. Leading studies show that deepfakes are the most worrisome application of AI due to the potential for criminal profit or gain, ease of implementation, and difficulty in stopping them. https://finance.yahoo.com/news/deepfakes-could-mean-deep-losses-155500267.html?guccounter=1&guce_referrer=aHR0cHM6Ly93d3cuZ29vZ2xlLmNvbS8&guce_referrer_sig=AQAAAE4xP9kO2tigeGtjBkv14JWX_w8I-KMzGpdUd4vlBy_9tTLrRrWzqYOUUsMWYjYGVfqSByoBV9J4aNOP2ZLxwQglNVW1kVlIgni-TQMHUgMi6DUKm0XSaZg4VM3xUnwFHCA2MrW2fB1wrWBrnfIn7kv0HfB47uL1CdUv5AU35JUg 

set20
Criminals could soon use deepfake technology to disguise themselves in CCTV footage, experts have warned. https://www.dailystar.co.uk/news/latest-news/prosecutors-told-verify-deepfake-evidence-22705261


set20

Banks and fintech groups are entering partnerships with tech firms to tackle the use of fraudulent 'deepfake' content in biometric ID systems. The financial institutions are teaming up with identification startups like Mitek and iProov, according to The Financial Times. https://www.itpro.co.uk/security/cyber-security/357015/banks-and-fintech-firms-using-tech-firms-to-fight-deepfake-fraud 



ago20
Deepfakes Threaten To Become The New BEC Scam. The business email compromise (BEC) scam continues to rear its ugly head at the enterprise, with the global pandemic creating even more avenues through which cyber attackers can steal company money. https://www.pymnts.com/news/b2b-payments/2020/tessian-deepfake-bec-scam/



ago20
Peter Singer, a cybersecurity strategist, believes the boom of low-quality, self-filmed footage will make deepfake technology harder to stop. Deepfakes soar during coronavirus crisis as criminals 'easily create Zoom footage' https://www.dailystar.co.uk/news/latest-news/deepfakes-soar-during-coronavirus-crisis-22500536


ago20
These are just a few ways that deepfakes and other synthetic media can enable financial harm. My research highlights ten scenarios in total—one based in fact, plus nine hypotheticals. Remarkably, at least two of the hypotheticals already came true in the few months since I first imagined them. A Pennsylvania attorney was scammed by imposters who reportedly cloned his own son’s voice, and women in India were blackmailed with synthetic nude photos. The threats may still be small, but they are rapidly evolving. https://www.techdirt.com/articles/20200806/13292345065/get-ready-deepfakes-to-be-used-financial-scams.shtml

ago20
Fake audio or video content has been ranked by experts as the most worrying use of artificial intelligence in terms of its potential applications for crime or terrorism, according to a new UCL report. The study, published in Crime Science and funded by the Dawes Centre for Future Crime at UCL (and available as a policy briefing), identified 20 ways AI could be used to facilitate crime over the next 15 years. These were ranked in order of concern—based on the harm they could cause, the potential for criminal profit or gain, how easy they would be to carry out and how difficult they would be to stop. https://phys.org/news/2020-08-deepfakes-ai-crime-threat.html + https://www.unite.ai/ai-experts-rank-deepfakes-and-19-other-ai-based-crimes-by-danger-level/ + Biometrics may be the best way to protect society against the threat of deepfakes, but new solutions are being proposed by the Content Authority Initiative and the AI Foundation. Deepfakes are the most serious criminal threat posed by artificial intelligence, according to a new report funded by the Dawes Centre for Future Crime at the University College London (UCL), among a list of the top 20 worries for criminal facilitation in the next 15 years. The study is published in the journal Crime Science, and ranks the 20 AI-enabled crimes based on the harm they could cause. https://www.biometricupdate.com/202008/deepfakes-declared-top-ai-threat-biometrics-and-content-attribution-scheme-proposed-to-detect-them
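The UCL study's method (scoring each crime on harm, criminal profit, achievability, and difficulty of defeat, then ranking) can be illustrated with a small multi-criteria ranking sketch. The scores, weights, and crime names below are illustrative placeholders, not values from the paper:

```python
# Hypothetical scores (1 = low, 5 = high) on the study's four criteria:
# harm, profit, achievability, difficulty to stop. Illustrative only.
threats = {
    "audio/video deepfakes":  (5, 5, 4, 5),
    "driverless car attacks": (5, 2, 2, 3),
    "tailored phishing":      (3, 4, 5, 3),
    "data poisoning":         (3, 2, 3, 4),
}

def rank(threats, weights=(1, 1, 1, 1)):
    """Order threats by a weighted sum over the four criteria."""
    def score(name):
        return sum(w * v for w, v in zip(weights, threats[name]))
    return sorted(threats, key=score, reverse=True)

print(rank(threats))  # with these placeholder scores, deepfakes rank first
```

Changing the weights (for example, weighting harm more heavily than profit) reorders the list, which is why studies like this publish the criteria alongside the ranking.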


jul20

One of the stranger applications of deepfakes — AI technology used to manipulate audiovisual content — is the audio deepfake scam. Hackers use machine learning to clone someone’s voice and then combine that voice clone with social engineering techniques to convince people to move money where it shouldn’t be. Such scams have been successful in the past, but how good are the voice clones being used in these attacks? We’ve never actually heard the audio from a deepfake scam — until now. Security consulting firm NISOS has released a report analyzing one such attempted fraud, and shared the audio with Motherboard. The clip below is part of a voicemail sent to an employee at an unnamed tech firm, in which a voice that sounds like the company’s CEO asks the employee for “immediate assistance to finalize an urgent business deal.” https://www.theverge.com/2020/7/27/21339898/deepfake-audio-voice-clone-scam-attempt-nisos

jul20

Last week, Reuters news agency published a story on Oliver Taylor (pictured above), a British student at the University of Birmingham who had written half a dozen op-eds on Jewish affairs, including for The Jewish Times and the Times of Israel. However, the Reuters investigation was not focused on Taylor’s writing, but on the fact that he is not a real person. The persona was found to be entirely fabricated, with the key indicator being the above ‘profile picture’ provided for Taylor. This image is entirely synthetically generated and the man depicted does not exist.

Other cases: https://deeptracelabs.com/deepfake-detection-api-the-automated-solution-for-identifying-fake-faces/?utm_medium=email&_hsmi=91823245&_hsenc=p2ANqtz-86rLE8EDn22K7coOizdH3WpeH7l5zl8B6P3iAgsfCzBrMcFR19OkpsfZPGA7Dp4aGNicJEQepPoTI1D80gQmft2o89BhUNPFsgAz72WAOPbJC70xc&utm_content=91823245&utm_source=hs_email

jul20

Video manipulation software, including ‘deepfake’ technology, poses problems for remote courts in verifying evidence and confirming that litigants or witnesses are who they say they are, a report has warned. The report on ‘virtual justice’ by New York-based privacy group Surveillance Technology Oversight Project (STOP) noted that parties to online court proceedings may be asked to verify their identity by providing sensitive personal information, biometric data, or facial scans – in the state of Oregon, judges sign into their virtual court systems using facial recognition. It said: “Distrust around digital records has persisted with the advent and ease of photoshopping. Altered evidence can still be introduced if the authenticating party is itself fooled or is lying.” https://www.legalfutures.co.uk/latest-news/deepfake-warning-over-online-courts



jul20
Deepfake used to attack activist couple shows new disinformation frontier https://www.reuters.com/article/us-cyber-deepfake-activist/deepfake-used-to-attack-activist-couple-shows-new-disinformation-frontier-idUSKCN24G15E




jul20
Bad actors could use deepfakes—synthetic video or audio—to commit a range of financial crimes. Here are ten feasible scenarios and what the financial sector should do to protect itself https://carnegieendowment.org/2020/07/08/deepfakes-and-synthetic-media-in-financial-system-assessing-threat-scenarios-pub-82237

jul20
In businesses:
To insulate your business and companies you advise from the fallout of deepfakes, the most important step is to include a deepfake scenario as part of your crisis communication plan. https://www.prnewsonline.com/deepfakes-preparation-crisis

jun20
In a child custody dispute in Britain that drew attention in the US, the woman used a cheapfake to doctor an audio recording that was to serve as evidence that her ex-husband had threatened her. But the husband's lawyer hired an expert to analyze the audio, and a simple study of the recording's metadata was enough to expose the tampering. https://www.conjur.com.br/2020-jun-12/deepfakes-embaralham-justica-eua
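The custody case above was exposed by a simple study of the recording's metadata. As an illustration of the general idea (not the examiner's actual method), here is a minimal Python sketch that flags one crude sign of naive tampering: a WAV file whose declared RIFF and data chunk sizes no longer match its actual payload.

```python
import struct

def wav_header_anomalies(path):
    """Return a list of inconsistencies between a WAV file's declared
    chunk sizes and its actual size -- a crude sign of naive editing."""
    with open(path, "rb") as f:
        blob = f.read()
    issues = []
    if blob[:4] != b"RIFF" or blob[8:12] != b"WAVE":
        return ["not a RIFF/WAVE file"]
    declared_riff = struct.unpack("<I", blob[4:8])[0]
    if declared_riff != len(blob) - 8:
        issues.append("RIFF size does not match file size")
    # Walk the chunk list looking for the audio data chunk.
    pos = 12
    while pos + 8 <= len(blob):
        cid = blob[pos:pos + 4]
        csize = struct.unpack("<I", blob[pos + 4:pos + 8])[0]
        if cid == b"data":
            actual = len(blob) - (pos + 8)
            if csize != actual:
                issues.append("data chunk size does not match audio payload")
            break
        pos += 8 + csize + (csize % 2)  # chunks are word-aligned
    return issues
```

Real forensic examination goes far beyond this (codec traces, splice artifacts, electrical network frequency analysis), but even a header consistency check can catch careless edits made with low-end tools.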

jun20
Courts and lawyers struggle with growing prevalence of deepfakes. In an article for the Washington State Bar Association, Pfefferkorn recommends attorneys prepare for deepfake evidence by budgeting for digital forensic experts and witnesses. She says lawyers should know enough about deepfake technology to be able to spot outward signs that the evidence has been tampered with. There are ethical concerns if a client is pushing an attorney to use suspect evidence, she adds.

“You might still need to do your homework and do your due diligence with respect to the evidence that your client brings to you before you bring it to court,” Pfefferkorn says. “If it’s a smoking gun that seems too good to be true, maybe it is.” https://www.abajournal.com/web/article/courts-and-lawyers-struggle-with-growing-prevalence-of-deepfakes



Identity theft
Jun20
Deepfakes and Identity Theft. https://news.clearancejobs.com/2020/06/08/security-clearance-news-update-deepfakes-and-identity-theft/




CYBERCRIME:
maio20
Scammers have started using a kind of “deepfake” to steal cryptocurrency. In a video denounced on YouTube, the virtual tricksters pose as Justin Sun, founder of the cryptocurrency Tron, one of the best known on the market, to deceive victims. https://livecoins.com.br/deepfake-golpe-criptomoedas/

abr20 Police projected cybercrimes to become more meticulous this year, citing the dark web and deepfakes as potential risks. https://en.yna.co.kr/view/AEN20200413005100315

abr20
Nth Room scandal: Here is how Jo Joo Bin victimized popular female idols with deepfake pornography. A recent probe of the Telegram Nth Room has revealed that not only minors and adult women, but even female idols were victimized by Jo Joo Bin. Here is how he used idols in deepfake pornography (LINK)

Mar20 “In the US, a fraudulent audio recording was used in court in a child custody dispute. In the audio, the father made threats against the mother. She had manipulated the audio using a cheap app. This showed that you don't need to be a computer nerd to manipulate audio.” https://www.conjur.com.br/2020-mar-05/justica-aprender-lidar-provas-deepfakes
 “We will see a huge rise in machine-learned cybercrimes in the near future. We have already seen Deepfake videos imitating celebrities and public figures, but to create convincing materials, cyber-criminals use footage that is already available in the public domain. As computing power increases, we are starting to see this become even easier to create, which paints a scary picture ahead.” (LINK) (LINK)
(INSURANCE) The proliferation of the technology can undermine the trust people have in established government and public institutions such as law enforcement and regulators. It can also undermine the trust people have in each other. (LINK)

Mar20 In a recent Marsh report, named ‘Digital Deception: Is Your Business Ready for Deepfakes?’ the brokerage giant explained: “Deepfakes can have a severe impact on a company’s reputation. A deepfake posted on social media could easily go viral and spread worldwide within minutes. Though a company might ultimately prove it was the victim of a deepfake, the damage to its reputation will already have been done, potentially resulting in lost revenues.” https://www.insurancebusinessmag.com/us/news/cyber/cyber-risk-what-are-deepfakes-217668.aspx


06/11/2019 For public companies and their officers and directors, however, the nefarious use of “deepfakes” presents potentially greater legal and financial challenges because of possible illegal attempts at market manipulation (LINK)
6/12/19 Main security threat
6/12/19 Now, cyber criminals can put words into your mouth, using AI, ML tools LINK
6/12/19: Deepfakes are expected to have a major impact on every aspect of our lives in 2020 as realism levels increase: being aimed at ransomware targets and adding an extra degree of realism to requests for financial transfers; being used as a tool in elections to discredit candidates and deliver inaccurate political messages to voters through social media; and the emergence of Deepfakes-as-a-Service as deepfakes become widely adopted for purposes ranging from fun to fraud LINK
IMPORTANT McAfee REPORT: Threat actors and nation-states worldwide, leveraging advances in artificial intelligence (AI) and machine learning, will continue using deepfake video in 2020 for disinformation campaigns and to bypass facial recognition systems, according to cybersecurity firm McAfee. “In the document, the company stresses that the ability to create fake content is not new. What has changed is that it is now possible to produce very convincing fake videos without being a technology expert. In addition to free audiovisual tools that give anyone the means to manipulate images realistically, there are companies that sell the service online” (LINK) The report here: https://securingtomorrow.mcafee.com/blogs/other-blogs/mcafee-labs/mcafee-labs-2020-threats-predictions-report/

A sophisticated form of phishing? JAN20 Deepfake fraud is a new, potentially devastating issue for businesses https://ctovision.com/phishing-today-deepfakes-tomorrow-training-employees-to-spot-this-emerging-threat/

FEV20 Seeing is not believing: defence in the age of deepfake technology. Phishing emails have become almost routine and, therefore, easier to spot. It’s time for a new chapter in crime: deepfakes. How many of us would say no to our boss over the phone or videoconference? LINK
FEV20 Deepfakes to 'destroy stock market' by fooling buyers into 'sham investments' LINK
Fev20 Deepfakes are a real threat for payment security LINK


Fear of crime: The Chinese app ZAO, which creates highly realistic deepfakes of the user in a variety of situations, has updated its terms of use and its commitment to users. This came in response to concerns raised about potential privacy violations for anyone using it. Canaltech covered the app yesterday (the 2nd), noting how it exploded in popularity in app stores over the past weekend. https://canaltech.com.br/apps/app-que-cria-deepfakes-atualiza-termos-de-uso-por-questoes-de-privacidade-148615/




For BUSINESSES
Dez19 How Deep Fakes Can Hurt Your Business And What To Do About It (LINK)

29/11 Deepfake Videos: A Growing Cyber Threat for Business https://www.brinknews.com/how-deepfake-videos-could-threaten-your-business/

6/12: Companies should brace themselves for two emerging threats related to artificial intelligence: text-based deepfakes and AI model hacking, cybersecurity experts said. LINK

10/11/2019 Some market players have already voiced concern about how anonymous users, with the sole incentive of causing harm, spread 'deepfakes' to destroy the reputation of a particular company (LINK)

abr20
The idea behind deepfake phishing is that you can impersonate someone in a video call or a voice call. So, for example, you could receive a Skype call or a WhatsApp voice note from your “boss” telling you to pay someone an amount of money. Only later does it turn out your boss never sent the message. When deepfakes only worked on faces and needed to be pre-rendered, they weren’t very practical. Now we’re seeing the emergence of deepfake technology that works in real time, alongside the faking of voices. It’s also no longer necessary to have thousands of source images for the machine learning software to generate the deepfake. These technology enhancements all come together to make possible a sort of digital impersonation, and it’s rather scary once you think about it. https://www.technadu.com/what-is-deepfake-phishing-and-how-to-protect-yourself/95620/
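A low-tech defence against this kind of real-time impersonation is out-of-band verification: before acting on a voice or video request, the recipient asks the caller for a one-time code derived from a secret agreed in advance. A minimal sketch using a standard RFC 6238 time-based one-time password (the secret and parameters here are illustrative):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, t=None, step=30, digits=6):
    """RFC 6238 time-based one-time password over HMAC-SHA1."""
    if t is None:
        t = time.time()
    counter = struct.pack(">Q", int(t) // step)  # 8-byte big-endian time step
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                   # dynamic truncation (RFC 4226)
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Both sides compute the code independently from the shared secret:
print(totp(b"shared-secret-agreed-in-person"))
```

Because the code depends on a secret the parties exchanged in advance, a deepfaked caller who does not hold it cannot produce the expected value, no matter how convincing the voice or video is.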

abr20 there are a few methods to protect against deepfakes today: https://techbeacon.com/security/evolution-deepfakes-fighting-next-big-threat
