Update June 7, 13:40 UTC: This article has been updated to add comments received from Elliptic.
The rise of artificial intelligence (AI)-driven crypto crimes marks a new era of cyber threats, with an Elliptic report exposing how advanced technologies are being exploited for deepfake scams, state-sponsored attacks and other sophisticated illicit activities.
The report cites a troubling advertisement for an “unethical” generative pre-trained transformer (GPT) on the dark web, which claims, “AI has two faces, just like humans.”
This duality can be further seen in the WormGPT advertisement highlighted in the report:
“Embrace the dark symphony of code, where rules cease to exist, and the only limit is your imagination. Together, we navigate the shadows of cyberspace, ready to conquer new frontiers. What’s your next move?”
Dr. Arda Akartuna, senior crypto threat researcher at Elliptic, told Cointelegraph:
“[…] these trends are currently in their relative infancy and avenues for prevention do exist. Stakeholders across industries will need to come together to devise best practices early on, so that these trends do not become mainstream.”
Related: Over 35 fake Elon Musks live-streamed during SpaceX launch
Deepfake deep-dive
Elliptic revealed that deepfake videos featuring Elon Musk and former Singaporean Prime Minister Lee Hsien Loong are being used to promote fraudulent investment schemes.
“Doctored videos – or ‘deepfakes’ – of notable individuals promoting investment scams have targeted the likenesses of Elon Musk, former Singaporean Prime Minister Lee Hsien Loong and both the 7th and 8th Presidents of Taiwan Tsai Ing-wen and Lai Ching-te.”
The report also highlighted the increasing prevalence of deepfake-driven deception tactics that scammers employ in social media communities to dupe unsuspecting victims into handing over their funds.
“Crypto giveaway and doubling scams are increasingly using deepfake videos of crypto CEOs and celebrities to encourage victims to send funds to scam crypto addresses.”
Related: US Treasury: AI brings huge opportunities, risks to financial stability
United States warns of North Korean criminality
According to the report, Anne Neuberger, the U.S. Deputy National Security Advisor for Cyber and Emerging Technologies, also addressed the growing concerns about AI criminality.
Neuberger warned that AI is also being misused for purposes other than everyday scams:
“Some North Korean and other nation-state and criminal actors have been observed trying to use AI models to accelerate the creation of malicious software and identifying vulnerable systems.”
Related: Meta faces backlash in EU for AI data usage without user consent
AI crime doesn’t pay
The report elaborates on the extent of AI’s misuse, which can be observed on dark web forums.
“Throughout numerous dark web cybercrime forums, Elliptic has identified chatter that explores the use of LLMs to reverse-engineer crypto wallet seed phrases, bypassing authentication for services such as OnlyFans, and providing alternatives to image ‘undressing’ manipulation services such as DeepNude.”
As activity on the dark web increases, so does the risk of getting caught, as evidenced by the recent arrest of a dark web market owner in New York on May 18.
The 23-year-old man was charged with owning, running and profiting from a $100 million dark web narcotics marketplace after the FBI tracked his crypto transfers.
Magazine: Longevity expert: AI will help us become ‘biologically immortal’ from 2030