Everyone is talking about generative AI, and that includes cybercriminals in the crypto world. A report by the blockchain analytics firm Elliptic shows how quickly fraudsters are exploiting the new technology. Two aspects are particularly alarming.
The old adage that criminals are among the most enthusiastic early adopters of new technologies seems to be proving true once again. According to a report by blockchain analytics firm Elliptic, artificial intelligence (AI) is increasingly being used for crypto-related crimes. The focus, unsurprisingly, is on various types of fraud.
For example, fraudsters use generative AI to create deepfakes of prominent figures such as Elon Musk, former Singapore Prime Minister Lee Hsien Loong, former Taiwanese President Tsai Ing-wen, and her successor Lai Ching-te. These deepfakes then lure unsuspecting victims into fraudulent projects on platforms like YouTube or TikTok, much like the long-running "Shark Tank" scam, in which an established, trusted authority is exploited for a crypto scam. With AI, however, the ruse becomes even more cunning and sophisticated.
AI is also often used as a buzzword to promote tokens or investment programs to the masses. One example is the iEarn trading bot scam of 2023, which promised investors handsome returns on the crypto markets thanks to the new wonder technology. It ended in losses of several million dollars and a warning from the Commodity Futures Trading Commission (CFTC). According to a chart in the report, thousands of tokens advertised with buzzwords like "GPT," "OpenAI," or "Bard" are circulating on blockchains such as BNB, Solana, and Ethereum.
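To make that pattern concrete, here is a minimal, purely hypothetical sketch of how an analyst might flag tokens that ride the AI hype by name alone. The buzzwords are the ones cited in the report; the token names and the matching logic are invented for illustration.

```python
import re

# Hypothetical illustration: flag tokens whose names ride the AI hype wave.
# The buzzwords ("GPT", "OpenAI", "Bard") are those cited in the Elliptic
# report; the token names below are invented for this example.
AI_BUZZWORDS = re.compile(r"gpt|openai|bard", re.IGNORECASE)

token_names = [
    "SafeMoonGPT",    # invented
    "OpenAI Doge",    # invented
    "BardInu",        # invented
    "PlainOldToken",  # invented
]

flagged = [name for name in token_names if AI_BUZZWORDS.search(name)]
print(flagged)  # -> ['SafeMoonGPT', 'OpenAI Doge', 'BardInu']
```

A name match like this is of course only a first filter: a buzzword in a token's name is a hype signal, not proof of fraud.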
Large language models (LLMs) can also be used to identify vulnerabilities in open-source code. Both Microsoft and OpenAI report that more and more cybercriminals and hackers are using LLMs, and paid tools aimed at hackers, such as HackedGPT or WormGPT, already exist.
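The dual-use nature of this capability is easy to demonstrate. The sketch below shows how the same kind of automated code review the report attributes to attackers can be run defensively on one's own code. It assumes the official OpenAI Python SDK with an API key in the environment; the model name and prompt are illustrative choices, not anything prescribed by the report.

```python
# Minimal sketch of the dual-use technique described above: asking a
# general-purpose LLM to review code for vulnerabilities, here framed
# defensively. Assumes the official OpenAI Python SDK (openai>=1.0) and an
# OPENAI_API_KEY in the environment; the model name is an illustrative choice.
from openai import OpenAI

client = OpenAI()

# Deliberately vulnerable toy snippet: user input is concatenated into SQL.
snippet = '''
def get_user(conn, user_id):
    return conn.execute("SELECT * FROM users WHERE id = " + user_id)
'''

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption: any chat-capable model would do
    messages=[
        {"role": "system",
         "content": "You are a security reviewer. List concrete "
                    "vulnerabilities in the submitted code."},
        {"role": "user", "content": snippet},
    ],
)
print(response.choices[0].message.content)  # should flag the SQL injection
```

Pointed at someone else's open-source repository instead of one's own, the very same workflow becomes the attacker behavior Microsoft and OpenAI describe.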
According to Elliptic, AI also serves as a kind of turbocharger for disinformation campaigns. Social media posts, as text, images, and possibly video, are generated automatically, sometimes along with the necessary infrastructure such as accounts and fake websites. This disinformation is often part of a scam. There are even "scam-as-a-service" providers that claim to use AI to automatically design websites, including their search engine optimization.
Lastly, AI also supercharges identity theft. The forgery of IDs and other documents, such as driver's licenses, tax returns, or utility bills, is perfected and simplified by AI. Here, too, service providers already generate such documents for a small fee; deepfakes could soon undermine video identification procedures.
The last two points are particularly concerning. They undermine certainty, truth, and identity. When AI is used in disinformation campaigns with fake images, videos, and audio tracks, it becomes impossible to distinguish truth from lies; when AI perfects the forgery of identity documents, it becomes impossible to verify identity online. Both issues go far beyond crypto fraud. They shake the fundamental pillars of our digital lives.
While Elliptic emphasizes that AI has enormous positive potential, the firm warns that time is running out. Pandora's box has so far been opened only partway; a window of opportunity to address the threats posed by the technology remains open. But to seize it, law enforcement agencies, compliance experts, AI developers, and others must work together decisively.