TL;DR
- Criminals are using artificial intelligence (AI) to refine their tactics in the cryptoasset ecosystem.
- Emerging typologies of crimes include the use of deepfakes, fake tokens, cyberattacks, large-scale scams, and identity theft.
- Elliptic has published a report highlighting these risks and offering solutions to mitigate them.
Artificial intelligence (AI) has revolutionized many industries, and the cryptoasset sector is no exception.
However, as with any emerging technology, its advance has been accompanied by criminal exploitation.
Although AI-enhanced crypto crime is not yet a widespread threat, blockchain analytics company Elliptic warns of the need to identify and mitigate these emerging trends to foster sustainable innovation.
One of the most disturbing uses of AI in crypto crime is the creation of deepfakes: images or videos manipulated to appear authentic.
These are used in investment scams, featuring public figures such as Elon Musk or former Singapore Prime Minister Lee Hsien Loong, promoting fake cryptocurrency projects.
These deepfakes spread on platforms like TikTok and x.com, fooling unsuspecting investors.
To detect these fake videos, check whether lip movements sync with the voice, whether shadows fall realistically, and whether natural facial activity, such as blinking, is present.
Another emerging modality is fake tokens and pump-and-dump schemes.
Scammers create tokens with AI-related names, such as “GPT,” to generate buzz and drive up their value.
They then sell their holdings en masse, leaving investors with worthless assets.
Elliptic has identified numerous scams of this type, highlighting the importance of tracking the creation and movement of these tokens to prevent significant losses.
Cyberattacks facilitated by large language models, such as ChatGPT, are also a growing concern.
These models can generate or review code, potentially helping cybercriminals identify vulnerabilities and create malicious software.
Because mainstream models impose ethical restrictions, criminals instead seek out unrestricted AI tools advertised on dark web forums, such as HackedGPT and WormGPT, to conduct illicit activities.
Using AI to escalate scams and misinformation is another challenge.
Scammers create fraudulent investment, airdrop or giveaway websites, spread them widely, and then disappear with the funds obtained, repeating the process with new sites and campaigns.
The use of AI to design these search-engine-optimized sites makes them harder to detect and report.
Mitigating AI-Enhanced Crypto Crime with Elliptic
Identity theft and the creation of false documents are among the oldest forms of cybercrime, and AI is now enhancing both.
Document forgery services that use AI to create fake passports, ID cards, and utility bills are offered on criminal forums.
Elliptic has tracked payments to these services, revealing the scale of their operations and the need for effective measures to counter these crimes.
Despite the risks, the benefits of AI far outweigh its potential for criminal exploitation.
It is crucial that stakeholders develop measured responses that minimize victimization while allowing AI-driven innovation to continue sustainably.
Elliptic is committed to capturing AI-enhanced crypto crime in its underlying intelligence, helping innovators, financial services, cryptocurrency businesses, and law enforcement effectively detect, track, and mitigate these threats.
Elliptic invites participation in its Delphi survey to contribute to industry best practices and gain early access to ideas on how to prevent and mitigate these emerging crime trends.
Additionally, Elliptic offers demonstrations of its blockchain analysis tools to help protect your business in the changing crypto crime landscape.