Can AI bots steal your crypto? The rise of digital thieves

Timothy Wuich

AI Bots and Cryptocurrency Cybercrime

Central to today’s AI-driven cybercrime landscape are AI bots: self-learning software programs built to process vast amounts of data, make autonomous decisions, and execute intricate tasks without human oversight. These bots have transformed industries such as finance, healthcare, and customer service, but they have also been weaponized by cybercriminals, especially within the realm of cryptocurrency, where they automate and continuously refine attacks in ways that make them more dangerous than conventional hacking techniques.

In contrast to traditional hacking methods, which demand manual effort and specialized knowledge, AI bots can completely automate attacks, adjust to new cryptocurrency security protocols, and even fine-tune their strategies over time. This makes them significantly more effective than human hackers, who are constrained by time, resources, and the potential for errors.

Why Are AI Bots So Dangerous?

The primary hazard introduced by AI-driven cybercrime is scale. A lone hacker trying to infiltrate a crypto exchange or deceive users into revealing their private keys can accomplish only so much. However, AI bots can execute thousands of attacks simultaneously while continuously refining their techniques.

  • Speed: AI bots can scan millions of blockchain transactions, smart contracts, and websites within minutes, pinpointing vulnerabilities in wallets, decentralized finance (DeFi) protocols, and exchanges that can lead to crypto wallet hacks.
  • Scalability: While a human scammer may send phishing emails to a few hundred individuals, an AI bot can dispatch personalized and meticulously crafted phishing emails to millions in the same period.
  • Adaptability: Machine learning empowers these bots to enhance their tactics with every failed attack, making them trickier to detect and thwart.

This capacity to automate, adapt, and launch large-scale attacks has resulted in a rise in AI-driven crypto fraud, escalating the need for effective crypto fraud prevention.

Case Study: The Andy Ayrey Incident

In October 2024, the X account of Andy Ayrey, the developer of the AI bot Truth Terminal, was compromised by hackers. The attackers utilized Ayrey’s account to promote a fraudulent memecoin named Infinite Backrooms (IB). This malicious campaign caused IB’s market capitalization to skyrocket to $25 million. Within just 45 minutes, the attackers liquidated their holdings, netting over $600,000.

AI-Driven Scams

AI-powered bots are not only automating crypto scams—they are also becoming more intelligent, targeted, and harder to detect. Below are some of the most dangerous types of AI-driven scams currently employed to steal cryptocurrency assets:

  • Phishing Attacks: While phishing attacks are not a new phenomenon in crypto, AI has escalated their threat level. Today’s AI bots generate personalized emails that closely resemble real communications from platforms like Coinbase or MetaMask. By collecting personal information from leaked databases, social media, and even blockchain records, they create convincing scams (a simple detection-side sketch follows this list). For example, in early 2024, a phishing attempt using AI targeted Coinbase users, resulting in losses of nearly $65 million.
  • Deepfake Scams: Imagine viewing a video of a respected crypto influencer or CEO urging you to invest—only to discover that it’s entirely fake. This is the reality of deepfake scams driven by AI. These bots produce ultra-realistic videos and audio, fooling even experienced crypto holders into transferring funds.
  • Malware Attacks: In 2022, specific malware targeted browser-based wallets like MetaMask. A variant called Mars Stealer could extract private keys from over 40 different wallet browser extensions and 2FA applications, draining any funds it detected. Often, this malware finds its way onto systems via phishing links, counterfeit software downloads, or pirated crypto tools.

Exploiting Smart Contracts

Vulnerabilities in smart contracts offer a prime opportunity for hackers, and AI bots are exploiting them faster than ever. These bots continuously scan blockchains such as Ethereum and BNB Smart Chain, searching for flaws in newly launched DeFi projects. Once they spot a vulnerability, they exploit it automatically, often within minutes. Researchers have shown that AI chatbots, such as those powered by GPT-3, can analyze smart contract code for exploitable weaknesses. For instance, Stephen Tong, co-founder of Zellic, demonstrated an AI chatbot that identified a flaw in a smart contract’s “withdraw” function similar to the one exploited in the Fei Protocol attack, which resulted in an $80 million loss.
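To make the kind of withdraw-function flaw such tools hunt for more concrete, here is a minimal, purely illustrative Python simulation of a reentrancy-style bug, the general class of vulnerability associated with the Fei Protocol incident. The vault and attacker below are hypothetical stand-ins, not real contract code: the point is simply that paying out before updating the caller’s balance lets a malicious caller re-enter withdraw and drain far more than it deposited.

```python
# Toy, self-contained simulation of a reentrancy-style withdraw bug.
# All names and amounts are hypothetical; this is not real Solidity code.

class VulnerableVault:
    def __init__(self, total_funds):
        self.total_funds = total_funds   # funds from other depositors
        self.balances = {}

    def deposit(self, user, amount):
        self.balances[user] = self.balances.get(user, 0) + amount
        self.total_funds += amount

    def withdraw(self, user):
        amount = self.balances.get(user, 0)
        if amount > 0 and self.total_funds >= amount:
            self.total_funds -= amount   # value leaves with the transfer...
            user.receive(self, amount)   # ...and the external call runs next
            self.balances[user] = 0      # balance is zeroed too late (the bug)

class Attacker:
    def __init__(self):
        self.stolen = 0

    def receive(self, vault, amount):
        self.stolen += amount
        # Re-enter while the vault still thinks our balance is intact.
        if vault.total_funds >= amount:
            vault.withdraw(self)

vault = VulnerableVault(total_funds=100)  # 100 units belonging to others
attacker = Attacker()
vault.deposit(attacker, 10)
vault.withdraw(attacker)
print(attacker.stolen)      # 110: the deposit plus everyone else's funds
print(vault.total_funds)    # 0: the vault is drained
```

The fix is the familiar checks-effects-interactions ordering: zero the balance before making the external call. Ordering mistakes like this are exactly the kind of pattern an automated code auditor, AI-assisted or not, can flag.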

Brute-force attacks, which once took a long time to execute, are now alarmingly efficient thanks to AI bots. By analyzing previous password breaches, these bots quickly identify patterns and crack passwords and seed phrases far faster than before. A 2024 study on desktop cryptocurrency wallets, including Sparrow, Etherwall, and Bither, found that weak passwords significantly reduce resistance to brute-force attacks, underscoring the importance of strong, complex passwords for protecting digital assets.
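As a rough illustration of why password strength matters here, the back-of-the-envelope sketch below estimates worst-case cracking time at an assumed guess rate. The rate and character sets are hypothetical, and real wallets add key stretching that slows attackers further, but the exponential gap between short and long passwords is the point.

```python
# Illustrative only: real attack speed depends on the wallet's key-derivation
# function and the attacker's hardware. The guess rate below is an assumption.
GUESSES_PER_SECOND = 1e9  # assumed rate for a fast offline attacker

def years_to_exhaust(charset_size: int, length: int) -> float:
    """Worst-case time to try every password of a given length, in years."""
    return charset_size ** length / GUESSES_PER_SECOND / (365 * 24 * 3600)

for label, charset, length in [
    ("8 lowercase letters", 26, 8),
    ("8 mixed case + digits + symbols", 94, 8),
    ("14 mixed case + digits + symbols", 94, 14),
]:
    print(f"{label}: ~{years_to_exhaust(charset, length):.1e} years")
```

Even granting an optimistic guess rate, a long mixed-character password pushes exhaustive search past the age of the universe, while an eight-letter lowercase one falls in minutes.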

AI in Trading Bots and Market Manipulation

AI is also being leveraged in the realm of cryptocurrency trading bots—often as a buzzword to con investors, and occasionally as tools for technical exploits. A notable instance is YieldTrust.ai, which in 2023 promoted an AI bot that supposedly generated 2.2% returns per day—an astronomical and unrealistic profit. Authorities from several states investigated and determined that there was no evidence the “AI bot” even existed; it appeared to be a standard Ponzi scheme, utilizing AI as a trendy term to attract victims. YieldTrust.ai was eventually shut down by authorities, but not before deceiving investors with its polished marketing.
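A quick compounding calculation shows why a guaranteed 2.2% daily return should set off alarm bells; the deposit figure below is purely illustrative arithmetic, not data from the case.

```python
# Compound a hypothetical $1,000 at 2.2% per day for one year.
balance = 1_000.0
for _ in range(365):
    balance *= 1.022
print(f"${balance:,.0f}")  # roughly $2.8 million
```

No legitimate strategy multiplies money nearly 3,000-fold in a year, which is what such a return implies.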

Even when automated trading bots are legitimate, they are frequently not the money-making machines that scammers advertise. For instance, blockchain analysis firm Arkham Intelligence highlighted a situation where an alleged arbitrage trading bot (likely presented as AI-driven) conducted a convoluted series of trades, including a $200 million flash loan—and ended up netting a mere $3.24 in profit. Many “AI trading” scams may simply take your deposit, undertake random trades (if any at all), and then provide excuses when you attempt to withdraw funds. Some dubious operators even deploy social media AI bots to fabricate a track record (such as fake testimonials or bots continually posting “winning trades”) to create an illusion of success.

On a more technical note, criminals use automated bots (not necessarily labeled as AI, but sometimes referred to as such) to exploit crypto markets and infrastructure. For example, front-running bots in DeFi automatically insert their own transactions around pending trades to skim value through sandwich attacks, while flash loan bots perform rapid trades to capitalize on price discrepancies or unprotected smart contracts. While these strategies require coding skills and are generally not marketed to victims, they serve as direct theft tools for hackers. AI could improve these bots by optimizing strategies faster than humans can; however, as noted, even the most advanced bots do not guarantee substantial gains. The markets remain competitive and unpredictable, a reality that even the most sophisticated AI cannot reliably foresee.
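To illustrate the sandwich mechanism, the toy simulation below models a constant-product (x · y = k) pool with hypothetical reserves and no fees. The bot buys just ahead of a pending victim trade, the victim executes at a worse price, and the bot immediately sells back into the inflated price.

```python
# Toy constant-product AMM; reserves and trade sizes are hypothetical and
# fees/gas are ignored, so this shows the mechanism, not realistic profits.

class Pool:
    def __init__(self, eth: float, usdc: float):
        self.eth, self.usdc = eth, usdc

    def buy_eth(self, usdc_in: float) -> float:
        k = self.eth * self.usdc
        self.usdc += usdc_in
        eth_out = self.eth - k / self.usdc
        self.eth -= eth_out
        return eth_out

    def sell_eth(self, eth_in: float) -> float:
        k = self.eth * self.usdc
        self.eth += eth_in
        usdc_out = self.usdc - k / self.eth
        self.usdc -= usdc_out
        return usdc_out

pool = Pool(eth=1_000, usdc=2_000_000)   # ~2,000 USDC per ETH before any trades

bot_eth = pool.buy_eth(100_000)          # 1) bot front-runs the pending trade
victim_eth = pool.buy_eth(100_000)       # 2) victim buys at a worse price
bot_usdc = pool.sell_eth(bot_eth)        # 3) bot back-runs, selling the gain

print(f"victim's effective price: ~{100_000 / victim_eth:,.0f} USDC per ETH")
print(f"bot profit: ~{bot_usdc - 100_000:,.2f} USDC")
```

Running this prints an effective price for the victim well above the pre-trade 2,000 USDC per ETH and a positive profit for the bot; in practice that margin is eroded by pool fees, gas, and rival bots, consistent with the slim real-world profits noted above.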

The danger to victims is very real: if a trading algorithm malfunctions or is designed with malicious intent, it can wipe out your funds within seconds. There are documented cases of rogue bots on exchanges that triggered flash crashes or drained liquidity pools, causing significant losses for users.

The AI-Driven Erosion of Security

AI is teaching cybercriminals how to infiltrate crypto platforms, enabling an influx of less-skilled attackers to launch credible assaults. This accounts for why crypto phishing and malware campaigns have multiplied so rapidly—AI tools allow bad actors to automate their scams and continuously fine-tune them based on effectiveness.

Additionally, AI is intensifying malware threats and hacking strategies aimed at cryptocurrency users. One area of concern is AI-generated malware—malicious software that uses AI to adapt and evade detection. In 2023, researchers introduced a proof-of-concept known as BlackMamba, a polymorphic keylogger utilizing an AI language model (similar to the technology behind ChatGPT) to modify its code with each execution. This means that every time BlackMamba operates, it generates a new variant within memory, assisting it in bypassing antivirus and endpoint security measures. In trials, this AI-generated malware went undetected by a leading endpoint detection and response system. Once active, it could covertly capture everything the user types—including passwords for crypto exchanges or seed phrases for wallets—and transmit that data to attackers.

While BlackMamba is merely a laboratory demonstration, it underscores a significant threat: criminals can leverage AI to develop shape-shifting malware that specifically targets cryptocurrency accounts and is far more challenging to detect than traditional viruses. Even in the absence of sophisticated AI malware, threat actors exploit the popularity of AI to spread classic Trojans. Scammers routinely publish fake “ChatGPT” or AI-related applications containing malware, knowing that users might lower their defenses because of the AI branding. For instance, security experts have identified fraudulent sites mimicking the ChatGPT website, complete with a “Download for Windows” button; clicking it silently installs a crypto-stealing Trojan on the victim’s device.

Beyond the malware itself, AI is lowering the skill barrier for prospective hackers. Previously, a criminal needed some technical knowledge to create phishing sites or viruses. Now, underground “AI-as-a-service” offerings handle much of the work. Illicit AI chatbots like WormGPT and FraudGPT have emerged on dark web forums, offering to generate phishing emails, malware code, and hacking advice on demand. For a fee, even people without technical expertise can use these AI bots to produce convincing scam sites, craft new malware variants, and scan for software vulnerabilities.
