According to a new report from blockchain intelligence firm TRM Labs, artificial intelligence (AI)-powered cryptocurrency fraud activity has skyrocketed by 500% over the past year. The report details how generative AI is dramatically boosting the scale, speed, and sophistication of crypto-related crimes, including phishing, identity theft, and money laundering.
AI has fundamentally transformed the mechanics of large-scale crypto scams. Previously reliant on human-operated call centers, fraudsters now leverage AI to automate, customize, and expand their operations. Large language models (LLMs) enable the rapid creation of personalized phishing emails and fake investment websites, while AI translation tools help localize scams across different languages and regions. Deepfake audio and video technology is being used to impersonate executives or romantic partners with "increasing realism," accelerating trust-building in schemes like pig butchering and romance scams.
Machine learning tools are also being weaponized for direct attacks: testing stolen credentials in bulk, brute-forcing seed phrases and private keys to drain wallets within seconds, and even identifying vulnerabilities in smart contracts to enable attacks on entire blockchain protocols. "Artificial intelligence does not merely increase outreach volume; it accelerates the entire fraud lifecycle," the TRM Labs report stated.
The devastating impact is reflected in recent high-profile incidents. Last month, a crypto whale lost 1,459 BTC and 2.05 million LTC (worth approximately $282 million) to a social engineering scam. In a separate 2024 case, a deepfake scam led an employee to transfer around $26 million. Just two days ago, U.S. federal agents seized over $61 million in Tether (USDT) linked to wallets laundering proceeds from pig butchering schemes.
Illicit crypto transaction volumes hit a record $158 billion last year, a 145% increase from the prior year, with roughly $30 billion attributed to scams. Other firms corroborate the alarming trend. Vectra AI reported a 1,210% jump in AI scams, projecting losses could reach $40 billion by next year. Chainalysis noted in a recent post that "AI‑enabled scams pulled in 4.5× more revenue per operation and generated 9× more daily transactions than non‑AI scams," urging defenders to adopt similar AI tools to combat the threat.
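A quick back-of-the-envelope check shows what these figures imply. The $158 billion total, the 145% growth rate, and the 4.5x/9x Chainalysis multiples come from the reports cited above; the derived numbers below are our own arithmetic, not figures from any of the firms:

```python
# Sketch: deriving implied figures from the reported statistics.
record_volume_b = 158   # illicit crypto volume last year, in $ billions (reported)
growth = 1.45           # reported 145% year-over-year increase

# A 145% increase means last year's total is 2.45x the prior year's.
prior_year_b = record_volume_b / (1 + growth)
print(f"Implied prior-year illicit volume: ~${prior_year_b:.1f}B")

# Chainalysis: AI scams earn 4.5x revenue per operation on 9x the daily
# transactions, implying each AI-scam transaction nets roughly half the
# revenue of a non-AI one -- smaller individual hits at far higher volume.
rev_per_tx_ratio = 4.5 / 9
print(f"Revenue per transaction vs. non-AI scams: {rev_per_tx_ratio:.2f}x")
```

The second figure is the notable one: AI does not make each individual scam transaction more lucrative; it multiplies throughput, which is consistent with TRM's point that AI accelerates the whole fraud lifecycle rather than just the payout.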
Amid this backdrop of AI-fueled fraud, a contrasting narrative emerged from Strategy's CEO, who framed Bitcoin as a "self-curing" economic loop in contrast to what he described as AI's "self-reinforcing doom loop." He argued that Bitcoin's transparent, rule-based, and mathematically capped monetary policy, exemplified by its predictable halving cycles, provides a stabilizing counterforce in a digital economy increasingly perceived as destabilized by autonomous, error-amplifying AI systems. This perspective is shaping institutional digital asset strategies as investors weigh AI's high-growth volatility against Bitcoin's rule-bound scarcity.