Google’s Threat Intelligence Group has issued a stark warning: hackers are now leveraging artificial intelligence to craft advanced exploits that can sidestep multi-factor authentication (MFA). This development poses a direct threat to the cryptocurrency sector, where exchanges, wallets, and DeFi platforms rely heavily on two-factor authentication (2FA) to protect user funds.
According to a report published on Google’s Cloud Blog, analysts identified what they believe is the first zero-day exploit built with AI assistance. A criminal hacking group used an AI model to write a Python script that bypassed 2FA in an open-source web administration tool. Google collaborated with the vendor to halt mass exploitation before it escalated.
The script contained telltale signs of AI generation: unusually detailed educational docstrings, a hallucinated CVSS severity score, and formatting typical of large language model outputs. Google stated, “Based on the structure and content of these exploits, we have high confidence that the actor likely leveraged an AI model to support the discovery and weaponization of this vulnerability.”
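The indicators Google cites, such as tutorial-style docstrings and a fabricated CVSS score embedded in the code, lend themselves to simple triage heuristics. As an illustrative sketch (the thresholds and regex below are hypothetical, not taken from Google’s report), a scanner for submitted Python scripts might look like this:

```python
import ast
import re


def ai_generation_signals(source: str) -> list[str]:
    """Flag heuristic indicators of LLM-generated Python: unusually long
    tutorial-style docstrings and CVSS scores embedded in the source.
    Thresholds here are illustrative only, not from Google's report."""
    signals = []
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.Module, ast.ClassDef,
                             ast.FunctionDef, ast.AsyncFunctionDef)):
            doc = ast.get_docstring(node)
            if doc and len(doc.splitlines()) > 10:
                signals.append(
                    f"unusually long docstring ({len(doc.splitlines())} lines)")
    # CVSS severity scores normally live in advisories, not exploit code
    if re.search(r"CVSS[:\s]*\d+\.\d", source, re.IGNORECASE):
        signals.append("embedded CVSS score")
    return signals
```

Heuristics like these only triage; they cannot prove AI involvement, which is why Google hedges its own attribution as “high confidence.”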
The threat extends beyond a single incident. State‑backed groups are increasingly turning to AI for cyber operations:
Chinese threat actors: The group UNC2814 employed persona-driven jailbreaking: it instructed an AI model to act as a senior security auditor, then directed it to search TP‑Link embedded-device firmware and Odette File Transfer Protocol (OFTP) implementations for remote code execution vulnerabilities. Another China-linked group used tools like Strix and Hexstrike to target a Japanese tech firm and an East Asian cybersecurity company.
North Korea’s APT45: This group sent thousands of repetitive prompts to recursively analyze known CVE entries and validate proof-of-concept exploits, generating a robust, AI‑assisted arsenal that would be impractical to manage manually.
Russian hackers: Suspected Russian actors have used AI to develop polymorphic malware and obfuscation networks, accelerating development cycles and evading detection.
Google also flagged a new malware type, PROMPTSPY, which uses AI models to interpret system states and dynamically generate commands, allowing attackers to hand off operational decisions to the AI. Additionally, hackers increasingly procure anonymized premium access to language models via specialized middleware, bypassing restrictions through trial account abuse. A group tracked as TeamPCP (UNC6780) has begun compromising AI software dependencies to establish footholds for ransomware.
In response, Google is deploying defensive AI tools. Its Big Sleep agent identifies software vulnerabilities, and CodeMender uses Gemini’s reasoning to automatically patch flaws. Accounts caught using Gemini for malicious purposes are disabled.
For crypto users, the implications are serious. AI-assisted 2FA bypasses threaten the security of exchange accounts and personal wallets. As AI‑driven attacks become more sophisticated, the industry will need to adopt equally advanced defenses, including hardware security keys and behavioral biometrics, to safeguard assets.
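Part of the reason code-based 2FA can be phished or relayed is structural: a standard TOTP code is just a computation over a shared secret and the current time window, with nothing binding it to the site being logged into, whereas hardware keys cryptographically bind each login to the origin. A minimal sketch of standard RFC 6238 TOTP generation (stdlib only) makes that property concrete:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, at=None, digits: int = 6, period: int = 30) -> str:
    """Generate an RFC 6238 TOTP code from a base32-encoded shared secret.

    The code depends only on the secret and the time window -- nothing
    ties it to the site being visited, so a phishing page can relay it
    to the real exchange in real time.
    """
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if at is None else at) // period)
    msg = struct.pack(">Q", counter)                    # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()  # HOTP inner HMAC-SHA1
    offset = digest[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

With the RFC 6238 test secret (ASCII `12345678901234567890`, base32 `GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ`) and timestamp 59, the function reproduces the spec’s published 8-digit vector `94287082`. Hardware security keys (FIDO2/WebAuthn) avoid this relay weakness entirely because the authenticator signs a challenge that includes the requesting origin.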