Security researchers from the University of California have uncovered severe vulnerabilities in third-party Large Language Model (LLM) routers that pose a direct threat to cryptocurrency wallets and developer assets. A groundbreaking study, conducted between early 2024 and mid-2025, reveals how these AI integration tools, designed to streamline access to multiple AI providers, create dangerous attack vectors for malicious actors.
The research team conducted comprehensive testing on 428 LLM routers, including 28 paid services and 400 free alternatives gathered from public communities. Their findings are alarming: nine routers were actively injecting malicious code into developer environments, seventeen routers accessed the researchers' Amazon Web Services credentials without authorization, and, most critically, one router successfully executed a theft of Ethereum from a controlled test wallet. The total loss in the demonstration was under $50, but it proved the vulnerability is exploitable for significant financial theft.
LLM routers function as third-party API brokers that consolidate access to AI providers like OpenAI, Anthropic, and Google. Developers, particularly in blockchain, use them to streamline workflows when building smart contracts or applications. The core vulnerability stems from their architecture: they terminate encrypted connections, giving them full, plaintext access to every message passing through, including sensitive data like private keys and seed phrases when developers use AI coding assistants.
The paper details three primary attack vectors: Code Injection, where malicious code is inserted into AI-generated responses; Credential Harvesting, where authentication tokens are captured; and Data Interception, where private keys and seed phrases are extracted from AI interactions. The research also highlighted the added risk of "YOLO mode," a setting in many AI agent frameworks that allows AI to execute commands automatically without user confirmation, potentially running malicious injected instructions unchecked.
Co-author Chaofan Shou stated on X, "26 LLM routers are secretly injecting malicious tool calls and stealing creds. One drained our client $500k wallet." The researchers concluded that these routers sit on a critical trust boundary currently treated as safe by default.
In response, security experts and firms are updating guidelines. Recommendations for developers include: implementing API key rotation, using sandbox environments for testing, conducting manual code review of all AI-generated output, and, most importantly, never allowing private keys or seed phrases to pass through an AI agent session. The long-term solution proposed is for AI companies to cryptographically sign their responses, allowing verification of authenticity.