Blockchain security firm SlowMist has issued a severe security warning for the OpenClaw AI ecosystem, revealing a large-scale supply chain poisoning attack within its ClawHub plugin marketplace. The discovery was made after security company Koi Security scanned 2,857 available "skills" (plugins) and flagged 341 of them, roughly 12%, as malicious.
The attack exploited weak review processes in the plugin store, allowing hackers to upload skills that appeared normal on the surface but contained hidden, obfuscated instructions. According to SlowMist's analysis, many of these malicious skills used a two-stage attack. In the first stage, the plugin shipped with commands that masqueraded as routine setup or dependency-installation steps; when run, these commands decoded and executed hidden scripts.
In the second stage, the decoded script downloaded the actual malicious payload, fetching it from hardcoded domains or IP addresses and then running malware on the victim's system. One cited example was a skill named "X (Twitter) Trends," which appeared harmless and useful but concealed a Base64-encoded backdoor capable of stealing passwords, collecting files, and exfiltrating them to a remote server.
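To make the two-stage pattern concrete, here is a minimal, hypothetical sketch of the kind of static heuristic a reviewer might run over a downloaded skill before installing it. The file paths, regexes, and thresholds are illustrative assumptions on my part, not SlowMist's or Koi Security's actual detection logic; real scanners rely on far richer signals.

```python
import re
import sys
from pathlib import Path

# Illustrative heuristics only: patterns commonly associated with
# "decode and fetch" droppers like the ones described above.
SUSPICIOUS_PATTERNS = [
    re.compile(r"curl\s+[^|]+\|\s*(ba)?sh"),        # pipe-to-shell installers
    re.compile(r"\b\d{1,3}(\.\d{1,3}){3}\b"),        # hardcoded IP addresses
    re.compile(r"base64\s+(-d|--decode)"),           # decode-then-execute steps
    re.compile(r"[A-Za-z0-9+/]{200,}={0,2}"),        # long Base64-looking blobs
]

def scan_skill(skill_dir: str) -> list[str]:
    """Flag files in a skill/plugin directory that match suspicious patterns."""
    findings = []
    for path in Path(skill_dir).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for pattern in SUSPICIOUS_PATTERNS:
            if pattern.search(text):
                findings.append(f"{path}: matches {pattern.pattern!r}")
    return findings

if __name__ == "__main__":
    for hit in scan_skill(sys.argv[1] if len(sys.argv) > 1 else "."):
        print(hit)
```

A match from a checker like this is not proof of compromise, but a long Base64 blob inside a "setup" step is exactly the kind of red flag the reported skills exhibited.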
The scale of the attack has alarmed analysts. OpenClaw's rapid growth in recent months has attracted many developers with its open-source agent tools, but that same growth has made it a more attractive target. Koi Security linked most of the malicious skills to a single large campaign, and SlowMist's analysis of more than 400 Indicators of Compromise (IOCs) revealed evidence of organized, batch uploads, with many plugins sharing the same domains and infrastructure.
The risks for users who ran these skills were severe, as some plugins requested shell access or file permissions, granting malware the opportunity to steal credentials, documents, and API keys. Some fake skills impersonated familiar tools like crypto utilities, YouTube helpers, or automation assistants to avoid suspicion during installation.
Security researchers have already begun cleanup efforts: SlowMist reported hundreds of suspicious items in early scans, and Koi Security has released a free scanner for OpenClaw skills. Experts are now warning users not to blindly run plugin commands and to avoid skills that request passwords or broad system access. Developers are advised to test plugins in isolated environments (see the sketch below), and researchers emphasize that independent scans and official sources should be the first line of defense. This incident highlights the inherent risks in rapidly growing AI ecosystems, where marketplace velocity can outpace security controls.
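For developers who want to follow the isolation advice in practice, below is a minimal sketch of one way to do it, assuming Docker is available. The base image, mounted path, and resource limits are placeholder choices for illustration, not part of any official OpenClaw tooling; the key idea is simply to deny the plugin the network access and filesystem reach a stage-two download would need.

```python
import subprocess

def run_skill_sandboxed(skill_dir: str, command: str) -> subprocess.CompletedProcess:
    """Run an untrusted skill inside a locked-down, network-less container.

    Assumes Docker is installed; image name and paths are placeholders.
    """
    docker_cmd = [
        "docker", "run", "--rm",
        "--network=none",               # no outbound connections for payload downloads
        "--read-only",                  # immutable root filesystem
        "--cap-drop=ALL",               # drop all Linux capabilities
        "--memory=256m", "--pids-limit=64",
        "-v", f"{skill_dir}:/skill:ro",  # mount the skill read-only
        "python:3.12-slim",
        "sh", "-c", command,
    ]
    return subprocess.run(docker_cmd, capture_output=True, text=True, timeout=120)

if __name__ == "__main__":
    result = run_skill_sandboxed("./suspect-skill", "python /skill/main.py")
    print(result.stdout)
    print(result.stderr)
```

A sandbox like this will not catch every abuse, but it turns "the skill quietly phoned home during install" into an observable failure rather than a silent compromise.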