Ethereum co-founder Vitalik Buterin and Ethereum Foundation AI lead Davide Crapis have published a detailed proposal for a new privacy model designed to protect users of AI chatbots. The core issue they address is that current AI systems expose sensitive user data because API calls can be logged, tracked, and linked back to real-world identities through methods like email logins or credit card payments.
The developers argue this creates significant risks, including user profiling, tracking, and potential legal exposure if chat logs are presented in court. They note that paying per request with cryptocurrency, while nominally anonymous, is impractical: every request would need a slow and costly on-chain transaction, and the resulting publicly visible record would itself undermine privacy.
To solve this, Buterin and Crapis propose a deposit-based system. A user would deposit funds into a smart contract once and could then make thousands of private API calls. This ensures the AI provider is paid while the user avoids repeatedly confirming their identity. The system leverages zero-knowledge cryptography to allow users to prove they have paid for a request without revealing who they are.
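The deposit-then-spend flow can be sketched in a few lines of Python. This is a toy illustration, not code from the proposal: the names (`DepositContract`, `make_commitment`) are hypothetical, and a plain SHA-256 commitment stands in for the zero-knowledge machinery a real system would use.

```python
# Hypothetical sketch of the deposit-then-spend flow (illustrative names only).
import hashlib
import secrets

class DepositContract:
    """Toy stand-in for the on-chain deposit contract."""
    def __init__(self):
        # The contract records only commitments, not identities or balances.
        self.commitments = set()

    def deposit(self, commitment: str):
        # One on-chain transaction up front; later API calls never touch
        # the chain, so they cannot be linked back to the depositor.
        self.commitments.add(commitment)

def make_commitment(secret: bytes, balance: int) -> str:
    # A real system would use a ZK-friendly hash inside a proof circuit;
    # SHA-256 here just illustrates hiding the secret and balance.
    return hashlib.sha256(secret + balance.to_bytes(8, "big")).hexdigest()

# The user deposits once...
contract = DepositContract()
user_secret = secrets.token_bytes(32)
commitment = make_commitment(user_secret, balance=1_000)
contract.deposit(commitment)

# ...then each API call carries a zero-knowledge proof of payment rather
# than a payment. (The proof itself is elided; this membership check
# stands in for verifying it.)
assert commitment in contract.commitments
```

Each subsequent request would attach a proof that the sender knows a secret behind one of the recorded commitments, without revealing which one.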
Key technical components of the proposal include Rate-Limit Nullifiers (RLN), which allow anonymous requests while still catching protocol cheaters. Each request is assigned a ticket index, and the user generates a ZK-STARK proof demonstrating that they are spending from their deposited balance and computing any refund owed, since the cost of an AI request can vary.
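The balance-and-refund arithmetic that such a proof would attest to can be illustrated directly. This is an assumed simplification: the fixed per-ticket price cap (`PRICE_CAP`) and the function name are hypothetical, and in the real design these updates would be proven inside a ZK-STARK rather than computed in the open.

```python
# Illustrative bookkeeping for ticketed requests with variable costs.
# PRICE_CAP is a hypothetical fixed amount charged per ticket up front.
PRICE_CAP = 10

def spend_ticket(balance: int, actual_cost: int) -> tuple[int, int]:
    """Charge the cap for one ticket, then refund the difference once
    the request's true cost is known (costs vary per request)."""
    assert 0 <= actual_cost <= PRICE_CAP, "cost exceeds the per-ticket cap"
    refund = PRICE_CAP - actual_cost
    new_balance = balance - PRICE_CAP + refund  # net charge is actual_cost
    return new_balance, refund

balance = 100
balance, refund = spend_ticket(balance, actual_cost=7)
# balance is now 93 (net charge 7), with a refund of 3 against the cap
```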
A unique nullifier is generated for each ticket, so any attempt to reuse a ticket index is immediately identified as a double-spend. To address broader abuse, such as harmful prompts, jailbreaks, or requests for illegal content, the protocol incorporates a dual staking mechanism: one layer enforces strict mathematical rules (like preventing double-spending), while the other enforces the AI provider's content policies, allowing malicious actors to be penalized without compromising the anonymity of honest users.
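The double-spend check described above can be sketched as follows. Assumed details: the nullifier is modeled as a plain SHA-256 hash of the user's secret and the ticket index (a real RLN construction uses a ZK-friendly hash inside a proof), and `accept_request` is a hypothetical name for the provider-side check.

```python
# Sketch of nullifier-based double-spend detection (illustrative names).
import hashlib

def nullifier(secret: bytes, ticket_index: int) -> str:
    # Deterministic per (secret, index): reusing an index reproduces the
    # same nullifier, which is how a double-spend is spotted, while the
    # value itself reveals nothing about who the user is.
    return hashlib.sha256(secret + ticket_index.to_bytes(4, "big")).hexdigest()

seen: set[str] = set()  # nullifiers the provider has already accepted

def accept_request(secret: bytes, ticket_index: int) -> bool:
    n = nullifier(secret, ticket_index)
    if n in seen:
        return False  # same ticket index reused: rejected as a double-spend
    seen.add(n)
    return True

secret = b"\x01" * 32
assert accept_request(secret, 0)       # first use of ticket 0 is accepted
assert not accept_request(secret, 0)   # immediate reuse is detected
assert accept_request(secret, 1)       # a fresh index is fine
```

Because the provider only ever sees nullifiers, it can reject reuse without learning which deposit, or which user, a request belongs to.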
The developers emphasize that, with AI usage growing daily, the industry can no longer ignore these privacy concerns. Their proposed model aims to safeguard user privacy while allowing the technology to scale responsibly.