AI Chatbots Leak Chats to Meta and TikTok While Chrome Deletes Privacy Promise


Key takeaways:

  • Privacy violations by AI chatbots could drive capital toward anonymity coins like Monero.
  • Centralized AI data leaks strengthen the value case for decentralized AI tokens like FET.
  • Google's stealth Gemini deployment may intensify regulatory pressures, aiding data-sovereignty projects.

A new privacy storm hit the tech world on May 7, 2026, as two separate revelations exposed how AI platforms handle user data—or fail to. Researchers at IMDEA Networks Institute published a study showing that all four leading AI chatbots—ChatGPT, Claude, Grok, and Perplexity—embed more than 13 third‑party trackers from advertising networks operated by Meta, Google, and TikTok. At the same time, Google’s Chrome browser silently erased its own promise that on‑device AI would not send data to its servers.
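Tracker audits like the one the researchers performed typically start with a static scan of a page for requests to known ad-network hosts. A minimal sketch (the host list and sample URLs here are illustrative, not taken from the study):

```python
import re

# A small, illustrative subset of well-known third-party tracker hosts
# operated by Meta, Google, and TikTok (not the study's actual list).
TRACKER_HOSTS = {
    "connect.facebook.net",      # Meta Pixel loader
    "www.googletagmanager.com",  # Google Tag Manager
    "www.google-analytics.com",  # Google Analytics
    "analytics.tiktok.com",      # TikTok Pixel
}

def find_trackers(html: str) -> set[str]:
    """Return tracker hosts referenced by URLs embedded in a page."""
    hosts = set(re.findall(r"https?://([a-z0-9.-]+)/", html, re.I))
    return hosts & TRACKER_HOSTS

# Hypothetical page fragment embedding two trackers.
page = (
    '<script src="https://connect.facebook.net/en_US/fbevents.js"></script>'
    '<img src="https://analytics.tiktok.com/i18n/pixel/track.png">'
)
print(sorted(find_trackers(page)))
```

A real audit would also intercept network traffic at runtime, since many trackers are injected dynamically, but the static scan above captures the basic idea of matching embedded URLs against a blocklist.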

The LeakyLM project, unveiled on May 4, discovered that none of the chatbot platforms disclose these trackers in plain language. The most basic leak is the conversation URL—a link that, on several services, is publicly accessible by default. Grok, Elon Musk’s chatbot from xAI, makes guest conversations entirely open without login, and TikTok’s tracker even received verbatim message content via Open Graph metadata. “Leaking a URL is not just metadata—it can be equivalent to leaking the conversation itself,” the researchers warn. Claude and ChatGPT restrict access unless a user shares a link, but they still send conversation URLs and advertising cookies to Meta and Google; Claude does so through Anthropic’s own servers, bypassing ad blockers. Perplexity removed its Meta tracker in April.
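The Open Graph leak the researchers describe works because a shared-conversation page embeds message content in its `<meta property="og:…">` tags, so any party that fetches the URL (a tracker, a crawler, a link-preview bot) can read that content without logging in. A minimal sketch of the extraction, using a hypothetical page (the HTML below is invented for illustration, not taken from any real service):

```python
from html.parser import HTMLParser

# Hypothetical HTML for a publicly shared chat page. Real services
# differ; this only shows how Open Graph tags can carry conversation
# text to anyone who fetches the URL.
SHARED_PAGE = """
<html><head>
<meta property="og:title" content="Shared conversation">
<meta property="og:description" content="User: here is my private question...">
</head><body>...</body></html>
"""

class OGExtractor(HTMLParser):
    """Collects Open Graph <meta property="og:*"> tags from a page."""
    def __init__(self):
        super().__init__()
        self.og = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        a = dict(attrs)
        prop = a.get("property", "")
        if prop.startswith("og:"):
            self.og[prop] = a.get("content", "")

parser = OGExtractor()
parser.feed(SHARED_PAGE)
print(parser.og["og:description"])
```

This is why the researchers treat a leaked URL as equivalent to a leaked conversation: once the URL reaches a third party, the page's own metadata hands over the content.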

The study acknowledges it has not proven that Meta or Google read anyone’s chats, but the infrastructure exists. The findings were submitted to EU Data Protection Authorities on April 13, and xAI was notified on April 17—no company has responded.

Meanwhile, Chrome 148 removed the line “without sending your data to Google servers” from its on‑device AI settings. In Chrome 147, the description explicitly stated that AI features like scam detection run locally without data leaving the device. The new text says only that Chrome “can use AI models that run directly on your device.” This follows the discovery that Chrome had been silently downloading a 4 GB Gemini Nano model file (weights.bin) to qualifying devices without any opt‑in prompt or notification. Privacy researcher Alexander Hanff argues the silent download violates Article 5(3) of the EU ePrivacy Directive, which requires explicit consent before storing data on a user’s device. Deleting the privacy phrase removes one of Google’s key justifications for the stealthy install, but it does not alter the legal exposure.

Neither Google nor the chatbot creators have commented publicly. The LeakyLM team plans to extend its audit to Meta AI, Microsoft Copilot, and Google Gemini, which were excluded this round because they operate as both AI providers and advertising companies, creating a more complex threat model.

Disclaimer

The content on this website is provided for information purposes only and does not constitute investment advice, an offer, or professional consultation. Crypto assets are high-risk and volatile — you may lose all funds. Some materials may include summaries and links to third-party sources; we are not responsible for their content or accuracy. Any decisions you make are at your own risk. Coinalertnews recommends independently verifying information and consulting with a professional before making any financial decisions based on this content.