Google Unveils 8th-Gen TPU Chips and $750M AI Fund to Accelerate Agentic AI Adoption


Google Cloud made a dual announcement at its Cloud Next 2026 conference, pairing a major hardware advance with a substantial financial commitment to drive adoption of agentic artificial intelligence. The company unveiled its 8th-generation Tensor Processing Units (TPUs), splitting the lineup for the first time into separate chips for AI model training and inference, and simultaneously launched a $750 million fund to support its consulting and enterprise partners in deploying AI solutions.

The new TPU chips, co-developed with Broadcom and designed alongside Google DeepMind, represent a strategic shift. Previous TPU generations handled both training and inference on a single chip. The rise of "agentic AI"—where models operate in continuous loops with minimal human input—prompted the move to specialized hardware. The TPU 8t is designed for training AI models, while the TPU 8i is built for inference, the process of running trained models in production.

Google claims the TPU 8i inference chip delivers an 80% improvement in performance-per-dollar over its predecessor, Ironwood. It carries 384 megabytes of SRAM per chip, triple Ironwood's amount, which Google says eliminates latency bottlenecks during periods of high user demand; the chip also offers up to twice the performance-per-watt. The training chip, TPU 8t, scales to superpods of 9,600 chips with 2 petabytes of high-bandwidth memory, doubling Ironwood's interchip bandwidth. Google states this can cut frontier model development time "from months to weeks" and deliver 2.8 times the performance of the seventh generation at the same price.
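To make the cited figures concrete, the following sketch restates them as simple ratios against the prior generation. All headline numbers come from Google's announcement; the per-chip HBM figure is an inference from the cited superpod totals, not a number Google stated directly.

```python
# Relative figures Google cites for the 8th-gen TPUs, expressed as
# ratios against the prior generation (Ironwood). Nothing here is measured.

# TPU 8i (inference) vs. Ironwood
perf_per_dollar_gain = 1.80          # "80% improvement in performance-per-dollar"
sram_mb_8i = 384                     # per-chip SRAM on TPU 8i
sram_mb_ironwood = sram_mb_8i / 3    # "triple the amount in Ironwood"
perf_per_watt_gain = 2.0             # "up to two times better performance-per-watt"

# TPU 8t (training) superpod figures
superpod_chips = 9_600               # chips per superpod
hbm_per_superpod_pb = 2              # petabytes of HBM per superpod
perf_gain_same_price = 2.8           # "2.8 times the performance ... at the same price"

# Implied per-chip HBM (our inference from the totals above):
# 2 PB = 2,000,000 GB spread across 9,600 chips
hbm_per_chip_gb = hbm_per_superpod_pb * 1_000_000 / superpod_chips

print(f"Implied Ironwood SRAM per chip: {sram_mb_ironwood:.0f} MB")
print(f"Implied HBM per TPU 8t chip: ~{hbm_per_chip_gb:.0f} GB")
```

Working back from the superpod totals suggests roughly 208 GB of HBM per TPU 8t chip, which is in the same range as current-generation AI accelerators, though Google has not published the per-chip figure.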

In a parallel move to bolster its ecosystem, Google Cloud announced a $750 million fund. The initiative is aimed at global consulting firms, systems integrators, software vendors, and channel partners to help them develop and deploy agentic AI solutions. The funding will support activities like identifying AI use cases, building and testing agentic systems, and training teams. A portion will also fund the placement of Google's forward-deployed engineers within partner organizations, including major firms like Accenture, Capgemini, Deloitte, and TCS.

Partners will gain expanded access to tools and resources, including AI value assessments, Gemini proofs-of-concept, and early access to Gemini models for select consulting giants like Accenture, Bain & Company, BCG, Deloitte, and McKinsey. The investment also supports the rollout of enterprise-ready AI agents within the Gemini Enterprise Agent Platform, with expected integrations from companies like Adobe, Salesforce, and ServiceNow.

Google Cloud CEO Thomas Kurian framed the announcements as the culmination of over a year of work to prepare an "agent-ready" technology stack, highlighting the industry shift from models that answer questions to those that perform tasks. Both the TPU 8t and TPU 8i chips will be available to Google Cloud customers later this year. The company continues to use Nvidia GPUs and confirmed it will be among the first to offer Nvidia's upcoming Vera Rubin platform.
