OpenAI has committed to over $10 billion in chip and cloud infrastructure deals, strategically bypassing major tech players like Intel, Google, and Amazon to avoid over-reliance on any single supplier and to accelerate scaling. The move is a deliberate effort to diversify its hardware supply chain and maintain control over its AI development stack.
The cornerstone of this strategy is a newly signed $10 billion agreement with Cerebras, a chipmaker that plans to go public. Under the deal, OpenAI will use 750 megawatts of Cerebras chips through 2028 to power its large language models and other compute-intensive workloads. It follows a massive $1.4 trillion infrastructure initiative last year involving partnerships with Nvidia, AMD, and Broadcom, which helped push OpenAI's private market valuation to $500 billion.
Despite Nvidia CEO Jensen Huang's recent statement that "Everything that OpenAI does runs on Nvidia today," OpenAI is spreading its bets. In September, Nvidia committed $100 billion to help build 10 gigawatts of systems for OpenAI, enough energy to power 8 million homes annually and requiring an estimated 4 to 5 million GPUs. Yet just hours after that announcement, OpenAI revealed a separate 10-gigawatt chip deal with Broadcom for custom AI accelerators, known as XPUs, that the two companies have been co-developing for over a year. The deal significantly boosted Broadcom's market value to over $1.6 trillion.
Meanwhile, other tech giants are largely sidelined. Although OpenAI signed a $38 billion cloud deal with Amazon Web Services (AWS) in November, it made no commitment to use Amazon's proprietary Inferentia or Trainium chips. Similarly, its capacity deal with Google Cloud excludes Google's Tensor Processing Units (TPUs), an omission that is notable given Broadcom's role in manufacturing those very chips. Intel, which passed on an early chance to invest in and supply chips to OpenAI, is now trailing: its new Crescent Island AI inference chip is not slated for sampling until late 2026.