OpenAI Secures $10B in Chip Deals, Shifts Strategy Away from Big Tech Giants


Key takeaways:

  • OpenAI's chip diversification strategy signals a structural shift away from Nvidia's AI hardware dominance.
  • Cerebras and Broadcom partnerships highlight the growing market for custom AI accelerators over generic GPUs.
  • Investors should monitor semiconductor stocks for volatility as OpenAI's moves reshape supply chain dynamics.

OpenAI has committed to more than $10 billion in chip and cloud infrastructure deals, strategically bypassing major tech players such as Intel, Google, and Amazon to reduce reliance on any single supplier and accelerate scaling. The move is a deliberate effort to diversify its hardware supply chain and keep control over its AI development stack.

The cornerstone of this strategy is a newly signed $10 billion agreement with chipmaker Cerebras, which plans to go public. OpenAI will use 750 megawatts of Cerebras chips through 2028 to power its large language models and other compute-intensive workloads. The deal follows last year's massive $1.4 trillion infrastructure push involving partnerships with Nvidia, AMD, and Broadcom, which helped lift OpenAI's private-market valuation to $500 billion.

Despite Nvidia CEO Jensen Huang's recent statement that "Everything that OpenAI does runs on Nvidia today," OpenAI is spreading its bets. In September, Nvidia committed $100 billion to help build 10 gigawatts of systems for OpenAI, roughly the power needed to supply 8 million homes and requiring an estimated 4 to 5 million GPUs. Just hours after that announcement, however, OpenAI revealed a separate 10-gigawatt chip deal with Broadcom. The Broadcom chips are custom AI accelerators, known as XPUs, that the two companies have co-developed for more than a year. The deal pushed Broadcom's market value above $1.6 trillion.
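As a rough sanity check on those figures (not a calculation from the article itself), the "8 million homes, 4 to 5 million GPUs" comparison can be reproduced with back-of-envelope arithmetic. The per-home and per-accelerator power draws below are illustrative assumptions, not reported numbers; this is a minimal sketch under those assumptions.

```python
# Back-of-envelope check of the reported 10 GW comparison.
# The per-home and per-GPU figures are assumptions for illustration only.

TOTAL_POWER_W = 10e9        # 10 gigawatts of planned systems
AVG_HOME_POWER_W = 1_250    # assumed average continuous household draw (~1.25 kW)
GPU_POWER_W = 2_200         # assumed draw per accelerator incl. cooling/overhead (~2.2 kW)

homes_equivalent = TOTAL_POWER_W / AVG_HOME_POWER_W
accelerator_count = TOTAL_POWER_W / GPU_POWER_W

print(f"Homes at ~1.25 kW each: {homes_equivalent / 1e6:.1f} million")      # ~8.0 million
print(f"Accelerators at ~2.2 kW each: {accelerator_count / 1e6:.1f} million")  # ~4.5 million
```

Under these assumed power draws, the arithmetic lands close to the article's stated equivalences, which suggests the reported figures are internally consistent.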

Meanwhile, other tech giants are largely sidelined. OpenAI signed a $38 billion cloud deal with Amazon Web Services (AWS) in November, but made no commitment to use Amazon's proprietary Inferentia or Trainium chips. Similarly, a capacity deal with Google Cloud excludes Google's Tensor Processing Units (TPUs), even though Broadcom helps manufacture those chips. Intel, which passed on an early chance to invest in OpenAI and supply it with chips, is now trailing: its new Crescent Island AI inference chip is not slated for sampling until late 2026.

Disclaimer

The content on this website is provided for information purposes only and does not constitute investment advice, an offer, or professional consultation. Crypto assets are high-risk and volatile — you may lose all funds. Some materials may include summaries and links to third-party sources; we are not responsible for their content or accuracy. Any decisions you make are at your own risk. Coinalertnews recommends independently verifying information and consulting with a professional before making any financial decisions based on this content.