In a significant strategic pivot, Uber has announced a major expansion of its partnership with Amazon Web Services (AWS), signaling a deeper commitment to Amazon's proprietary AI and compute chips. The move, confirmed on Tuesday, April 30, 2025, represents a notable shift for the ride-hailing giant, which had previously embarked on a high-profile migration to Oracle Cloud Infrastructure and Google Cloud Platform.
The expanded contract centers on two key silicon technologies: Graviton and Trainium. Uber plans to significantly increase its deployment of AWS's Graviton processors, Amazon's low-power, Arm-based server CPUs for general-purpose cloud computing. The company will also begin trialing Trainium3, AWS's latest-generation chip engineered specifically for training artificial-intelligence models, which Amazon positions as a direct competitor to offerings from industry leader Nvidia.
This development is intriguing given Uber's very public cloud roadmap. In February 2023, the company announced landmark, multi-year agreements with both Oracle and Google Cloud to transition from on-premises infrastructure to a dual-cloud environment. As recently as December 2024, Uber reiterated this commitment, highlighting its work with Arm-powered compute instances from Ampere Computing on Oracle's cloud.
The backstory is complicated by interconnected Silicon Valley relationships. Uber's previous reliance on Oracle's cloud involved chips from Ampere Computing. In December 2024, SoftBank acquired Ampere, and Oracle divested its stake, realizing a substantial pre-tax gain of $2.7 billion. Oracle Chairman Larry Ellison publicly stated that in-house chip design was no longer viewed as a core competitive advantage, with Oracle pivoting instead to securing massive supply deals with Nvidia.
Uber's decision aligns with a broader industry trend. Major technology firms, including Anthropic, OpenAI, and Apple, have also signed or expanded agreements with AWS, citing the performance and cost-efficiency of its proprietary chips. Amazon CEO Andy Jassy revealed in December that the Trainium business line alone had already reached multibillion-dollar revenue scale.
For Uber, the migration involves substantial technical complexity: shifting massive workloads from an x86-dominated environment to one built on the ARM architecture. The potential benefits are compelling: reduced compute costs, improved performance per watt, and early access to specialized AI training hardware, which could accelerate machine learning initiatives in route optimization, dynamic pricing, and autonomous vehicle research.
This shift highlights the evolving nature of enterprise cloud contracts, where flexibility and access to cutting-edge hardware are becoming as important as baseline storage and compute. The move intensifies pressure on other cloud providers to demonstrate similar innovation, underscoring that the battle for cloud supremacy is increasingly being fought at the silicon level.