Meta to Buy Millions of Nvidia Blackwell and Rubin Chips in Massive Deal
Meta Platforms has signed a multiyear agreement with Nvidia to purchase millions of AI chips, including Blackwell and upcoming Rubin GPUs. The deal, estimated to be worth tens of billions of dollars, marks the first large-scale deployment of Nvidia’s standalone Grace CPUs as Meta scales its infrastructure for personal superintelligence.
Key Intelligence
Key Facts
- Multiyear deal involves the purchase of millions of Nvidia AI chips
- Estimated deal value reaches into the tens of billions of dollars
- Includes current Blackwell GPUs and forthcoming Rubin GPU architecture
- Marks the first large-scale deployment of standalone Nvidia Grace CPUs
- Infrastructure designed to power 'personal superintelligence' for billions of users
- Focuses on significant performance-per-watt improvements for data centers
| Hardware | Generation | Type | Role |
|---|---|---|---|
| Blackwell | Current | GPU | AI Training & Inference |
| Rubin | Future | GPU | Next-gen AI Scaling |
| Grace | Current | CPU | High-efficiency Compute |
| Vera | Future | CPU | Next-gen CPU Architecture |
Analysis
The partnership between Meta Platforms and Nvidia has entered a transformative new phase, as the social media giant commits to a multiyear procurement deal for millions of AI chips. This agreement, estimated by analysts to be worth tens of billions of dollars, represents one of the largest single commitments to AI infrastructure in history. While Meta has long been Nvidia’s most significant customer, the scope of this deal—encompassing both current Blackwell GPUs and the forthcoming Rubin architecture—underscores Meta’s aggressive pursuit of personal superintelligence for its billions of users across Facebook, Instagram, and WhatsApp.
A critical technical shift in this agreement is Meta’s first large-scale deployment of Nvidia’s standalone Grace and Vera central processing units (CPUs). Historically, Meta and other hyperscalers have paired Nvidia GPUs with x86 processors from Intel or AMD. By pairing its GPUs with Nvidia’s own CPUs instead, Meta is signaling a move toward a more integrated, ARM-based architecture. Nvidia claims this transition will deliver significant performance-per-watt improvements, a vital metric as Meta scales its data centers to handle the massive power demands of next-generation large language models (LLMs). This move also tightens Nvidia’s grip on the data center stack, extending its reach beyond the GPU to the primary compute engine.
The strategic timing of this deal is particularly noteworthy. Meta is currently in a high-stakes race with OpenAI, Google, and Microsoft to define the next era of generative AI. By securing a massive supply of Blackwell and Rubin chips, Meta is effectively future-proofing its infrastructure against potential supply chain bottlenecks. The inclusion of the Rubin architecture—Nvidia’s next-generation platform expected to succeed Blackwell—suggests that Meta is already planning for the training of Llama 5 and beyond. This massive hardware moat allows Meta to continue its strategy of releasing high-performing open-source models while simultaneously powering its consumer-facing AI features.
Furthermore, the deal highlights the limitations of internal chip development for even the largest tech companies. Despite Meta’s significant investment in its own MTIA (Meta Training and Inference Accelerator) chips, the company remains deeply dependent on Nvidia for its most intensive training tasks. The sheer scale of this purchase suggests that while internal silicon may handle specific inference workloads, Nvidia’s ecosystem—comprising both hardware and the CUDA software layer—remains the industry standard for cutting-edge AI development.
For Nvidia, this deal reinforces its dominance in the AI hardware market at a time when some investors have questioned the sustainability of hyperscaler spending. A multiyear commitment of this magnitude provides Nvidia with significant revenue visibility and cements its role as the primary architect of the AI era. For Meta, the investment is a clear signal to Wall Street that it views AI not just as a feature, but as the fundamental substrate for its future growth. The focus on personal superintelligence suggests Meta envisions a future where AI agents are deeply integrated into every interaction on its platforms, requiring a level of compute density that only a massive, multi-generational hardware build-out can provide.