Nvidia has expanded its partnership with CoreWeave to support the construction of more than five gigawatts (GW) of AI factories by 2030.

The initiative aims to facilitate broader adoption of AI technologies globally.


As part of the agreement, Nvidia has purchased $2bn in CoreWeave Class A common stock at $87.20 per share.
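Assuming the stated figures are exact, that price implies a stake of roughly 22.9 million shares ($2,000,000,000 ÷ $87.20 ≈ 22.94 million).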

Under the collaboration, CoreWeave will develop and operate AI factories built on Nvidia’s accelerated computing platform to meet the growing demand for AI compute.

Nvidia will provide financial support to assist CoreWeave in securing land, power, and infrastructure needed to build these facilities.

Both companies plan to test and validate CoreWeave’s AI-native software and reference architectures, such as SUNK and Mission Control, with a view to integrating them into Nvidia’s reference architectures for cloud service providers and enterprise clients.


Nvidia founder and CEO Jensen Huang said: “AI is entering its next frontier and driving the largest infrastructure buildout in human history.

“CoreWeave’s deep AI factory expertise, platform software and unmatched execution velocity are recognised across the industry. Together, we’re racing to meet extraordinary demand for Nvidia AI factories — the foundation of the AI industrial revolution.”

CoreWeave will also deploy multiple generations of Nvidia hardware across its cloud platform, including the upcoming Nvidia Rubin computing architecture, Vera CPUs and BlueField storage systems.

CoreWeave co-founder, chairman and CEO Michael Intrator said: “From the very beginning, our collaboration has been guided by a simple conviction: AI succeeds when software, infrastructure and operations are designed together.

“Nvidia is the leading and most requested computing platform at every phase of AI — from pre-training to post-training — and Blackwell provides the lowest cost architecture for inference.

“This expanded collaboration underscores the strength of demand we are seeing across our customer base and the broader market signals as AI systems move into large-scale production.” 

The expanded relationship builds on CoreWeave’s existing cloud and operational expertise, with the goal of providing customers with reliable access to computing resources capable of handling demanding AI workloads.

Earlier this month, Nvidia introduced the Rubin platform, which includes six chips intended for AI supercomputing infrastructure.

The suite comprises the Vera CPU, Rubin GPU, ConnectX-9 SuperNIC, NVLink 6 Switch, BlueField-4 DPU and Spectrum-6 Ethernet Switch.