OpenAI and Oracle are set to equip their new data centre in Abilene, Texas, with Nvidia's AI chips as part of their $100bn Stargate infrastructure venture, Bloomberg has reported.

The Stargate joint venture was unveiled by OpenAI, SoftBank Group, and Oracle at a White House event in January 2025. Alongside OpenAI and SoftBank, the initial equity funders include Oracle and MGX, the AI-focused investment vehicle of Abu Dhabi.

The project involves an initial investment of $100bn, with plans to increase this to $500bn over the next four years.

The Abilene data centre is projected to house 64,000 of Nvidia’s GB200 superchips by the end of 2026, with the rollout occurring in phases.  

The first 16,000 chips are expected to be installed by this summer.

The cost of the GB200 chips for the first Stargate facility could reach billions of dollars.

Although Nvidia has not disclosed the price of the GB200, CEO Jensen Huang mentioned last year that the less powerful B200 chip costs between $30,000 and $40,000 each.

In addition to the Texas site, OpenAI and SoftBank staff have explored locations in Pennsylvania, Wisconsin, and Oregon for potential future Stargate data centre campuses.

Salt Lake City, where Oracle already has cloud-computing capacity, is also a contender for expansion.

An OpenAI spokesperson confirmed to Bloomberg that the company is collaborating with Oracle on the design and development of the Abilene data centre.

Oracle will be responsible for acquiring and operating the supercomputer within the facility.

However, Oracle did not respond to Bloomberg’s request for comment, and Nvidia declined to provide any additional information.

Stargate is part of a competitive landscape in which tech companies are racing to build up capacity with Nvidia's latest chips, which are primarily used for training and deploying generative AI models.

Elon Musk’s xAI recently signed a $5bn deal with Dell Technologies for AI servers for a supercomputer in Memphis.

Meanwhile, Meta Platforms aims to achieve computing power equivalent to 600,000 Nvidia H100s by the end of 2024.

CoreWeave, an AI-focused cloud provider, reported having over 250,000 Nvidia GPUs across 32 data centres in its recent public offering documentation.