Amazon Web Services (AWS) and OpenAI have signed a multi-year agreement under which AWS will supply the infrastructure underpinning the ChatGPT developer’s primary AI workloads.

The deal, valued at $38bn with planned expansion over seven years, grants OpenAI access to AWS compute infrastructure comprising hundreds of thousands of Nvidia graphics processing units (GPUs), with the ability to scale to tens of millions of central processing units (CPUs).


OpenAI will begin using this capacity immediately, with all targeted capacity set to be in place before the end of 2026 and the option to scale further in 2027 and beyond.

OpenAI co-founder and CEO Sam Altman said: “Our partnership with AWS strengthens the broad compute ecosystem that will power this next era and bring advanced AI to everyone.”

The technical framework AWS is deploying includes clusters of Nvidia GB200 and GB300 GPUs provisioned through Amazon EC2 UltraServers on a unified network layer.

This setup supports low-latency communication between systems and is designed to run both inference for products such as ChatGPT and training for emerging model architectures.


AWS, which has experience managing clusters exceeding 500,000 chips, will draw on that expertise to provide secure and reliable infrastructure at scale.

This collaboration builds on previous work between the two companies, including the recent addition of OpenAI’s open weight foundation models to Amazon Bedrock.

These models are being used by a range of customers on AWS, with OpenAI now among the most utilised public model providers on the platform.

Organisations such as Bystreet, Peloton, Comscore, Triomics, Thomson Reuters, and Verana Health are among those incorporating OpenAI models into agentic workflows, code generation, scientific research, and mathematical processing.

AWS CEO Matt Garman said: “As OpenAI continues to push the boundaries of what’s possible, AWS’s best-in-class infrastructure will serve as a backbone for their AI ambitions.

“The breadth and immediate availability of optimised compute demonstrates why AWS is uniquely positioned to support OpenAI’s vast AI workloads.”

In a separate development, Verizon Business has announced an agreement with AWS to deliver fibre connectivity between AWS data centre regions.

The Verizon AI Connect solution will provide resilient network paths for AWS over Verizon’s fibre network, enhancing the performance and reliability of AI workloads.

The company said that it would construct new long-haul fibre pathways to support advanced AI applications running at scale on AWS.

Verizon Business chief product officer and senior vice president Scott Lawrence said: “AI will be essential to the future of business and society, driving innovation that demands a network to match.

“This deal with Amazon demonstrates our continued commitment to meet the growing demands of AI workloads for the businesses and developers building our future.”