Amazon has agreed to invest $5bn in Anthropic, with up to a further $20bn in future investment tied to specific commercial targets.

Alongside the previous commitment of $8bn, Amazon’s total potential investment in the Claude developer would rise to $33bn.


Additionally, the two firms have expanded their partnership with an arrangement that will see Anthropic secure up to 5GW of compute capacity powered by Amazon's Trainium chips, spanning present and future generations. This move is expected to support the development and deployment of Anthropic's advanced AI models.

Anthropic has also pledged to spend more than $100bn over the next decade on AWS technologies. This will cover current and future Trainium chips, as well as tens of millions of Graviton CPU cores.

Anthropic’s AI models, including the Claude family, will use these chips for training and inference workloads.

Significant Trainium3 chip capacity will be available later this year, as part of the expanded infrastructure agreement.

The companies also plan to extend international inference capabilities across Asia and Europe to support Anthropic’s growing global user base.

Currently, more than 100,000 organisations are said to use Anthropic’s Claude models through AWS, making it a widely adopted model family on Amazon’s Bedrock inference service.

Anthropic CEO and co-founder Dario Amodei said: “Our users tell us Claude is increasingly essential to how they work, and we need to build the infrastructure to keep pace with rapidly growing demand.

“Our collaboration with Amazon will allow us to continue advancing AI research while delivering Claude to our customers, including the more than 100,000 building on AWS.”

Currently, Anthropic runs most of its inference on Trainium chips.

According to Amazon, both Trainium and Graviton technologies are used by over 100,000 customers each.

Anthropic uses AWS as its main cloud and training provider for mission-critical workloads.

As part of the agreement, AWS customers now have direct access to Anthropic’s full Claude Platform from their existing AWS accounts. This eliminates the need for separate contracts, credentials, or billing processes.

The integration allows users to employ AWS’ existing access controls and monitoring systems.

Whether customers access the Claude Platform on AWS or Anthropic's models through Amazon Bedrock, both companies aim to streamline their access to AI tools.

Amazon CEO Andy Jassy said: “Anthropic’s commitment to run its large language models on AWS Trainium for the next decade reflects the progress we’ve made together on custom silicon, as we continue delivering the technology and infrastructure our customers need to build with generative AI.”

In addition, Anthropic and AWS continue to collaborate on large-scale infrastructure, such as Project Rainier, which includes one of the world’s largest AI compute clusters, equipped with roughly half a million Trainium2 chips.

Project Rainier is used to train and deploy Claude models globally and to build future versions.

Anthropic works with Amazon’s Annapurna Labs to provide feedback on Trainium chip design, influencing future iterations to meet the demands of frontier AI models.

Since the start of their partnership, Amazon and Anthropic have focused on enabling large-scale adoption of generative AI across sectors, providing technology and infrastructure to help customers build and scale AI-driven solutions.