Uber is using Amazon’s custom chips to boost computing performance and train artificial intelligence (AI) models on Amazon Web Services (AWS).

In a statement, Amazon said that the ride-hailing ​firm is extending deployment of AWS Graviton processors across more of its Trip Serving Zones, the infrastructure that underpins every ride and delivery request on its platform.


The plan involves shifting these workloads to AWS Graviton4 chips, helping Uber scale during demand surges while reducing energy use, lowering latency and managing costs.

According to Amazon, the performance of Graviton supports real-time calculations used to match riders with drivers more quickly, without “compromising reliability, availability, or security”.

Uber engineering vice-president Kamran Zargahi said: “Uber operates at a scale where milliseconds matter.

“Moving more Trip Serving workloads to AWS gives us the flexibility to match riders and drivers faster and handle delivery demand spikes without disruption.”

Uber has also begun pilot projects to train selected AI models on AWS Trainium chips, using Trainium3 to train some of the models that support its apps.

The models process data from billions of trips to help assign drivers or couriers, estimate arrival times, and recommend delivery options.

With the new chips, the models are expected to improve matching speed, ETA accuracy, and personalisation as they learn from more trips.

AWS North America vice-president and managing director Rich Geraffo said: “Uber is one of the most demanding real-time applications in the world, and we’re proud to be an important part of the infrastructure powering their global operations.

“We’re helping Uber deliver the reliability hundreds of millions of people count on today—and the AI-powered experiences that will define ride-sharing and on-demand delivery tomorrow.”