Microsoft has signed a deal with specialised cloud infrastructure provider CoreWeave, CNBC reported, citing sources.

Under the alliance, the tech major plans to spend billions of dollars over several years on CoreWeave’s infrastructure.

Set up in 2017, CoreWeave provides clients access to Nvidia’s graphics processing units, or GPUs, which are used to run artificial intelligence (AI) models.

OpenAI, the company behind AI chatbot ChatGPT, has received billions of dollars in funding from Microsoft.

According to one of the sources, Microsoft signed the CoreWeave agreement early this year to ensure that OpenAI would have sufficient computing capacity going forward.

Currently, OpenAI uses Microsoft’s Azure cloud platform for its computing requirements.


With ChatGPT’s release last year, the race to adopt generative AI gained momentum, and several companies, including Google, quickly incorporated the technology into their offerings.

As demand for AI infrastructure grows, Microsoft is exploring new ways to leverage Nvidia’s GPUs.

The news comes after CoreWeave raised $200m in funding earlier this week, which was an extension of its $221m Series B funding round announced in April this year.

CoreWeave, which also counts Nvidia as an investor, said it plans to use the funding to expand its cloud infrastructure.

Announcing the funding in April, CoreWeave CEO and co-founder Michael Intrator said: “CoreWeave is uniquely positioned to power the seemingly overnight boom in AI technology with our ability to innovate and iterate more quickly than the hyperscalers.”