Microsoft was an early backer of OpenAI, investing $1bn in the startup in 2019. Now, with reports that it is training an AI model bigger than Google Gemini, will Microsoft’s investment in AI finally pay off? 

News of Microsoft’s new in-house AI model, named MAI-1, was first reported by The Information on 6 May, citing people familiar with the matter. MAI-1 is reported to have 500 billion parameters, and its development is overseen by Mustafa Suleyman, the former CEO of AI startup Inflection and a cofounder of Google DeepMind. A use case for MAI-1 has not yet been publicly disclosed.

Google’s open Gemma models, in comparison, come in two-billion and seven-billion parameter versions.

While MAI-1 is not a model carried over from Inflection, The Information reported that its training data may build on work from the startup.

Microsoft hired Suleyman as CEO of its new Microsoft AI division in March 2024, part of a push to secure top AI talent. Suleyman has since led all of Microsoft’s consumer AI products, including Copilot and Bing. Prior to joining Microsoft, he worked at DeepMind for more than nine years.

AI talent is becoming increasingly scarce as companies race to develop their own models, leading to talent poaching and cases of acqui-hiring, the process of acquiring a company primarily for its workforce. 


Beyond hiring key AI players, Microsoft got its head start by investing heavily in OpenAI, according to a report from research and analysis company GlobalData.

In its 2024 executive briefing on AI, GlobalData reported that Microsoft’s extended partnership with, and $10bn total investment in, OpenAI gave it a significant head start in the generative AI space.

This relationship with OpenAI has enabled Microsoft to build OpenAI’s GPT large language models (LLMs) into products such as Bing and Edge. Microsoft also provides users access to LLMs and other AI models through its Azure service.

However, The Information’s report stated that MAI-1 is also intended to compete with OpenAI, potentially signalling a future in which Microsoft monetises an independent AI model without the need for GPT.

Jeff Watkins, chief product and technology officer at software company xDesign, questioned whether Microsoft would really seek total independence from GPT.

“At 500 billion parameters, this will be Microsoft’s largest home-grown model, but still behind OpenAI’s GPT-4, which reportedly has more than one trillion parameters,” he stated.

“One might question why they’re doing this, given their existing alliances, and there are several potential reasons, firstly being the ongoing regulatory challenges with their potential over-investment into OpenAI,” Watkins explained, referring to an in-depth review of Microsoft’s OpenAI investment announced by the EU in January.

“An organisation the size of Microsoft is likely nervous of not having full ownership of their destiny in this field,” Watkins continued. “This move means that Microsoft has their own skin in the game, in-house, which will remain their own IP, no matter the outcomes of their dealings with OpenAI.”

Watkins maintained that, if true, the release of MAI-1 would be an impressive step for Microsoft in AI. 

“But will this pay off for them? I think it will, and sooner than you’d imagine,” he concluded. “It won’t necessarily be through consumer use directly, but through integration into their products and services, allowing them to specialise the model (or models) to meet their needs, unconstrained.”

Big Tech companies have already begun attempts to establish independence in AI chips, with Meta, Google and Intel all releasing AI chips during the same week in early April. 

Microsoft itself released two AI-specific chips in November 2023, the Azure Maia AI Accelerator and the Azure Cobalt CPU. They were the company’s first custom AI chips, designed in-house.

Announcing the chips, Microsoft did not state how they compared with Nvidia’s leading H100 chips, currently used in the training of many frontier AI models. However, it did specify a desire to diversify its chip supply chain.

The BBC reported in 2023 that ChatGPT required around 10,000 of Nvidia’s GPUs to run and answer user prompts. 

GlobalData previously forecast that the global AI race would pivot to become an AI chips arms race, suggesting that more AI developers would seek to develop their own hardware to prevent dependence on Nvidia and future chip shortages. 

Rena Bhattacharyya, chief analyst and practice lead at GlobalData, explained MAI-1’s potential impact on Microsoft’s position in the market.

“[MAI-1] positions Microsoft as an innovator in the generative AI space and ensures that the hyperscale cloud provider remains a front runner in what is becoming an increasingly competitive market,” she said. “Microsoft is wise to offer customers a range of model options since, in the future, businesses will embrace a multi-model environment to optimise performance and cost.”

Bhattacharyya explained that a large AI model like MAI-1 would complement Microsoft’s existing small-model offerings, such as its Phi-3 family of models.

“By offering a mix of large language models and small language models, Microsoft ensures it can meet a range of customer requirements and thereby reduce customer churn,” she said.