Elon Musk has announced the launch of his new AI start-up xAI, with the mission of taking on ChatGPT maker OpenAI, of which he was formerly a co-founder.

The announcement follows Musk’s repeated warnings about the dangers of AI, which he says could lead to “civilisational destruction” if left unregulated.

The Twitter owner and Tesla CEO recently joined calls for a pause in the development of AI.

Musk went live on Twitter Spaces on Wednesday (12 July) to announce xAI, which he says will be on a mission to create a “maximally curious” AI. 

Musk said his new AI company will be “pro-humanity from the standpoint that humanity is just much more interesting than not-humanity”.

“If it tried to understand the true nature of the universe, that’s actually the best thing that I can come up with from an AI safety standpoint,” he added.

xAI is a completely separate entity from X Corp, formerly known as Twitter, according to Musk.

The company states, however, that it will still be working alongside Musk-owned companies “to make progress towards our mission”.

Musk’s announcement of xAI follows the billionaire’s claims that OpenAI’s ChatGPT has a liberal bias.

The Tesla CEO said in an April interview that he was working on a “maximum truth-seeking” AI to rival ChatGPT, which he planned to call “TruthGPT”.

Musk co-founded OpenAI in 2015 but parted ways with the company in 2018, citing a conflict of interest with Tesla.

AI safety has been at the forefront of the technology industry since the boom of generative AI, with regulators around the world rushing to introduce their own rules.

The newly assembled xAI team has “led the development of some of the largest breakthroughs in the field including AlphaStar, AlphaCode, Inception, Minerva, GPT-3.5, and GPT-4,” the company said in a statement. 

The director of the Center for AI Safety, Dan Hendrycks, has been announced as an advisor to the company.

Former Google and Microsoft engineers have also been confirmed as part of the team.