OpenAI has formed a new team to minimise the risks posed by artificial intelligence (AI) as the technology continues to improve.

Called Preparedness, the team will monitor, predict, and try to protect against “catastrophic risks” such as nuclear threats, said ChatGPT creator OpenAI in a blog post.

The team will also work to counter other dangers such as autonomous replication and adaptation (ARA), cybersecurity threats, chemical, biological and radiological attacks, as well as targeted persuasion.

Aleksander Madry, director of MIT’s Center for Deployable Machine Learning, will lead the Preparedness team.

OpenAI said: “We believe that frontier AI models, which will exceed the capabilities currently present in the most advanced existing models, have the potential to benefit all of humanity. But they also pose increasingly severe risks.”

The Preparedness team will also develop and maintain a risk-informed development policy (RDP).

The RDP aims to lay out OpenAI’s approach to developing protective measures, conducting thorough evaluations and monitoring of frontier AI model capabilities, and setting up a governance framework for supervision and accountability throughout the development process.

OpenAI said: “The RDP is meant to complement and extend our existing risk mitigation work, which contributes to the safety and alignment of new, highly capable systems, both before and after deployment.”

In May 2023, OpenAI CEO Sam Altman, along with other AI experts, warned that the technology could cause humanity’s extinction.

The group of leading CEOs, engineers and researchers released a 22-word statement that read: “Mitigating the risk of extinction from artificial intelligence should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”