OpenAI unveils plan to mitigate AI risks

OpenAI has announced it will establish a “preparedness” team to monitor AI risk.

Sarah Brady December 19 2023

ChatGPT creator OpenAI outlined a strategic approach to addressing the potential dangers of AI in a blog post yesterday (18 Dec), including the risk that its models could supply cybercriminals with information on how to construct chemical and biological weapons.

The newly established “preparedness” team will be led by MIT AI professor Aleksander Madry and will comprise AI researchers, computer scientists, national security experts, and policy professionals.

Madry, a seasoned AI researcher who leads MIT’s Center for Deployable Machine Learning and co-leads the MIT AI Policy Forum, was among the OpenAI leaders who resigned when CEO Sam Altman faced dismissal by the board in November. Madry returned to the company following Altman’s reinstatement.

The team’s mandate is to monitor evolving technologies, conduct continuous assessments and provide timely warnings in the event AI poses a danger.

The preparedness team will be dedicated to mitigating biases in AI and will incorporate a superalignment team, which explores safeguards against potential future scenarios in which AI surpasses human intelligence.

Google and Microsoft have both previously issued warnings regarding the existential threats posed by AI, likening them to the severity of pandemics or nuclear weapons.

In April, Elon Musk, then Twitter CEO and an OpenAI co-founder, called for a six-month pause on the development of AI systems more powerful than GPT-4, warning of substantial risks to society.

At the UK’s landmark AI Safety Summit in November, Prime Minister Rishi Sunak attempted to assuage fears of AI’s potential dangers, following a UK government report which claimed generative AI could be “used to assemble knowledge on physical attacks by non-state violent actors, including for chemical, biological and radiological weapons.”

The report also warned that AI could make it harder to trust online content and could increase the risk of cyberattacks by 2025.

Yet a growing faction of AI business leaders argues that concerns are exaggerated and that efforts should focus on leveraging technology for societal improvement and financial gain.

Meta’s president of global affairs and former UK deputy prime minister Nick Clegg compared the discourse surrounding AI to the “moral panic” over video games in the 80s.

OpenAI claims a balanced stance on the potential dangers of AI. Altman has acknowledged the long-term risks associated with AI while emphasising the importance of addressing current issues. He has publicly advocated for regulation to prevent the harmful aspects of AI but cautioned against measures that hinder the competitiveness of smaller companies.
