An international group of AI experts and scientists has come together to release a new set of guidelines for the safe development of AI products, as countries around the globe scramble to release their own forward-thinking frameworks.

The World Ethical Data Foundation has released an open letter that includes a checklist of 84 questions. The group believes that if developers answer all of them at the start of a project, safety will be ensured.

The checklist includes questions on whether users are fully aware that they are interacting with AI, as well as considerations around global data protection laws and the data used to train the model.

Currently, the global group has over 25,000 AI development experts from a range of tech companies, including Meta and Google.

The open letter has received signatures from hundreds of experts in the field. 

The full list is split into three chapters, featuring questions for the whole team, the testers and the developers.


The World Ethical Data Foundation is similar to other non-profit groups fighting for the safe development of AI. 

The UK wants to be a global leader for AI regulation

AI has been a point of discussion throughout the tech world over the past couple of years – with generative applications like OpenAI’s ChatGPT and Google’s Bard receiving mainstream attention. 

UK Prime Minister Rishi Sunak announced the launch of the country's AI taskforce last month. Sunak appointed AI investor Ian Hogarth to lead the group, which he claims will help "better understand the risks" associated with such systems.

The taskforce of experts follows Sunak’s calls for the UK to be a leader in AI development.

The prime minister claimed that he didn’t want the UK to just be the “intellectual home” of AI, but also the “geographical home of global AI safety regulation”.

Talking at London Tech Week last month, Sunak said: “Already we’ve seen AI help the paralysed to walk and discover superbug-killing antibiotics – and that’s just the beginning.

“The possibilities are extraordinary. But we must – and we will – do it safely.”