The UK’s Information Commissioner’s Office (ICO) has published guidance on how to ensure data protection compliance when deploying artificial intelligence (AI).
The ICO opened consultation on its first draft of the AI guidance in December last year, and released the final version today, the culmination of two years of research and part of the ICO’s commitment to enable good data protection practice in AI.
The guidance is intended to “mitigate the risks specifically arising from a data protection perspective, explaining how data protection principles apply to AI projects without losing sight of the benefits such projects can deliver”.
It includes both best-practice recommendations and practical guidelines on deploying the technology while avoiding the security risks, and the potential for discrimination and bias, that it can bring.
The guidance is designed to help organisations “assess the risks to rights and freedoms that AI can pose from a data protection perspective” and to implement measures to mitigate those risks, but it is not intended as a guide to the ethics or design principles of AI. It also covers accountability and governance, data minimisation, and security and compliance.
The guidance covers the “fair, lawful and transparent processing” of personal data, which it recognises is “challenging in an AI context”.
It acknowledges that AI often involves personal data being “managed and processed in unusual ways”, making it difficult to apply data protection principles.
However, while the ICO highlights that organisations will likely have to “consider a range of competing considerations and interests” when designing AI systems, organisations whose systems process personal data must comply with data protection principles and cannot ‘trade’ this requirement away.
The ICO has said that it will continue to adapt the guidelines to keep pace with the “fast moving innovation and evolution” of AI.
“AI can make every aspect of privacy management a more complicated matter”
Commenting on the development, Jo Joyce, senior associate in the commercial technology & data team at law firm Taylor Wessing, said:
“The ICO is known for being one of, if not the, best of Europe’s data protection Supervisory Authorities when it comes to the production of useful guidance on complicated topics. With its new AI guidance materials the ICO has lived up to its reputation by offering practical support to organisations working with AI applications and clear insight into its own expectations and criteria for judging compliance.
“The use of AI can make every aspect of privacy management a more complicated matter: it is often more difficult to work out who is in control of AI data, and the large data sets essential for AI training present particular issues for data minimisation. Despite these challenges, the use of AI continues to grow and expand in new and exciting ways. This creates a challenge for the ICO itself: how to communicate with the developers of AI in a way that provides meaningful support to demonstrate compliance with the core GDPR principles of fairness, lawfulness and transparency, despite the complexity of their chosen technology.
“The ICO’s decision to pitch its AI support in two directions, at privacy specialists and at AI engineers, means that this new guidance has a lot of work to do: it covers the particular challenges that AI presents for privacy professionals trying to risk-assess a rapidly changing technology, and it reinforces the essential nature of the privacy-by-design principle for those building or growing AI.
“Crucially, the ICO recognises that this guidance is just the start, and further work on tricky issues such as cloud-based AI processing is expected in 2021. Even if it is just the start, the new AI guidance is timely: more than ever, the public are aware of the serious and often novel security risks associated with AI, and they grow increasingly concerned about inbuilt bias and discriminatory outcomes arising from the use of poorly trained AI applications. Adherence to the ICO’s guidance will help AI developers to take an important step towards greater public understanding of the possibilities of AI and confidence in its benefits.”