The Information Commissioner’s Office (ICO) has opened a consultation on its first draft artificial intelligence (AI) guidance, urging organisations deploying the technology to be transparent and accountable.

In a blog post, the organisation, in partnership with the Alan Turing Institute, explains that the complex nature of AI means that it is often implemented without those affected properly understanding how it works, and says that “the decisions made using AI need to be properly understood by the people they impact” in order to avoid creating “doubt, uncertainty and mistrust”.

The non-departmental public body has set out four key principles for the development of AI decision-making systems: organisations should be transparent, ensuring that AI decision-making is explained to the individuals affected; be accountable, with “appropriate oversight” of AI systems; take the context of each AI project into account; and consider the ethical impacts and purpose of AI projects.

The ICO will now consult on this draft guidance until 24 January 2020, before the final version is published next year.

ICO AI: “Just because a regulator wishes something to be done does not mean that it will be”

Guidelines on cutting-edge technology such as AI are much needed. However, John Buyers, head of AI and machine learning at Osborne Clarke LLP, believes that ensuring AI is transparent is a difficult task:

“The ICO’s recently issued draft AI guidance, particularly on transparency, has in parts the flavour of King Canute’s missives to the sea. Just because a regulator wishes something to be done does not mean that it will be – particularly if the entreaty runs fundamentally counter to the structural basis of the technology itself, as is the case with deep machine learning, which, due to its extreme complexity, is inherently opaque.

“In truth there are wider issues to consider here, the first of which is that ‘accountability’ and ‘transparency’ are concepts which need to be externally benchmarked and standardised, and also accepted by society on a wider basis. Paradoxically, we are currently in a position where we seem to be demanding higher decision-making standards from our machines than from ourselves. Until we set those external and universal standards we are not going to provide industry with any clarity whatsoever in the use of AI.”

He calls for “strong sector-based AI regulations”:

“Secondly, although we understand and recognise the need for the ICO to provide guidance on AI systems which use personal data, these are only a subset of the wider applications of this technology, many of which use data that is not personal. To focus on this type of AI at the expense of the others, purely on the basis of personal data, creates a completely arbitrary regulatory distinction.

“Take a concrete example: the ICO is proposing increased regulatory oversight of automated credit scoring systems (which use personal data), but not for the AI systems at the heart of driverless cars (which don’t). Does it really make sense that we have close regulatory scrutiny of a system whose worst-case scenario is an unfair refusal of credit (which would be reconsidered on request), but none at all for the AI which guides an autonomous vehicle, where failure could conceivably kill and maim?

“The ICO’s valiant efforts in the area of AI demonstrate that there is a regulatory vacuum which needs to be filled. What we need are strong sector-based AI regulations and regulators to enforce them, each working in close partnership with Elizabeth Denham’s office, but each addressing the unique risks that distinct AI use cases can create.”


Read More: AI Christmas ahead: How artificial intelligence is quietly shaping our festive plans