What do Facebook’s 10 Year Challenge, Domino’s Points for Pies app and the early detection of diabetic retinopathy all have in common? They illustrate just how difficult it is to separate the peril of AI from its promise.

The National Health Service (NHS) is on a mission to put big data and artificial intelligence (AI) to work in safeguarding patient health without endangering the privacy and security of those patients. In support of that objective, the NHS issued a ten-point code of ethics defining the behaviour it expects from those developing and using data- and AI-driven technologies.

As the custodian of the UK’s largest database of healthcare data, the NHS supports a large number of third-party software development projects. Its new code of ethics therefore endeavours to clearly define a standard of behaviour for any vendor wishing to access and make use of the NHS’s patient database.

How the NHS AI code is designed to protect users

Key tenets of the NHS AI code cover the usual suspects, including a clear definition of the expected outcome of a given app, the use of data according to its intended purpose, and the application of open standards.

More interesting, though, is the demand for evidence of effectiveness for the intended use, and of value for money. Moreover, the new code calls for the ethical disclosure of how data is being used and by what type of AI algorithm.

The approach taken by the NHS has a head start over other vertical markets and use cases, thanks to earlier data-privacy efforts such as the EU’s General Data Protection Regulation (GDPR).

Even so, the NHS code stands as an exemplar and a seemingly achievable route to AI accountability for creators, participants and consumers. To protect the user, first protect the user’s data.

The quest for responsible AI

Technology providers have long trumpeted the importance of responsible AI. In 2016, Microsoft’s CEO, Satya Nadella, suggested that developers stop focusing on good versus evil AI and instead concentrate on the “values instilled in the people and institutions creating this technology.”

Ethical AI has come a long way since 2016. Unfortunately, much of that progress has been spurred by a series of breaches of public trust, corporate responsibility and personal privacy.

Social media giant Facebook has certainly played a leading role in illustrating the risks and responsibilities that surround the use of AI, even in supporting the most mundane of tasks.

Was the company’s recent “10 Year Challenge” simply a fun Facebook meme encouraging people to post pictures of themselves both then and now? Or was that challenge a deliberate effort by the vendor to gather facial recognition data for use by an as-yet-undisclosed Facebook partner? Facebook flatly denies having started this viral trend or benefiting from the participation of its users.

Sadly, Facebook is not alone here. Google, IBM, and even Microsoft have come under similar scrutiny. Many of these AI providers are actively pushing back against any appearance of AI-induced evil. For instance, in mid-2018, Microsoft revealed that it had turned down a number of potential ecosystem deals that might have led to unethical uses of its AI technologies.

More AI ethics gatekeepers needed

Unfortunately, these efforts do little more than point a finger at the bigger problem. We can only expect so much from those who create AI technologies. We cannot bank on these firms to control their partner ecosystems in the same way Google and Apple attempt to police their mobile app ecosystems for overt and covert malware.

Even when the measure of “evil” is as cut and dried as it is with malware, it is nearly impossible for a single gatekeeper to stop every barbarian waiting patiently at the gate.

Stated bluntly, we can’t leave ethics to the creators alone. To prevent the misuse of AI, every creator, participant and consumer in a given AI use case would need to enter into an enforceable mutual agreement that outlines the following (as a starting point):

  • Scope of participation: A list of the roles and responsibilities of all participants
  • Disclosure of interests: What does the creator stand to gain; what about the participant?
  • Expectations of confidentiality: How will user data be anonymised; what is the chain of custody for that data? (See the sketch after this list.)
  • Definition of outcomes: Full disclosure of how an AI-fed decision has been reached.

That goes for a government-sponsored programme to anonymise patient retina scans in hopes of identifying macular degeneration before symptoms even occur. And it applies equally to a vendor seeking to gamify food photography, as with the recent AI-driven Points for Pies pizza-spotting app.

In either case, it’s up to the creator, participants, and consumers to jointly establish a circle of mutual trust that’s specific to the task at hand.

Unfortunately, that’s a pipe dream, at least for now. Establishing an agreement that’s ethical, transparent, legal and enforceable for each and every pizza-spotting app is a long, long way off. Fortunately, within privacy-sensitive industries like healthcare, signposts are appearing that point toward this type of trust.

Those charged with the safekeeping of user data, be they public or private entities, should keep a close eye on the NHS’s effort to ensure the safe and effective use of both big data and AI.