Artificial intelligence is impacting almost every area of daily life, with applications in sectors such as healthcare, energy consumption, car safety, farming, climate change, financial risk management and numerous others.
However, there are still many grey areas when it comes to the technology, with new challenges relating to the future of work, and legal questions around culpability in situations where machines are allowed to make decisions.
In response, the European Union has released one of the world’s first government-led sets of AI ethics guidelines, designed to act as a roadmap for organisations looking to implement the technology.
In 2018, the EU appointed a group of independent experts to come up with a set of ethical guidelines for artificial intelligence (AI) development. Today’s announcement is the latest stage of this, in advance of a large-scale pilot beginning this summer.
The EU’s AI Strategy aims to increase investment in AI to at least €20bn annually over the next decade, but this brings a number of challenges in ensuring the technology is developed ethically and not at the expense of human wellbeing.
As awareness of the technology grows, there is a renewed focus on its ethical implications, with a recent study finding that half of senior business leaders surveyed had some concern about explaining to their customers how AI uses data. However, last week Google announced that an independent ethics board, set up to keep track of Google’s artificial intelligence programmes, had been shut down after less than two weeks. Some people are therefore looking to governments to play a role in monitoring this area.
The Commission, which is seeking to encourage cooperation on AI across the EU, has laid out seven key AI ethics guidelines for achieving “trustworthy” AI: human agency and oversight; technical robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental wellbeing; and accountability.
These AI ethics guidelines are designed as the next step in building trust in artificial intelligence, and will be used as a basis for establishing an international consensus on human-centric AI.
Martin Jetter, Senior Vice President and Chairman of IBM Europe, believes that other countries and regions should follow the EU’s example:
“The EU’s new Ethics Guidelines for Trustworthy AI set a global standard for efforts to advance AI that is ethical and responsible, and IBM is pleased to endorse them. They reflect many principles that IBM has been practicing for a long time. We were proud to participate in the work of the Expert Group that developed the guidelines, and we believe the thoughtful approach to creating them provides a strong example that other countries and regions should follow. We look forward to contributing actively to their implementation.”
AI ethics guidelines: What’s next?
The next phase, due to begin in summer 2019, will involve feedback from relevant stakeholders, including companies, public administrations and organisations, whom the Commission is inviting to test the draft guidelines.
Building on this review, the Commission will evaluate the outcome and propose any next steps. Ultimately, the group wants to cooperate with like-minded partners such as Japan, Canada and Singapore.
Dr Iain Brown, Head of Data Science at SAS UK & Ireland, believes that embedding ethical practices into the implementation of AI is essential at this stage:
“The EU is absolutely right that there needs to be a single framework governing the way consumer data is treated. An ethical approach to technology is definitely possible – the question is whether the tech giants want to work that way, and to what extent governments can require it of them. The future is being built by artificial intelligence – now’s the time for regulators to look at ways of embedding ethical practices into the way it’s used in the market.”
However, Colin Truran, Principal Technology Strategist at Quest, believes that establishing guidelines of this kind comes with several challenges:
“The current overarching conundrum surrounding AI ethics is really in who decides what is ‘ethical’. AI is developing in a global economy, and there is a high likelihood of data exchange between multiple AI solutions. Without clear testing guidelines, or even in most cases the ability to test, we can’t know whether a system has been intentionally corrupted or simply built on a flawed set of principles.”