The UN met last year to discuss killer robots and AI in warfare, and Elon Musk and Google’s DeepMind were among the signatories pledging never to develop such weapons. In February, Musk left the board of OpenAI, the research group he co-founded to explore the ethics of artificial intelligence, citing a potential conflict of interest as Tesla becomes more focused on AI. But what are the pros and cons of AI as a tool for peace?

As the debate over an international ban on killer robots continues, how close is AI to replacing the decision-makers, the politicians who choose between war and peace? Going one step further, could an AI engineer world peace, and could we ever trust it enough to let it?

Countries and governments push the button on warfare, and, in most parts of the world, those decisions rest with elected politicians, our leaders in defence. Trust in politicians has often wavered, and even when people do trust those in charge, there is always a minority who wonder why. Politicians frequently choose the wrong words or initiate conflicts, and politics itself breeds division and controversy.

The philosophers’ dream of AI

So does artificial intelligence offer the leap in capability that philosophers have only dreamt of: a voice that is neutral yet intelligent, one that stands to gain nothing from either war or peace?

An AI peacekeeper would imitate the kings, queens and emperors of fictionalised history who had nothing to lose and nothing more to gain: educated to the highest degree, yet facing no personal consequences.

But as Albie Attias, head of business development at progressive technology provider Evaris, points out, such a reality remains a distant prospect.

“We are a very long way away from such a scenario [of AI engineering world peace], as AI is very much still in its infancy,” he said.

“There is also a lot of misrepresentation and marketing hype from big companies who claim to be working with AI, however, much of the work they are actually doing would be better described as machine learning.”

AI politicians and diplomats

Of course, an AI that could replace politicians and build world peace is a long way off. At the heart of any AI system is data, but this can ultimately restrict its capabilities, particularly given the issues with bias inherent in many current AI technologies.

And yet, if an AI could be programmed like a game to maximise an outcome, for example life and wealth, it could, in theory at least, be used to resolve real-life political conflicts. Swayed neither by alliances nor by cultural alignments of its own, could AI be trusted to provide the fairest solution?

AI could be programmed to ignore history and the particular plights and needs of a culture, and that is a fair criticism. But a human diplomat can be swayed by rhetoric and delivery, while the AI would be concerned only with the facts. It would measure metrics, weighing destruction, long-term economic loss and instability, in choosing one outcome over others.
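To make that idea concrete, here is a minimal sketch of how such metric-based comparison might look in code. Everything in it is an invented illustration: the candidate outcomes, the metric names and the weights are assumptions, not anything a real system uses.

```python
# Hypothetical sketch: scoring candidate peace settlements purely on
# measurable outcomes, as the article imagines an AI mediator might.
# All outcome names, metrics and weights here are illustrative.

# Each candidate outcome is described only by facts, not rhetoric.
outcomes = {
    "ceasefire":  {"lives_lost": 100,  "economic_loss": 2.0, "instability": 0.6},
    "partition":  {"lives_lost": 500,  "economic_loss": 5.0, "instability": 0.3},
    "status_quo": {"lives_lost": 2000, "economic_loss": 9.0, "instability": 0.8},
}

# Weights reflect how strongly each harm counts against an outcome.
weights = {"lives_lost": 1.0, "economic_loss": 50.0, "instability": 1000.0}

def cost(metrics):
    """Total weighted harm of one outcome; lower is better."""
    return sum(weights[k] * v for k, v in metrics.items())

# The "fairest" choice, on this model, is simply the least harmful one.
best = min(outcomes, key=lambda name: cost(outcomes[name]))
print(best)  # → ceasefire
```

The sketch also exposes the criticism above: whoever sets the weights smuggles their own values into the "neutral" machine.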

“Governments across the world are making their first tentative steps into AI by setting up bodies and defining policies and incentives to help with progress, but we are a long way from seeing anything that could replace a human in making social, economic and political decisions,” said Attias.


“To progress AI to a level that it could perform in place of politicians and diplomats, for instance, we would need to evolve and understand the capabilities of AI on a wide scale so it stands a chance of making any meaningful or significant impact socially, economically or politically.”

Pros and cons of AI: How would an AI politician work?

What is needed is a program that can separate fact from opinion, then form its own predictions, in combination with a gaming capability that knows the rules of engagement and maximises certain outcomes.
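The "gaming capability" described above is essentially game-tree search. The toy sketch below treats a negotiation as a tiny two-party game and picks the opening move with the best guaranteed outcome; the moves, payoffs and tree structure are all invented for illustration.

```python
# Illustrative sketch only: a negotiation as a tiny two-party game
# tree, as the article's "gaming capability" suggests. Moves and
# payoffs (joint well-being scores) are invented assumptions.

# Each node is either a payoff (a number) or a dict of possible moves.
game_tree = {
    "offer_truce": {"accept": 10, "counter": {"concede": 7, "walk_away": 1}},
    "escalate":    {"retaliate": 0, "back_down": 4},
}

def best_value(node, maximising=True):
    """Minimax over the tree: one side maximises the score while the
    other side is assumed to respond with our worst case."""
    if isinstance(node, (int, float)):
        return node
    values = [best_value(child, not maximising) for child in node.values()]
    return max(values) if maximising else min(values)

# Choose the opening move whose worst-case outcome is still best.
opening = max(game_tree, key=lambda m: best_value(game_tree[m], maximising=False))
print(opening)  # → offer_truce
```

This is the "knows the rules of engagement" half of the problem; separating fact from opinion in the first place is the far harder part, and nothing in this sketch addresses it.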

Transparency is crucial here: we need to know how the AI reaches its conclusions because we need to believe in it. But following an AI program’s reasoning might be even more difficult than understanding the motives behind, say, George Bush’s invasion of Iraq. The reasons we don’t trust some human diplomats or politicians might be the same reasons that we do trust others.

And yet the ongoing question is whether such systems would prioritise peace and maximise everyone’s well-being over their own objectives.

Elon Musk famously said that AI posed the biggest existential threat to mankind. While killer robots and AI in warfare are a threat to human life, Musk was hinting at the bigger, more philosophical implications of AI on humankind. The idea of an ‘existential’ threat brings to mind a person’s relationship with other people, with his or her tools, and with more metaphysical concepts, such as trust, cooperation and control.

The creation of an AI politician would be putting an AI program, which is essentially an advanced machine or computer, in a position of power above people, and trusting it to have greater intelligence and ability than humans. In doing so we would be giving up our own agency, our self-belief, and putting aside human qualities such as instinct and intuition in favour of data and prediction. The AI could maximise the number of human lives saved or economic wealth produced, but might not grasp the importance of the human journey to achieve those results.

The nuances of debate, and even the artifice and error of human politicians and diplomats that we see in exchanges between the media and the White House or within Parliament, are how we humans understand one another and build rapport. Disputes, discussion and politics can lead to consensus, and AI could be a useful tool within that process. But, much as has been said about killer robots, the final decision should always lie with people who know what it means to breathe and bleed.