Current AI systems can learn to cope with particular situations impressively well. Google DeepMind's AI can outplay a human at chess, and IBM's Project Debater can match a human in an argument. However, AI still trails the human mind in adaptability.

If the rules of the task were changed, the AI would inevitably fail to adapt. DeepMind's system wouldn't win a game of chess where the aim is no longer to checkmate the king, while Project Debater couldn't compete in a debate that requires participants to speak in rhyme. However, the Defense Advanced Research Projects Agency (DARPA), the US government agency responsible for developing emerging technologies, hopes to change that.

DARPA has announced the Science of Artificial Intelligence and Learning for Open-World Novelty (SAIL-ON) program, through which it will focus on researching and developing AI systems that can adapt their behaviour to respond to changes that occur in the open world.

“Imagine if the rules for chess were changed mid-game,” said Ted Senator, program manager at DARPA’s Defense Sciences Office. “How would an AI system know if the board had become larger, or if the object of the game was no longer to checkmate your opponent’s king but to capture all of his pawns? Or what if rooks could now move like bishops? Would the AI be able to figure out what had changed and be able to adapt to it?”

Through the SAIL-ON program, DARPA hopes to teach an AI system how to respond to change without requiring an entirely new, large dataset.

DARPA is looking for experts across AI fields, including machine learning, plan recognition, knowledge representation, anomaly detection, fault diagnosis and recovery, probabilistic programming and more.

The agency will be holding a ‘Proposers Day’ on Tuesday, 5 March, in Arlington, Virginia, to brief those interested on the objectives of the SAIL-ON program.

AI warfare: A potential application?

There has been no shortage of self-driving car accidents over the years, culminating in the first recorded death last year, involving one of Uber's autonomous vehicles. However, research shows that the vast majority of these accidents result from unpredictable human behaviour. A recent study by Axios found that of the 24 incidents involving self-driving vehicles operating in autonomous mode between 2014 and 2018, none was caused by the self-driving technology itself.

While self-driving technology continues to improve, this shows that it still struggles to react to the unpredictability of human drivers and pedestrians. The technology that DARPA hopes to develop could potentially provide a solution by training AI systems to respond more appropriately in those split-second moments.

However, DARPA, as a part of the United States Department of Defense, undoubtedly has a more military application in mind.

The agency envisions a system that runs the military's OODA loop: observe the situation, orient by processing what is observed, decide on the best response, and then act. This would allow the system to respond to the unpredictable nature of war zones.
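The OODA cycle described above can be sketched as a simple control loop. This is an illustrative toy, not anything from DARPA's program: the "world" is just a number the agent steers toward a target, and all names here are hypothetical.

```python
# Toy sketch of the OODA loop: observe, orient, decide, act.
# The world is a dict holding one value the agent nudges toward a target.

def ooda_loop(world, target, steps):
    for _ in range(steps):
        # Observe: sense the current state of the world.
        observation = world["value"]
        # Orient: interpret the observation relative to the goal.
        error = target - observation
        # Decide: choose the best available response.
        action = 1 if error > 0 else -1 if error < 0 else 0
        # Act: carry out the decision, changing the world.
        world["value"] += action
    return world["value"]

print(ooda_loop({"value": 0}, 5, 10))  # prints 5
```

Each pass through the loop closes the observe-to-act cycle before sensing again, which is what lets the agent track a world that changes between decisions.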

“Thanks to massive amounts of data that include rare-event experiences collected from tens of millions of autonomous miles, self-driving technology is coming into its own. But the available data is specific to generally well-defined environments with known rules of the road,” said Senator.


“It wouldn’t be practical to try to generate a similar data set of millions of self-driving miles for military ground systems that travel off-road, in hostile environments and constantly face novel conditions with high stakes, let alone for autonomous military systems operating in the air and at sea.”

“The first thing an AI system has to do is recognize the world has changed. The second thing it needs to do is characterize how the world changed. The third thing it needs to do is adapt its response appropriately,” Senator said. “The fourth thing, once it learns to adapt, is for it to update its model of the world.”
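The four steps Senator lists can be sketched as a single decision function. This is a hedged illustration only: the range-check used to detect novelty is a stand-in anomaly detector, and none of the names below come from DARPA's work.

```python
# Sketch of Senator's four steps: (1) recognize the world changed,
# (2) characterize the change, (3) adapt the response, (4) update the
# world model. The model is just an expected range of observations.

def handle_novelty(observation, model):
    # 1. Recognize change: the observation falls outside what the
    #    current model expects.
    changed = not (model["low"] <= observation <= model["high"])
    if not changed:
        return model, "act-as-usual"
    # 2. Characterize the change: which expectation was violated?
    direction = "above" if observation > model["high"] else "below"
    # 3. Adapt the response appropriately (here, switch to a cautious mode).
    response = f"cautious-{direction}"
    # 4. Update the model of the world to absorb the new observation.
    model = dict(model,
                 low=min(model["low"], observation),
                 high=max(model["high"], observation))
    return model, response

model = {"low": 0, "high": 10}
model, response = handle_novelty(15, model)
print(response)  # prints cautious-above
```

Note that step 4 runs only after the response is chosen, matching Senator's ordering: adapt first, then fold the new experience into the model.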