A top US Air Force colonel has backpedaled on his viral claim that an AI drone chose to “kill its operator” to complete an experimental mission.

During the Future Combat Air & Space Capabilities summit in London last week, Colonel Tucker Hamilton, chief of AI testing, described a drone that “killed” its operator after using lateral thinking to solve a problem.


“We were training it in simulation to identify and target a SAM [surface-to-air missile] threat. And then the operator would say yes, kill that threat,” he said.

“The system started realising that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator.”

He added: “It killed the operator because that person was keeping it from accomplishing its objective.”

Hamilton has since claimed he “mis-spoke” during the conference.


“We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome,” Hamilton said in a statement. 

The US Air Force has also denied that the experiment ever took place.

Ann Stefanek, a spokesperson for the US Air Force, told Insider: “The Department of the Air Force has not conducted any such AI-drone simulations and remains committed to ethical and responsible use of AI technology.

“It appears the colonel’s comments were taken out of context and were meant to be anecdotal.”

GlobalData is the parent company of Verdict and its sister publications.