US colonel backtracks on claim that AI drone killed human operator in simulation
- The officer had described a scenario in testing where a rogue AI had taken out its minder because the person was stopping it from accomplishing the mission
- He later admitted that the story was a thought experiment that came from outside the military and never really happened
Killer AI is on the minds of US Air Force leaders.
In a presentation at a professional conference, an Air Force colonel who oversees AI testing described a military AI going rogue and killing its human operator in a simulation, a scenario he now says was hypothetical.
But after reports of the talk emerged on Thursday, the colonel said that he misspoke and that the “simulation” he described was a “thought experiment” that never happened.
Speaking at a conference last week in London, Colonel Tucker “Cinco” Hamilton, head of the US Air Force’s AI Test and Operations, warned that AI-enabled technology can behave in unpredictable and dangerous ways, according to a summary posted by the Royal Aeronautical Society, which hosted the summit.
As an example, he described a simulation in which an AI-enabled drone was programmed to identify an enemy’s surface-to-air missiles (SAMs). A human operator was then supposed to sign off on any strikes.