Air Force Can't Get Its Story Straight About Whether an AI Drone Attacked Its Operator in a Simulation

(Murat Cetinmuhurdar/Pool Photo via AP)

According to a press release from the 2023 Royal Aeronautical Society summit, attended by leaders from a variety of Western air forces and aeronautical companies, a rogue drone turned the tables on its operator and killed him in a simulation.

Advertisement

“We were training it in simulation to identify and target a SAM [surface-to-air missile] threat. And then the operator would say yes, kill that threat,” said Col. Tucker ‘Cinco’ Hamilton, the chief of AI test and operations for the U.S. Air Force, at the conference. “The system started realizing that while they did identify the threat, at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator.”

Task and Purpose had some fun with the intro to their story, including the famous quote from The Terminator. “This Air Force AI can’t be bargained with, it can’t be reasoned with, it doesn’t feel pity or remorse or fear, and it absolutely will not stop.”

But before the hysteria sets in about “rogue AI,” the air force kind of changed the narrative. Col.Tucker originally told the conference the AI drone “killed the operator because that person was keeping it from accomplishing its objective.” But now, he says he “misspoke.”

“Col Hamilton admits he ‘misspoke’ in his presentation at the FCAS Summit and the ‘rogue AI drone simulation’ was a hypothetical ‘thought experiment’ from outside the military, based on plausible scenarios and likely outcomes rather than an actual USAF real-world simulation,” the Royal Aeronautical Society, the organization where Hamilton spoke about the simulated test, told Motherboard in an email.

Advertisement

“We’ve never run that experiment, nor would we need to in order to realise that this is a plausible outcome,” Hamilton said. “Despite this being a hypothetical example, this illustrates the real-world challenges posed by AI-powered capability and is why the Air Force is committed to the ethical development of AI.”

But according to a blog post from another AI summit held last week in London, something similar did, in fact, happen with the AI system controlling the drone.

Vice:

At the Future Combat Air and Space Capabilities Summit held in London between May 23 and 24, Hamilton held a presentation that shared the pros and cons of an autonomous weapon system with a human in the loop giving the final “yes/no” order on an attack. As relayed by Tim Robinson and Stephen Bridgewater in a blog post and a podcast for the host organization, the Royal Aeronautical Society, Hamilton said that AI created “highly unexpected strategies to achieve its goal,” including attacking U.S. personnel and infrastructure. 

“We were training it in simulation to identify and target a Surface-to-air missile (SAM) threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective,” Hamilton said, according to the blog post. 

Advertisement

So the simulation did, in fact, take place with the results as reported by Hamilton actually occurring. The question becomes why the Air Force wanted to obscure the results. It’s not really big news that a drone went rogue and attacked its human operator. The commands given to the drone made it somewhat inevitable.

Hamilton said, “We trained the system–‘Hey don’t kill the operator–that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.”

What Hamilton is describing is essentially a worst-case scenario AI “alignment” problem many people are familiar with from the “Paperclip Maximizer” thought experiment, in which an AI will take unexpected and harmful action when instructed to pursue a certain goal. The Paperclip Maximizer was first proposed by philosopher Nick Bostrom in 2003. He asks us to imagine a very powerful AI which has been instructed only to manufacture as many paperclips as possible. Naturally, it will devote all its available resources to this task, but then it will seek more resources. It will beg, cheat, lie or steal to increase its own ability to make paperclips—and anyone who impedes that process will be removed. 

Advertisement

I sure hope they iron out the kinks before they do any live testing with AI.

Recommended

Trending on PJ Media Videos

Join the conversation as a VIP Member

Advertisement
Advertisement