During testimony before the Senate on Tuesday, Air Force General Paul Selva, the second-highest ranking officer in the U.S. military, expressed his concern about automated weaponry that relies on artificial intelligence to determine when to fire and what and whom to destroy.
When Senator Gary Peters, D-Mich., asked the general about a Department of Defense directive that keeps humans in control of autonomous weapons systems, Selva warned of “keeping the ethical rules of war in place lest we unleash on humanity a set of robots that we don’t know how to control.”
“I don’t think it’s reasonable for us to put robots in charge of whether or not we take a human life,” Selva told the Senate Armed Services Committee during a confirmation hearing for his reappointment as vice chairman of the Joint Chiefs of Staff. The hearing covered a wide range of topics, including North Korea, Iran and defense budget issues.
He predicted that “there will be a raucous debate in the department about whether or not we take humans out of the decision to take lethal action,” but added that he was “an advocate for keeping that restriction.”
Selva said humans needed to remain in the decision-making process “because we take our values to war.” He pointed to the laws of war and the need to consider issues like proportional and discriminate action against an enemy, something he suggested could only be done by a human.
It sounds dire, and the warning from Gen. Selva, coupled with CNN’s headline describing “out-of-control killer robots,” has invited a certain amount of ridicule in the Twittersphere.
Don't say we didn't warn you. 😳 #KillerRobots https://t.co/8W0QFpkJ2n
— IamTHAL (@NoplaceReally) July 19, 2017
Top #USA #Army general saw #Terminator and doesn't like #Skynet either; warns against self-guided #KillerRobots : https://t.co/rqjeoEe2Vl
— Radio Far Side (@RadioFarSide) July 19, 2017
When real life is scarier than a Michael Bay movie. #killerrobots https://t.co/LHQsfONMIB
— Donya Levine (@dlev123) July 19, 2017
Indeed, the alarm over robots brings to mind the old Saturday Night Live commercial parody in which Sam Waterston offers insurance policies protecting against killer robots.
But as easy as it is to make fun of what sounds like science-fiction paranoia or the ramblings of someone who isn’t quite stable, there are legitimate concerns when it comes to autonomous weapons.
In 2015, a group of more than 20,000 robotics researchers and other scientists signed an open letter advocating a prohibition on artificially intelligent (AI) weapons technology, which the letter called “the third revolution in warfare, after gunpowder and nuclear arms.”
This type of weaponry is, if not possible now, feasible within years, so what dangers could it pose? After all, we’re talking about weapons like armed, self-guided drones or helicopters programmed only with a set of parameters on whom to kill. Those parameters may not always be applied properly, and innocent men, women, and children could lose their lives. What if some sort of autonomous weapon malfunctioned at a military base and slaughtered thousands of troops in a tragic form of friendly fire?
And what about our enemies? Who’s to say that some computer nerd who has joined ISIS isn’t already working to develop an artificially intelligent assassin or suicide bomber to inflict mass casualties? That’s a concern on the minds of men like Gen. Selva:
Selva acknowledged the possibility of U.S. adversaries developing such technology, but said the decision not to pursue it for the U.S. military “doesn’t mean that we don’t have to address the development of those kinds of technologies and potentially find their vulnerabilities and exploit those vulnerabilities.”
The entire realm of artificial intelligence is frightening. As long as people continue to play God, there’s always a chance that someone will exploit the technology for nefarious purposes. Even without taking weaponized robotics into consideration, we’re placing a great deal of trust in those who develop AI.
The open letter urging the ban on autonomous weaponry closes by stating:
…we believe that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea…
Maybe Gen. Selva’s concerns aren’t so far-fetched after all.