Is Gen. Selva Right About the Dangers of Killer Robots Unleashed on Humanity?

During testimony before the Senate on Tuesday, Air Force General Paul Selva, the second-highest-ranking officer in the U.S. military, expressed concern about automated weapons that rely on artificial intelligence to determine when to fire and what, or whom, to destroy.

When Senator Gary Peters, D-Mich., asked the general about a Department of Defense directive that keeps humans in control of autonomous weapons systems, Selva warned of "keeping the ethical rules of war in place lest we unleash on humanity a set of robots that we don't know how to control."

As CNN reported:

"I don't think it's reasonable for us to put robots in charge of whether or not we take a human life," Selva told the Senate Armed Services Committee during a confirmation hearing for his reappointment as the vice chairman of the Joint Chiefs of Staff, during which a wide range of topics were covered, including North Korea, Iran and defense budget issues.

He predicted that "there will be a raucous debate in the department about whether or not we take humans out of the decision to take lethal action," but added that he was "an advocate for keeping that restriction."

Selva said humans needed to remain in the decision making process "because we take our values to war." He pointed to the laws of war and the need to consider issues like proportional and discriminate action against an enemy, something he suggested could only be done by a human.

It sounds dire, and the warning from Gen. Selva, coupled with CNN's headline describing "out-of-control killer robots," has invited a certain amount of ridicule in the Twittersphere.

Indeed, the alarm over robots brings to mind the old Saturday Night Live commercial parody featuring Sam Waterston offering insurance policies protecting against killer robots.

But as easy as it is to make fun of what sounds like science-fiction paranoia or the ramblings of someone who isn't quite stable, there are legitimate concerns when it comes to autonomous weapons.

In 2015, a group of over 20,000 robotics researchers and other scientists signed an open letter advocating a prohibition on artificially intelligent (AI) weapons technology, which the letter called "the third revolution in warfare, after gunpowder and nuclear arms."

This type of weaponry is feasible within years, if not already possible now, so what dangers could it pose? After all, we're talking about weapons like armed, self-guided drones or helicopters programmed with nothing more than a set of parameters determining whom to kill. Those parameters may not always be applied correctly, and innocent men, women, and children could lose their lives. What if an autonomous weapon malfunctioned at a military base and slaughtered thousands of troops in a tragic form of friendly fire?