Friday, July 19, 2013

Design Drones To Guarantee No Humans Will Be Injured

[Part of the series: 10 Questions to Spur the Drone Debate]

"Those were the days of the first talking robots when it looked as if the use of robots on Earth would be banned. The makers were fighting that and they built good, healthy slave complexes into the damned machines." (Isaac Asimov, "Runaround," p. 117)

What design features can we reasonably require in drones to assure that we are safe?

I've proposed that this issue be debated, and that the robot literature created by Isaac Asimov has already posed numerous big questions that can guide the debate.

This week, an unmanned fighter jet used as a target crashed in Florida. (It was thus a "drone," though not the same type being used in U.S. military and CIA operations in the Mideast.)

I was struck by this statement in the news coverage: "An explosive device in the plane destroys it if it becomes uncontrollable," according to an Air Force fact sheet. (See Air Force drone crash closes remote Florida highway.)

In other words, steps were taken to render the target drone "safe." Those steps may have been inadequate, but the important thing is that a clear principle has been expressed. Design features have been put in place. When the drone becomes unsafe, it is sacrificed.

What design features would need to be in place in order to make drones really safe? What design features would need to be in place in order to assure that drones obey the 1st Law of Robotics ("A robot may not injure a human being or, through inaction, allow a human being to come to harm")?

It goes without saying that drones won't be safe until they stop firing missiles and killing people. Regulations prohibiting people from putting weapons on drones will almost certainly arrive in the near future.

But let's ask the deeper question. We all know that regulations on human behavior are not the ultimate answer, because people break rules. Shouldn't the drones, themselves, play a role in making sure that they are not abused? Why can't drones be designed to know whether they, themselves, have been armed with weapons, and, if armed with weapons, cease functioning (or self-destruct)? Why can't we have an Air Force fact sheet that says, "An explosive device in the drone destroys it if it detects the presence of weapons"?
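To make the idea concrete: in software terms, this is an interlock. Here is a minimal, purely illustrative sketch, in which the `armed_sensor` flag stands in for hypothetical tamper-proof hardware that detects an attached weapon; a real design would of course need far more than a software check.

```python
from dataclasses import dataclass

@dataclass
class Drone:
    """Toy model of a drone controller with a weapons-detection interlock.

    `armed_sensor` is a stand-in for hypothetical hardware that reports
    whether a weapon has been mounted; it is not a real drone API.
    """
    armed_sensor: bool = False
    operational: bool = True

    def preflight_interlock(self) -> bool:
        """Disable the drone if the weapons sensor trips.

        Mirrors the principle in the Air Force fact sheet, redirected:
        instead of destroying a drone that becomes uncontrollable,
        shut down a drone that detects it has been armed.
        """
        if self.armed_sensor:
            self.operational = False  # cease functioning; a stricter design might self-destruct
        return self.operational

# An unarmed drone stays operational; an armed one refuses to fly.
unarmed = Drone(armed_sensor=False)
armed = Drone(armed_sensor=True)
```

The point of the sketch is only that the check is trivial once the will exists to require it; the hard part is mandating the sensor and making the interlock tamper-proof.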

The answer, of course, is that drones can be designed that way. All that's missing is the will to do so.

Let's start debating the drones now. (Here are nine more questions to guide the debate.)

(Page references are to the 1990 Byron Preiss Visual Publications edition of Robot Visions.)