I've suggested that it's time for a serious debate on drones, and that a good place to start is with Isaac Asimov's "Three Laws of Robotics."
Here are 10 questions that come straight out of the writings of Asimov and that can help spur the debate.
(1) Shouldn't the onus be on the companies supplying these robots to abide by rules that ensure public safety?
"You are perfectly well acquainted, I suppose, with the strict rules against the use of robots on inhabited worlds . . . . You are also aware that all positronic robots are leased, not sold; that the Corporation remains the owner and manager of each robot, and is therefore responsible for the actions of all." ("Evidence," p. 138)
(See Are Corporations Responsible for Drone Killings?)
(2) What happens to the rules when governments decide they "need" to skirt them?
"Hyper Base happens to be using several robots whose brains are not impressioned with the entire First Law of Robotics.
"Aren't impressioned? . . .
"It was on government order and there was no way of violating the secrecy . . . ." (p. "Little Lost Robot," p. 163)
(3) What happens to the rules when robot-on-robot violence is involved?
"[T]here is one time when a robot may strike a human being without breaking the First Law. Just one time." ("Evidence," p. 160)
(4) What might advanced technological society look like without rules that prohibit robots from doing harm?
"The ultimate machine is an intelligent machine and there is only one basic plot to the intelligent-machine story -- that it is created to serve man, but that it ends by dominating man. It cannot exist without threatening to supplant us, and it must therefore be destroyed or we will be." ("Machine and Robot," p. 440)
(5) What design features can we reasonably require in drones to ensure that we are safe?
"Those were the days of the first talking robots when it looked as if the use of robots on Earth would be banned. The makers were fighting that and they built good, healthy slave complexes into the damned machines." ("Runaround," p. 117)
(See Design Drones To Guarantee No Humans Will Be Injured.)
(6) Need we fear the government-industrial robot complex?
"The government was offering the company a fortune, and threatening it with antirobot legislation in case of refusal. We were stuck then, and we're badly stuck now." ("Little Lost Robot," p. 167)
(See DRONES: Need we fear the government-industrial robot complex?)
(7) Which should we fear more: robots with a compulsion to out-think humans, or humans who are afraid to second-guess the robots?
"I don't like to have Nestor 10 continue to elude us. It's bad. It must be gratifying his swollen sense of superiority. I'm afraid that his motivation is no longer simply one of following orders. It think it's becoming more a matter of sheer neurotic necessity to out-think humans." ("Little Lost Robot," p. 178)
(See Drone to Human: Leave the Thinking to Me.)
(8) What's at stake? What would it look like to put these robots to work in the cause of distributive justice, instead of in the cause of violence?
"There can be no serious conflicts on Earth, in which one group or another can seize more power than it has for what it thinks is its own good despite the harm of Mankind as a whole, while the Machines rule. If popular faith in the Machines can be destroyed to the point where they are abandoned, it will be the law of the jungle again." ("The Evitable Conflict," p. 212)
(9) Isn't the biggest obstacle to debating "The Three Laws," in fact, the way they expose the danger humans pose?
"If, by virtue of the Second Law, we can demand of any robot unlimited obedience in all respects not involving harm to a human being, then any human being, any human being, has a fearsome power over any robot, any robot. In particular, since Second Law supersedes Third Law, any human being can use the law of obedience to overcome the law of self-protection. He can order any robot to damage itself or even destroy itself for any reason, or for no reason." ("The Bicentennial Man," p. 266)
(10) Are we underestimating the need for the people who have thought the most about this issue to carry this discussion into the public square?
"It was the battle over public opinion that held the key to courts and Legislature and in the end a law passed which set up conditions under which robot-harming orders were forbidden." ("The Bicentennial Man," p. 266)
I hope these questions provide useful material for debate. A closing thought from the story "Galley Slave":
"It is only by being concerned for robots that one can
truly be concerned for twenty-first century man.
You would understand this if you were a roboticist." (p. 391)
Isaac Asimov wrote those words in 1957 . . . .
(Page references are to the 1990 Byron Preiss Visual Publications edition of Robot Visions.)
Related posts
We have taboos against certain practices despite the fact that they are convenient. The taboos reflect a recognition that the social disruptiveness outweighs the benefits in the limited subset of circumstances where the practices would pay off -- and, more importantly, outweighs even the possible desires of the most powerful people in the society.
(See Why Isn't Robotic Killing Taboo?)
Beyond recognizing the inherent contradictions of "pre-emptive violence," we must confront an urgent problem related to technology: the automation of "pre-emptive violence" -- e.g., via drone technology -- is leading to a spiral (or "loop" or "recursive process") that we may not be able to get out of.
(See When "Pre-emptive Violence" Is Automated . . . .)
With drones, people become just dots. "Bugs." People who no longer count as people . . . .
(See Drone Victims: Just Dots? Just Dirt?)