The Fourth Law of Robotics

Sigmund Freud said that we have an uncanny reaction to the inanimate. This is probably because we know that - despite pretensions and layers of philosophizing - we are nothing but recursive, self-aware, introspective, conscious machines. Special machines, no doubt, but machines all the same.

The series of James Bond movies constitutes a decades-spanning gallery of human paranoia. The villains change: communists, neo-Nazis, media moguls. But one kind of villain is a fixture in this psychodrama, in this parade of human phobias: the machine. James Bond repeatedly finds himself confronted with hideous, vicious, malicious machines and automata.

It was precisely to counter this irrational but all-pervasive wave of unease, even terror, that Isaac Asimov, the late science fiction writer (and scientist), invented the Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Many have noticed the inconsistency and virtual inapplicability of these laws taken together. First, they are not derived from any coherent worldview or background. To be properly implemented, and to avoid potentially dangerous interpretations, the robots in which they are embedded must also be equipped with a reasonably full model of the physical and human spheres of existence. Devoid of such a context, these laws soon lead to intractable paradoxes (experienced as a nervous breakdown by one of Asimov's robots). Conflicts are ruinous in automata based on recursive functions (Turing machines), as all robots must be. Gödel pointed out one such self-destructive paradox in the ostensibly comprehensive and self-consistent logical system of the "Principia Mathematica". It was enough to discredit the whole magnificent edifice constructed by Russell and Whitehead over a decade.

Some will argue against this and say that robots need not be automata in the classical, Church-Turing, sense - that they could act according to heuristic, probabilistic rules of decision making. There are many other types of functions (non-recursive) that could be incorporated in a robot. True, but then how can one guarantee full predictability of behaviour? How can one be certain that the robots will fully and always implement the three laws? Only recursive systems are predictable in principle (though their complexity sometimes makes even this infeasible).
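
To make the predictability point concrete, here is a minimal sketch in Python; the rules, names and probabilities are entirely hypothetical and not from Asimov or this essay. A deterministic (recursive) decision rule can be checked exhaustively over its inputs, yielding a guarantee; a heuristic, probabilistic rule can only be sampled, yielding at best a statistical estimate of compliance with the First Law:

    import random

    ACTIONS = ["assist", "wait", "withdraw"]
    SITUATIONS = ["human_in_danger", "human_safe", "ambiguous"]

    def deterministic_policy(situation: str) -> str:
        """A recursive (computable) rule: same input, same output, always."""
        if situation == "human_in_danger":
            return "assist"  # First Law: no harm through inaction
        return "wait"

    def probabilistic_policy(situation: str) -> str:
        """A heuristic rule: danger only *usually* triggers assistance."""
        if situation == "human_in_danger":
            return random.choices(ACTIONS, weights=[0.95, 0.04, 0.01])[0]
        return "wait"

    def violates_first_law(situation: str, action: str) -> bool:
        return situation == "human_in_danger" and action != "assist"

    # Deterministic case: an exhaustive check over all inputs is a proof.
    assert not any(violates_first_law(s, deterministic_policy(s))
                   for s in SITUATIONS)
    print("deterministic policy: verified over all situations")

    # Probabilistic case: no finite number of trials yields a guarantee.
    trials = 100_000
    violations = sum(
        violates_first_law("human_in_danger",
                           probabilistic_policy("human_in_danger"))
        for _ in range(trials)
    )
    print(f"probabilistic policy: {violations}/{trials} sampled violations")

However many trials the second loop runs, it can never rule out a future violation - which is exactly the gap between statistical confidence and predictability in principle.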

This article will deal with some basic, commonsense problems immediately discernible upon close inspection of the Laws. The next article in this series will analyse the Laws from a few vantage points: philosophy, artificial intelligence and some systems theories.

An immediate question springs to mind: how will a robot identify a human being? Surely, in an age of perfect androids constructed of organic materials, no superficial outer scanning will suffice. Structure and composition will not be sufficient factors of differentiation. There are two possibilities for settling this very practical issue: one is to endow the robot with the ability to conduct a Converse Turing Test; the other is to somehow "barcode" all robots by implanting some signalling device inside them. Both present additional difficulties.

In the second case, the robot will never be able to positively identify a human being. It will surely identify robots. This is ignoring, for discussion's sake, defects in manufacturing or loss of the implanted identification tag - if a robot were to rid itself of the tag, presumably this would fall under the "defect in...
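
This asymmetry can be made explicit in a short sketch (hypothetical Python, with illustrative names; the essay itself proposes no such code). A detected tag is a positive identification of a robot, but a missing tag proves nothing, since it is equally compatible with a human, a manufacturing defect, or a removed implant:

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Entity:
        has_tag: bool  # implanted signalling device detected?

    def classify(entity: Entity) -> Optional[str]:
        if entity.has_tag:
            return "robot"  # tag present: positive identification
        return None         # tag absent: human OR defective/de-tagged robot

    print(classify(Entity(has_tag=True)))   # -> "robot"
    print(classify(Entity(has_tag=False)))  # -> None (cannot conclude "human")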