“I, Robot – I, Criminal”—When Science Fiction Becomes Reality: Legal Liability of AI Robots Committing Criminal Offenses
Can society impose criminal liability upon robots? The technological world has changed rapidly. Simple human activities are being replaced by robots. As long as humanity used robots as mere tools, there was no real difference between robots and screwdrivers, cars, or telephones. When robots became sophisticated, we used to say that robots “think” for us. The problem began when robots evolved from “thinking” machines into thinking machines (without quotation marks)—or Artificial Intelligence Robots (AI Robots). Could they become dangerous? Unfortunately, they already are.

In 1950, Isaac Asimov set down three fundamental laws of robotics in his science fiction masterpiece “I, Robot”: (1) a robot may not injure a human being or, through inaction, allow a human being to come to harm; (2) a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; and (3) a robot must protect its own existence, as long as such protection does not conflict with the First or Second Laws.1 These three fundamental laws are obviously contradictory.2 What if a man orders a robot to hurt another person for that other person’s own good? What if the
* Associate Professor, Faculty of Law, Ono Academic College.
1 ISAAC ASIMOV, I, ROBOT (1950).
2 Isaac Asimov wrote in his introduction to THE REST OF ROBOTS (1964) that “[t]here was just enough ambiguity in the Three Laws to provide the conflicts and uncertainties required for new stories, and, to my great relief, it seemed always to be possible to think up a new angle out of the 61 words of the Three Laws.”
SYRACUSE SCIENCE & TECHNOLOGY LAW REPORTER
robot is in police service and the commander of the mission orders it to arrest a suspect who resists arrest? Or what if the robot is in medical service and is ordered to perform a surgical procedure on a patient, the patient objects, but the medical doctor insists that the procedure is for the patient’s own good and repeats the order to the robot? The main question in this context is which kinds of laws or ethics are correct, and who is to decide. To cope with these same problems as they relate to humans, society devised criminal law. Criminal law embodies the most powerful legal social control in modern civilization. People’s fear of AI robots, in most cases, is based on the fact that AI robots are not considered to be subject to the law, specifically to criminal law. In the past, people were similarly fearful of corporations and their power to commit a spectrum of crimes, but since corporations are legal entities subject to criminal and corporate law, that kind of fear has been reduced significantly.3
The apprehension that AI robots evoke may have arisen due to Hollywood’s depiction of AI robots in numerous films, such as “2001: A Space Odyssey,”4 and the modern trilogy “The Matrix,”5 in which AI robots are not subject to the law. However, it should be noted that Hollywood did treat AI robots in an empathic way by depicting them as human, as almost
3 See generally John C. Coffee, Jr., “No Soul to Damn: No Body to Kick”: An Unscandalized Inquiry Into the Problem of Corporate Punishment, 79 MICH. L. REV. 386 (1981); STEVEN BOX, POWER, CRIME AND MYSTIFICATION 16-79 (1983); Brent Fisse & John Braithwaite, The Allocation of Responsibility for Corporate Crime: Individualism, Collectivism and Accountability, 11 SYDNEY L. REV. 468 (1988).
4 STANLEY KUBRICK, 2001: A SPACE ODYSSEY (1968).
5 JOEL SILVER, THE MATRIX (1999); JOEL SILVER, LAURENCE WACHOWSKI AND ANDREW PAUL WACHOWSKI, THE MATRIX RELOADED (2003); JOEL SILVER, LAURENCE WACHOWSKI AND ANDREW PAUL WACHOWSKI, THE MATRIX REVOLUTIONS...