Robot ethics is a branch of applied ethics which endeavours to isolate and analyse ethical issues arising in connection with present and prospective uses of robots. These issues span the protection and promotion of human autonomy, moral responsibility and liability, privacy, fair access to technological resources, social and cultural discrimination, and the ethical dimensions of personhood.
Robots are machines endowed with sensing, information processing, and motor abilities. Information processing in robotic systems notably takes the form of perception, reasoning, planning, and learning, in addition to feedback signal processing and control. The coordinated exercise of these abilities enables robotic systems to achieve goal-oriented and adaptive behaviours. Communication technologies enable robots to access networks of software agents hosted by other robotic and computer systems. New generations of robots are becoming increasingly proficient at coordinating their behaviours and pursuing shared goals with heterogeneous teams of agents, which include other robots, humans, and software systems.
During the last decades of the twentieth century, robots were mostly confined to industrial environments, where rigid protocols severely limited human-robot interaction. The rapidly growing research areas of field and service robotics are now paving the way to more extensive and versatile uses of robots in non-industrial environments, which range from the extreme scenarios of space missions, deep sea explorations, and rescue operations to the more conventional human habitats of workshops, homes, offices, hospitals, museums, and schools. In particular, research in a special area of service robotics called personal robotics is expected to enable richer and more flexible forms of human-robot interaction in the near future, bringing robots closer to humans in a variety of healthcare, training, education, and entertainment contexts.
The following questions vividly illustrate the range of issues falling within the purview of robot ethics:

* Who is responsible for damages caused by service and personal robots?
* Are there ethical constraints on the design of control hierarchies for mixed human-robot cooperative teams?
* Is the right to privacy threatened by personal robots accessing the internet?
* Can military robots be granted the licence to kill on the battlefield?
* Should one regard robots, just like human beings, as moral agents and bearers of fundamental rights?
WHAT IS A ROBOT?
To understand the concept of robot ethics, we first need to understand what a robot actually is. Given society's long fascination with robotics, the question hardly seems worth asking, as the answer surely must be obvious. On the contrary, there is still a lack of consensus among roboticists on how to define the object of their craft. For instance, an intuitive definition might be that a robot is merely a computer with sensors and actuators that allow it to interact with the external world; however, any computer connected to a printer or able to eject a CD might qualify as a robot under that definition, and few roboticists would defend that implication.
Certainly, artificial intelligence by itself can raise interesting issues, such as whether we ought to keep humans in the loop in critical systems, e.g., those controlling energy grids and making financial trades, lest we risk widespread blackouts and stock-market crashes. But robots, or embodied AI that can directly exert influence on the world, seem to pose additional or special risks and ethical quandaries that we want to distinguish here. A plausible definition, therefore, needs to be more precise and distinguish robots from mere computers and...