Introduction
Autonomous weapons and robots designed to kill humans are no longer a far-fetched idea confined to science fiction. Semi-autonomous missile systems that automatically target and destroy hostile ships with minimal human intervention are already in development [1], and automatic weapons defense systems are already in use by numerous countries [2]. Combined with advanced artificial intelligence, autonomous weapons could inflict tremendous damage in warfare without human direction; it is therefore important to discuss the regulation of such systems before they enter widespread use. This essay will explore the debate over whether the military use of fully autonomous weapons and robots should be banned. It will then discuss the ethical implications of accountability.
There are situations in which accountability for an autonomous weapon's actions is unclear. According to Docherty [2], accountability serves two purposes: deterring harm to innocent people, and allowing victims to seek justice. If an artificial intelligence makes a mistake, punishing the machine itself serves neither purpose. Yet punishing the programmer, the manufacturer, or the commander who deployed it is also problematic. Docherty argues that a military commander lacks the control needed to prevent an autonomous robot from harming innocent people, precisely because the robot is autonomous. Docherty further states that it is unfair to punish either the programmer or the manufacturer, because it is infeasible to enumerate in advance every decision an artificial intelligence might arrive at. Thus, no party can be held accountable for the actions of an autonomous weapon in a satisfactory manner, and hence they should not be