February 25, 2013
Intro to Ethics
A Soldier, Taking Orders From Its Ethical Judgment Center
In this article, author Cornelia Dean makes three major points, each supported by arguments drawn from others. The first is the hopeful idea that autonomous robots could perform more ethically in combat than any human soldier in the same scenario. She notes that even the best-trained soldiers, in the midst of battle, may not always act in accordance with the battlefield rules of engagement set out by the Geneva Conventions, because ordinary human emotions such as anger, fear, resentment, and vengefulness can drive them to lash out. The second major point Dean presents, through the views and studies of others, is that we should not let this possible step in the evolution of military technology fade away. Her third point poses a question: if we do develop this technology, how would we do so, and if we do not, would we regret failing to advance in this field many years from now? Yet for all the information Dean uses to present her ideas, major flaws remain: most of these ideas and beliefs are theoretical, they have not been fully tested, all technologies are prone to error, and it is unclear where such advancements would ultimately lead artificial intelligence.
The first argument supporting Dean's major point comes from the research of Ronald Arkin, a computer scientist at the Georgia Institute of Technology. Arkin is currently under contract with the United States Army to design software for current and future battlefield robots. His hypothesis is that intelligent autonomous robots can perform far more ethically in the heat of battle than humans currently can. Yet this is just a hypothesis, and while much research has been done toward this...