Autonomous Military Robotics: Risk, Ethics, and Design
INTRODUCTION
Imagine the face of warfare with autonomous robotics: instead of our soldiers returning home in flag-draped caskets to heartbroken families, autonomous robots (mobile machines that can make decisions, such as whether to fire upon a target, without human intervention) could replace the human soldier in an increasing range of dangerous missions: tunneling through dark caves in search of terrorists, securing urban streets rife with sniper fire, patrolling skies and waterways that offer little cover from attack, clearing roads and seas of improvised explosive devices (IEDs), surveying damage from biochemical weapons, guarding borders and buildings, controlling potentially hostile crowds, and even serving as frontline infantry.
These robots would be ‘smart’ enough to make decisions that only humans now can; and as conflicts increase in tempo and require much quicker information processing and responses, robots have a distinct advantage over the limited and fallible cognitive capabilities that we Homo sapiens have. Not only would robots expand the battlespace over difficult, larger areas of terrain, but they also represent a significant force-multiplier—each effectively doing the work of many human soldiers, while immune to sleep deprivation, fatigue, low morale, perceptual and communication challenges in the ‘fog of war’, and other performance-hindering conditions.
But the presumptive case for deploying robots on the battlefield rests on more than saving human lives or superior efficiency and effectiveness, though saving lives and clear-headed action during frenetic conflicts are significant benefits. Robots would also be unaffected by the emotions, adrenaline, and stress that cause soldiers to overreact or deliberately overstep the Rules of Engagement and commit atrocities, that is to say, war crimes. We would no longer read (as many) news reports about our own soldiers brutalizing enemy combatants or foreign civilians to avenge the deaths of their brothers in arms, unlawful actions that carry a significant political cost. Indeed, robots may act as objective, unblinking observers on the battlefield, reporting any unethical behavior back to command; their mere presence would discourage all-too-human atrocities in the first place.
Opening Remarks
First, in this investigation we are not concerned with whether it is even technically possible to make a perfectly ethical robot, i.e., one that makes the 'right' decision in every case, or even in most cases. Following Arkin, we agree that an ethically infallible machine ought not to be the goal now (if it is even possible); rather, our goal should be more practical and immediate: to design a machine that performs better than humans do on the battlefield, particularly with respect to reducing unlawful behavior or war crimes [Arkin, 2007]. Considering the number of incidents of unlawful behavior, where 'unlawful' means a violation of the various Laws of War (LOW) or Rules of Engagement (ROE), which we discuss in more detail later, this appears to be a low standard to satisfy, though a profoundly important hurdle to clear. To that end, scientists and engineers need not first solve the daunting task of creating a truly 'ethical' robot, at least in the foreseeable future; rather, they need only program a robot to act in compliance with the LOW and ROE (though this may not be as straightforward and simple as it first appears), or to act ethically in the specific situations in which the robot is to be deployed.
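To see why programming compliance with the ROE "may not be as straightforward as it first appears," consider a toy sketch of rules-as-filters over candidate actions. This is purely illustrative (no real targeting system works this way); all names and thresholds below are hypothetical assumptions, and the hard part in practice is producing the uncertain inputs (target identification, collateral-risk estimates) that the rules consume.

```python
# Hypothetical sketch: ROE as a filter over candidate actions.
# Names and thresholds are invented for illustration only.

def roe_permits(action: str, target: dict) -> bool:
    """Reject an engagement unless the target is positively identified
    as a combatant AND the estimated collateral risk is low."""
    if action != "engage":
        return True  # non-lethal actions are always permitted here
    return (target.get("combatant_confidence", 0.0) > 0.95
            and target.get("collateral_risk", 1.0) < 0.1)

def choose_action(target: dict) -> str:
    """Pick the most assertive action the rules allow."""
    for action in ["engage", "track", "hold"]:
        if roe_permits(action, target):
            return action
    return "hold"

# An ambiguous target falls through to a non-lethal action:
ambiguous = {"combatant_confidence": 0.6, "collateral_risk": 0.3}
clear = {"combatant_confidence": 0.99, "collateral_risk": 0.05}
```

Even this trivial filter shows the difficulty: the rules themselves are easy to encode, but the confidence estimates they depend on are exactly what is hardest to obtain on a real battlefield.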
Market Forces and Considerations
Several industry trends and recent developments, including high-profile failures of semi-autonomous systems as a possible harbinger of challenges with more advanced systems, highlight the need for a technology risk assessment, as well as a broader study of other ethical and social issues related to the field. In the following, we briefly discuss seven primary market forces that are driving the development of military robotics, as well as the need for a guiding ethics; these roughly map to what have been called 'push' (technology) and 'pull' (social and cultural) factors [US Department of Defense, 2007, p.44].
Compelling military utility.
US defense organizations are attracted to the use of robots for a range of benefits, some of which we have mentioned above. A primary reason is to replace less-durable humans in "dull, dirty, and dangerous" jobs [US Department of Defense, 2007, p.19]. These include: extended reconnaissance missions, which stretch human endurance to its breaking point; environmental sampling after a nuclear or biochemical attack, which has previously led to deaths and long-term health effects among surveying teams; and neutralizing IEDs, which have caused over 40% of US casualties in Iraq since 2003 [Iraq Coalition Casualty Count, 2008]. While official statistics are difficult to locate, news organizations report
Military Robotics
The field of robotics has changed dramatically during the past 30 years. While the first programmable articulated arms for industrial automation were developed by George Devol and made into commercial products by Joseph Engelberger in the 1960s and 1970s, mobile robots with various degrees of autonomy did not receive much attention until the 1970s and 1980s. The first true mobile robots arguably were Elmer and Elsie, the electromechanical 'tortoises' made by W. Grey Walter, a physiologist, in 1950 [Walter, 1950]. These remarkable little wheeled machines had many of the features of contemporary robots: sensors (photocells for seeking light and bumpers for obstacle detection), a motor drive, and built-in behaviors that enabled them to seek (or avoid) light, wander, avoid obstacles, and recharge their batteries. Their architecture was basically reactive, in that a stimulus directly produced a response without any 'thinking.' Deliberative 'thinking' first appeared in Shakey, a robot constructed at the Stanford Research Institute in 1969 [Fikes and Nilsson, 1971]. In this machine, the sensors were not directly coupled to the drive motors but instead provided inputs to a 'thinking' layer known as the Stanford Research Institute Problem Solver (STRIPS), one of the earliest applications of artificial intelligence. The architecture was known as 'sense-plan-act' or 'sense-think-act' [Arkin, 1998].
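The contrast between Walter's reactive tortoises and Shakey's sense-plan-act architecture can be sketched in a few lines of code. This is a minimal illustration, not the actual control logic of either machine; all class and function names here are invented for the example.

```python
# Illustrative contrast between reactive and sense-plan-act control.
# Names and behaviors are simplified inventions, not historical code.

def reactive_step(light_level: float, bumper_hit: bool) -> str:
    """Walter-style reactive control: sensor readings map directly to
    an action, with no internal world model and no planning."""
    if bumper_hit:
        return "back up and turn"
    if light_level > 0.5:
        return "drive toward light"
    return "wander"

class SensePlanAct:
    """Shakey-style deliberative control: sensing updates a world
    model, a planner derives an action sequence from that model, and
    only then does the robot act."""

    def __init__(self):
        self.world_model = {}

    def sense(self, light_level: float, bumper_hit: bool) -> None:
        self.world_model["light"] = light_level
        self.world_model["blocked"] = bumper_hit

    def plan(self) -> list:
        if self.world_model.get("blocked"):
            return ["back up", "turn", "re-sense"]
        if self.world_model.get("light", 0.0) > 0.5:
            return ["orient to light", "advance"]
        return ["wander"]

    def act(self) -> str:
        return self.plan()[0]
```

The key difference is the intermediate `world_model` and `plan()` step: in the reactive loop the stimulus produces the response directly, while the deliberative architecture can reason over its model before committing to an action, at the cost of speed.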
Since those early developments, there have been major strides in mobile robots, made possible by new materials; faster, smaller, and cheaper computers (Moore's law); and major advances in software. At present, robots move on land, in the water, in the air, and in space. Terrestrial mobility uses legs, treads, and wheels, as well as snake-like locomotion and hopping. Flying robots make use of propellers, jet engines, and wings. Underwater robots may resemble submarines, fish, eels, or even lobsters. Some vehicles capable of moving in more than one medium or terrain have been built. Service robots, designed for applications such as vacuum cleaning, floor washing, and lawn mowing, have been sold in large quantities in recent years. Humanoid robots, long found only in science fiction, are now manufactured in various sizes and with various degrees of sophistication [Bekey, 2005]. Small toy humanoids, such as the WowWee Corporation's RoboSapien, have sold in the millions. More complex humanoids, such as the Honda ASIMO, are able to perform numerous tasks. However, a 'killer application' for humanoid robots has not yet emerged.