Friday, July 31, 2009

Ethics in a Hybrid


Analogous to missile guidance systems, which rely on radar and a radio or wired link between the control point and the missile, Arkin’s “ethical controller” is a software architecture that provides “ethical control and a reasoning system potentially suitable for constraining lethal actions in an autonomous robotic system so that they fall within the bounds prescribed by the Geneva Conventions, the Laws of War, and the Rules of Engagement.”
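To make the idea concrete, here is a minimal sketch of how such a governor could work in principle, assuming a simple constraint-filter view: a proposed lethal action is released only if every encoded constraint derived from the LOW and ROE permits it. All names and thresholds below are hypothetical illustrations, not Arkin’s actual design.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    target: str
    lethal: bool
    expected_collateral: float  # estimated risk of noncombatant harm, 0.0-1.0

# A constraint inspects a proposed action and returns True if it is permitted.
Constraint = Callable[[Action], bool]

def discrimination_constraint(action: Action) -> bool:
    """LOW-style discrimination: suppress lethal actions whose estimated
    noncombatant harm exceeds a (hypothetical) threshold."""
    return not (action.lethal and action.expected_collateral > 0.1)

def governor(action: Action, constraints: List[Constraint]) -> bool:
    """Release the action only if every constraint permits it; otherwise
    suppress it. A real system would also report which constraint failed."""
    return all(check(action) for check in constraints)

# Example: a lethal action with a high collateral estimate is suppressed.
proposed = Action(target="vehicle", lethal=True, expected_collateral=0.4)
print(governor(proposed, [discrimination_constraint]))  # -> False
```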

Rather than guiding a missile to its intended target, Arkin’s robotic guidance system is being designed to reduce the need for humans in harm's way: "… appropriately designed military robots will be better able to avoid civilian casualties than existing human war fighters and might therefore make future wars more ethical."
As reported in a recent New York Times article, Dr. Arkin describes some of the potential benefits of autonomous fighting robots. They can be designed without a sense of self-preservation and, as a result, with “no tendency to lash out in fear.” They can be built without anger or recklessness, and they can be made invulnerable to what he calls “the psychological problem of ‘scenario fulfillment,’” which causes people to absorb new information more easily if it matches their pre-existing ideas.

The SF writer Isaac Asimov first introduced the notion of ethical rules for robots in his 1942 short story “Runaround.” His famous Three Laws of Robotics state the following:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
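
The interesting part of the Laws is their strict priority ordering: a lower law may never be satisfied at the expense of a higher one. As a toy illustration only (the field names are invented, and real ethics are obviously not a lookup table), the ordering behaves like a lexicographic filter over candidate behaviors:

```python
def asimov_filter(behaviors):
    """Toy lexicographic filter over candidate behaviors, each a dict with
    boolean fields 'harms_human', 'obeys_order', and 'preserves_self'."""
    # First Law: discard anything that injures a human (or allows harm).
    behaviors = [b for b in behaviors if not b["harms_human"]]
    # Second Law: among what remains, prefer behaviors that obey orders.
    obedient = [b for b in behaviors if b["obeys_order"]]
    behaviors = obedient or behaviors
    # Third Law: among what remains, prefer self-preserving behaviors.
    preserving = [b for b in behaviors if b["preserves_self"]]
    return preserving or behaviors

candidates = [
    {"harms_human": True,  "obeys_order": True,  "preserves_self": True},
    {"harms_human": False, "obeys_order": True,  "preserves_self": False},
    {"harms_human": False, "obeys_order": False, "preserves_self": True},
]
# Obedience beats self-preservation, but harming a human is never chosen.
print(asimov_filter(candidates))
```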

The Laws of War (LOW) and Rules of Engagement (ROE) make programming robots to adhere to Asimov’s Laws far from simple. You want the robots to protect friendly forces and “neutralize” enemy combatants. This likely means harming human beings on the battlefield.

In his recent book, Governing Lethal Behavior in Autonomous Robots, Dr. Arkin explores a number of complex real-world scenarios where robots with ethical governors would “do the right thing” in consultation with humans on the battlefield. These scenarios include ROE and LOW adherence (Taliban and Iraq), discrimination (Korean DMZ), and proportionality and tactics (urban sniper).

Arkin’s “rules” end up altering Asimov’s to look more like the following (a toy decision-flow sketch follows the list):

1. Engage and neutralize targets as combatants according to the ROE.
2. Return fire with fire proportionately.
3. Minimize collateral damage: intentionally minimize harm to noncombatants.
4. If uncertain, invoke tactical maneuvers to reassess combatant status.
5. Recognize surrender and hold POWs until human forces take custody.
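
One plausible reading of this list, sketched below purely as an illustration (every predicate is a hypothetical placeholder for a much harder perception problem, and this is not Arkin’s actual governor logic), is a gated decision procedure in which surrender recognition and uncertainty checks come before any engagement:

```python
def decide(situation: dict) -> str:
    """Order the five rules above as a gated decision flow."""
    if situation["surrendering"]:
        return "hold as POW until human forces take custody"    # rule 5
    if not situation["combatant_status_confirmed"]:
        return "maneuver to reassess combatant status"          # rule 4
    if situation["expected_collateral"] > situation["collateral_limit"]:
        return "withhold fire to minimize collateral damage"    # rule 3
    if situation["taking_fire"]:
        return "return proportionate fire"                      # rule 2
    if situation["roe_permits_engagement"]:
        return "engage and neutralize per ROE"                  # rule 1
    return "do not engage"

print(decide({
    "surrendering": False,
    "combatant_status_confirmed": False,
    "expected_collateral": 0.0,
    "collateral_limit": 0.1,
    "taking_fire": False,
    "roe_permits_engagement": True,
}))  # -> "maneuver to reassess combatant status"
```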

Of course there are serious questions and concerns regarding the just war tradition itself, often raised by pacifists who question the premises on which it is built, and who in so doing also raise issues that potentially affect autonomous systems. For example, one such critic asks: “Are soldiers when assigned a mission given sufficient information to determine whether this is an order they should obey? If a person under orders is convinced he or she must disobey, will the command structure, the society, and the church honor that dissent?” Clearly, if we embed an ethical “conscience” into an autonomous system, it is only as good as the information upon which it functions. It is a working assumption, perhaps naïve, that the autonomous agent will ultimately be provided with battlefield information equal to or greater than what a human soldier is capable of managing. With the advent of network-centric warfare and the emergence of the Global Information Grid (GIG), this seems a reasonable assumption. It is also assumed that if an autonomous agent refuses to conduct an unethical action, it will be able to explain, to some degree, its underlying logic for that refusal. If commanders are given the authority to override the system’s resistance to executing an order it deems unethical, they would in so doing assume responsibility for the consequences of that action.
These issues are but the tip of the iceberg of the ethical quandaries surrounding the deployment of autonomous systems capable of lethality. It is my contention, nonetheless, that if (or when) these systems are deployed on the battlefield, it is the roboticist’s duty to ensure they are as safe as possible for combatant and noncombatant alike, as prescribed by our society’s commitment to the international conventions encoded in the Laws of War and other similar doctrine, e.g., the Code of Conduct and the Rules of Engagement.

It is anticipated that teams of autonomous systems and human soldiers will work together on the battlefield, as opposed to the common science fiction vision of armies of unmanned systems operating by themselves. Multiple unmanned robotic systems that employ lethal force are already being developed or in use: the ARV (Armed Robotic Vehicle), a component of the Future Combat Systems (FCS) program; Predator UAVs (unmanned aerial vehicles) equipped with Hellfire missiles, which have already been used in combat, though under direct human supervision; and an armed platform under development for use in the Korean Demilitarized Zone [Argy 07, SamsungTechwin 07], to name a few. Some particulars follow:
• The South Korean robot platform mentioned above is intended to detect and identify targets in daylight within a 4 km radius, or at night using infrared sensors within a range of 2 km, providing for either an autonomous lethal or non-lethal response. Although a designer of the system states that “the ultimate decision about shooting should be made by a human, not the robot,” the system does have an automatic mode in which it is capable of making the decision on its own [Kumagai 07].
• iRobot, the maker of the Roomba, is now providing versions of its PackBot capable of tasering enemy combatants [Jewell 07]. This non-lethal response, however, does require a human in the loop, unlike the South Korean robot under development.
• The SWORDS platform developed by Foster-Miller is already at work in Iraq and Afghanistan and is capable of carrying lethal weaponry (an M240 or M249 machine gun, or a Barrett .50 caliber rifle) [Foster-Miller 07].
• Israel is deploying stationary robotic gun-sensor platforms along its border with Gaza to create automated kill zones, equipped with .50 caliber machine guns and armored folding shields. Although the system is currently operated only by remote control, an IDF division commander is quoted as saying, “At least in the initial phases of deployment, we’re going to have to keep a man in the loop,” implying the potential for more autonomous operation in the future [Opall-Rome 07].
• Lockheed Martin, as part of its role in the Future Combat Systems program, is developing an Armed Robotic Vehicle-Assault (Light) MULE robot weighing in at 2.5 tons. It will be armed with a line-of-sight gun and an anti-tank capability to provide “immediate, heavy firepower to the dismounted soldier” [Lockheed-Martin 07].
• The U.S. Air Force has created its first hunter-killer UAV, named the MQ-9 Reaper. According to USAF General Moseley, the name Reaper is “fitting as it captures the lethal nature of this new weapon system.” It has a 64-foot wingspan and carries 15 times the ordnance of the Predator while flying at nearly three times the Predator’s cruise speed. As of September 2006, seven were already in inventory, with more on the way [AirForce 06].
• The U.S. Navy is, for the first time, requesting funding in 2010 to acquire armed Fire Scout UAVs, a vertical-takeoff-and-landing tactical UAV that will be equipped with kinetic weapons. The system has already been tested with 2.75-inch unguided rockets. The UAVs are intended to deal with threats such as small swarming boats. For now, the commander will determine whether or not a target should be struck [Erwin 07].


