Earlier this month HA reported that four British professional and scientific bodies had issued a joint report regarding their concerns about the potential pitfalls of augmented humanity (British Academies Raise Alarm About ‘Souped-Up’ Humanity).
Now Human Rights Watch has issued a 50-page report urging national and international legislation pre-emptively banning “killer robots,” by which they mean weapons of war that are able to autonomously make life-and-death decisions with no input from a human being.
As with the report on human augmentation, I have made the Killer Robots report available as a free, downloadable PDF in the Homo Artificialis Library (HAL), filed under Ethics and Homo Artificialis.
As Raw Story reports in its news item on the report, the weapons in question aren’t yet deployed, but they are in development:
Such weapons do not yet exist, and major powers, including the US, have not decided to deploy them. But precursors are already being developed. The US, China, Germany, Israel, South Korea, Russia, and Britain are engaged in researching and developing such weapons.
Wisely, the report proposes not only legislative solutions, which can sometimes reflect the realities of the political landscape more than the issue at hand, but also a grassroots approach rooted in professional ethics, urging roboticists themselves to:
Establish a professional code of conduct governing the research and development of autonomous robotic weapons, especially those capable of becoming fully autonomous, in order to ensure that legal and ethical concerns about their use in armed conflict are adequately considered at all stages of technological development.
Military applications of advanced technology are inevitable; indeed, much advanced technology begins life as a military project, for instance within the Defense Advanced Research Projects Agency (DARPA). This has several consequences, among them:
- as with any technology, there is the potential for error or abuse, but because of the military context this can result in serious injury or death,
- there is likely to be a continuing trickle-down effect in which military applications migrate to civilian domains, such as law enforcement and civil security, that also carry the potential for error or abuse resulting in serious injury or death, and
- the first two issues raise the possibility of an alarmist backlash that ends up limiting the positive, beneficial effects such technology can have (and, as we know from laws ostensibly intended to curb the pirating of intellectual property, we are sometimes likely to get all the bad consequences of such a measure without it actually accomplishing its stated goal).
Many readers of this page are, on balance, optimists regarding the life-enhancing potential of technology. Clearly, though, recognizing the immense benefits that have come from technology and that will continue to flow from it shouldn’t mean being naive regarding possible negative consequences. If those consequences are going to be minimized (along with the potential anti-technological backlash) then we have to engage with these issues in a constructive way.
I haven’t yet read the report, so I haven’t decided whether it’s sensible and constructive, alarmist and over-reaching, or a bit of both. But if we’re going to engage constructively, killer robots aren’t a bad place to start.
You can watch the Human Rights Watch video on the topic, below.