[personal profile] rfmcdonald
Whenever I've mentioned this Volokh Conspiracy news item, posted by Kenneth Anderson, to my friends, they've reacted by invoking the humanoid robots of Terminator 2. Thankfully, the source article gives no hint of that.

[I]magine robots that obey injunctions like Immanuel Kant’s categorical imperative — acting rationally and with a sense of moral duty. This July, the roboticist Ronald Arkin of Georgia Tech finished a three-year project with the U.S. Army designing prototype software for autonomous ethical robots. He maintains that in limited situations, like countersniper operations or storming buildings, the software will actually allow robots to outperform humans from an ethical perspective.

“I believe these systems will have more information available to them than any human soldier could possibly process and manage at a given point in time and thus be able to make better informed decisions,” he says.

The software consists of what Arkin calls “ethical architecture,” which is based on international laws of war and rules of engagement.
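To make the idea concrete, here is my own back-of-the-envelope sketch of how a rule-based "ethical governor" might gate a robot's actions. To be clear, this is purely hypothetical and not Arkin's actual software; every name, rule, and threshold below is invented for illustration, with the laws-of-war constraints reduced to crude numeric stand-ins.

```python
from dataclasses import dataclass

@dataclass
class ProposedAction:
    """A candidate action the robot's tactical planner wants to take."""
    target_is_combatant: bool      # distinction: is the target a lawful target?
    expected_civilian_harm: float  # estimated collateral harm, arbitrary units
    military_necessity: float      # estimated military value, same units
    weapon_authorized_by_roe: bool # do the rules of engagement permit this weapon here?

# Hard constraints (hypothetical stand-ins for laws-of-war rules);
# each returns True if the action is permissible under that rule.
def distinction(a: ProposedAction) -> bool:
    return a.target_is_combatant

def proportionality(a: ProposedAction) -> bool:
    # Crude stand-in: expected collateral harm must not exceed military necessity.
    return a.expected_civilian_harm <= a.military_necessity

def rules_of_engagement(a: ProposedAction) -> bool:
    return a.weapon_authorized_by_roe

CONSTRAINTS = [distinction, proportionality, rules_of_engagement]

def ethical_governor(action: ProposedAction) -> bool:
    """Veto the action unless every constraint is satisfied.

    The governor sits between the planner and the actuators: it can
    only suppress a proposed action, never initiate one.
    """
    return all(rule(action) for rule in CONSTRAINTS)

# Example: a countersniper shot that gets vetoed because the estimated
# civilian harm outweighs the estimated military necessity.
shot = ProposedAction(target_is_combatant=True,
                      expected_civilian_harm=3.0,
                      military_necessity=1.0,
                      weapon_authorized_by_roe=True)
print("permitted" if ethical_governor(shot) else "vetoed")  # -> vetoed
```

The design choice worth noticing is that the governor is a pure veto: it never has to produce better ethical answers than a human, only to refuse actions that fail explicit rules, which is exactly the modest claim Anderson imagines Arkin making below.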


Anderson is worried about this idea on methodological grounds.

Although I am strongly in favor of the kinds of research programs that Professor Arkin is undertaking, I think the ethical and legal issues of warfare, whether the categorical rules or the proportionality rules, involve questions that humans have not managed to answer at the conceptual level. Proportionality, and what it means when seeking to weigh up radically incommensurable goods (military necessity and harm to civilians, for example), is a place to start in the law and ethics of war. One reason I am excited by Professor Arkin's attempts to perform these functions in machine terms, however, is that the detailed, step-by-step project forces us to think through difficult conceptual issues regarding human ethics at the granular level that we might otherwise skip over with some quick assumptions. Programming does not allow one to do that quite so easily.

And it is open to Professor Arkin to reply to the concern that humans don't have a fully articulated framework, even at the basic conceptual level, for the ethics of warfare (so how, then, is a machine going to do it?): "Well, in order to develop a machine, I don't actually have to address those questions or solve those problems. The robot doesn't have to have more ethical answers than you humans; it just has to be able to do as well, even with the gaps and holes." I'm not sure that answer (which I'm putting into Professor Arkin's mouth entirely hypothetically, let me emphasize) would be sufficient, partly because I suspect that intuitions applied casuistically by human beings often encode and respond to facts that affect our ethical senses in ways that would not really be articulable, by human or machine. And partly because we probably do think that, in various ways, the machine has to be better than the human.


Is he right? I'd like to believe that he's not, but humans hardly start out from a blank slate, free of any ethics-biasing inheritances, either.