Paul Raven @ 23-02-2009

To brighten your Monday morning, here’s some speculation on robot morality – though not from one of the usual sources. Nick Carr bounces off a Times Online story about a report from the US Office of Naval Research which “strongly warns the US military against complacency or shortcuts as military robot designers engage in the ‘rush to market’ and the pace of advances in artificial intelligence is increased.”

Carr digs into the text of the report itself [pdf], which demonstrates a caution somewhat at odds with the usual media image of the military-industrial complex:

Related major research efforts also are being devoted to enabling robots to learn from experience, raising the question of whether we can predict with reasonable certainty what the robot will learn. The answer seems to be negative, since if we could predict that, we would simply program the robot in the first place, instead of requiring learning. Learning may enable the robot to respond to novel situations, given the impracticality and impossibility of predicting all eventualities on the designer’s part. Thus, unpredictability in the behavior of complex robots is a major source of worry, especially if robots are to operate in unstructured environments, rather than the carefully‐structured domain of a factory.

The report goes on to consider potential training methods, and suggests that some sort of ‘moral programming’ might be the only way to ensure that our artificial warriors don’t run amok when exposed to the unpredictable scenario of a real conflict. Perhaps Carr is a science fiction reader, because he’s thinking beyond the obvious answers:

Of course, this raises deeper issues, which the authors don’t address: Can ethics be cleanly disassociated from emotion? Would the programming of morality into robots eventually lead, through bottom-up learning, to the emergence of a capacity for emotion as well? And would, at that point, the robots have a capacity not just for moral action but for moral choice – with all the messiness that goes with it?

It’s a tricky question; essentially the military want to have their cake and eat it, replacing fallible meat-soldiers with reliable mechanical replacements that can do all the clever stuff without any of the attendant emotional trickiness that the ability to do clever stuff brings as part of the bargain. [image by Dinora Lujan]

I’d go further still, and ask whether that capacity for emotion and moral action actually negates the entire point of using robots to fight wars – in other words, if robots are supposed to take the positions of humans in situations we consider too dangerous to expend real people on, how close do a robot’s emotions and morality have to be to their human equivalents before it becomes immoral to use them in the same way?


4 Responses to “Battlefield Morality 2.0”

  1. Mike Treder says:

    Excellent final questions, Paul!

  2. Bruce Cohen (SpeakerToManagers) says:

    Very good questions. I’d go even farther and ask if it isn’t the case that the nature of war as a chaotic destroyer _requires_ an agent capable of moral choice on the scene, to ensure that immoral or just incorrect decisions at the tactical level don’t undermine mission objectives, strategy, and the political objectives of the overall conflict.

  3. Sterling Camden says:

Bingo. We can’t have our cake and behead Marie Antoinette.

  4. Sterling Camden says: