Tag Archives: military

Regulating military robots

Following on neatly from Tom’s post about the Pentagon’s future-war brainstorms and the US Office of Naval Research’s recent report on battlebot morality, philosopher A C Grayling takes to his soapbox at New Scientist to warn us that we need to regulate the use of robots for military and domestic policing purposes now… before it’s too late.

In the next decades, completely autonomous robots might be involved in many military, policing, transport and even caring roles. What if they malfunction? What if a programming glitch makes them kill, electrocute, demolish, drown and explode, or fail at the crucial moment? Whose insurance will pay for damage to furniture, other traffic or the baby, when things go wrong? The software company, the manufacturer, the owner?

[snip]

The civil liberties implications of robot devices capable of surveillance involving listening and photographing, conducting searches, entering premises through chimneys or pipes, and overpowering suspects are obvious. Such devices are already on the way. Even more frighteningly obvious is the threat posed by military or police-type robots in the hands of criminals and terrorists.

As has been pointed out before, the appeal of robots to the military mind seems to be that they’re a form of moral short-cut, a way to do the traditional tasks of battle and control without risking the lives of real people. But as Grayling says, that’s a short-sighted approach: it’s not a case of wondering if things will go wrong, but when… and then who will carry the can?

Call me a cynic, but I doubt the generals and politicians will be any keener to shoulder the blame for mistakes than they already are. [image by jurvetson]

Revealed: Pentagon predicts wars of the future

The proud journos at TPMMuckraker have managed to acquire the titles of various Pentagon Office of Net Assessment reports through a Freedom of Information request. Here’s what’s been on their minds:

The Great Siberian War Of 2030

The Revival Of Chinese Nationalism: Challenges To American Ideals

The Future Of Undersea Warfare

Chinese And Russian Asymmetrical Strategies For Space Dominance (2010-2030)

That last one is relevant to the recent news of a military (but possibly not weapons-carrying, what with the Outer Space Treaty [thanks commenter Kian]) Chinese space station.

The whole list is here.

As the actual content of the reports is still classified we can amuse ourselves by wondering what Biometaphor For The Body Politic [March 2006] refers to. It sounds like a description of someone explaining the Facts of Life with handpuppets.

[via Danger Room][image also from Danger Room]

Chinese to launch military space station

The Chinese government has announced its intention to launch two space stations over the next two years, one for civil use and one for military activities:

The design, revealed to the Chinese during a nationally televised Chinese New Year broadcast, includes a large module with docking system making up the forward half of the vehicle and a service module section with solar arrays and propellant tanks making up the aft.

The concept is similar to manned concepts for Europe’s Automated Transfer Vehicle.

While used as a target to build Chinese docking and habitation experience, the vehicle’s military mission has some apparent parallels with the U.S. Air Force Manned Orbiting Laboratory (MOL) program cancelled in 1969 before it flew any manned missions. MOL’s objectives were primarily reconnaissance and technology development.

This is all due to happen in the same year that NASA is phasing out the space shuttle: how will Chinese progress in space affect US space policy?

[from SPACE.com via Slashdot][image from SPACE.com]

Battlefield Morality 2.0

To brighten your Monday morning, here’s some speculation on robot morality – though not from one of the usual sources. Nick Carr bounces off a Times Online story about a report from the US Office of Naval Research which “strongly warns the US military against complacency or shortcuts as military robot designers engage in the ‘rush to market’ and the pace of advances in artificial intelligence is increased.”

Carr digs into the text of the report itself [pdf], which demonstrates a caution somewhat at odds with the usual media image of the military-industrial complex:

Related major research efforts also are being devoted to enabling robots to learn from experience, raising the question of whether we can predict with reasonable certainty what the robot will learn. The answer seems to be negative, since if we could predict that, we would simply program the robot in the first place, instead of requiring learning. Learning may enable the robot to respond to novel situations, given the impracticality and impossibility of predicting all eventualities on the designer’s part. Thus, unpredictability in the behavior of complex robots is a major source of worry, especially if robots are to operate in unstructured environments, rather than the carefully‐structured domain of a factory.
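The report’s unpredictability argument is easy to demonstrate in miniature. Here’s an illustration of my own (not from the report): two tabular Q-learning agents running identical code on an identical toy task end up with different learned value tables, purely because their random exploration unfolded differently.

```python
import random

def train(seed, episodes=200):
    """Tabular Q-learning on a 5-state corridor; reward only at the far end."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
    for _ in range(episodes):
        s = 2                                   # start mid-corridor
        for _ in range(20):
            if rng.random() < 0.3:              # explore: random move
                a = rng.choice((-1, 1))
            else:                               # exploit: best known move
                a = max((-1, 1), key=lambda act: q[(s, act)])
            s2 = min(4, max(0, s + a))
            r = 1.0 if s2 == 4 else 0.0
            q[(s, a)] += 0.5 * (r + 0.9 * max(q[(s2, -1)], q[(s2, 1)]) - q[(s, a)])
            s = s2
            if s == 4:
                break
    return q

qa, qb = train(seed=1), train(seed=2)
# Same code, same task: only the exploration noise differs, yet the
# learned value tables come out different.
print(qa == qb)
```

The designer wrote every line of both agents, yet couldn’t have told you in advance exactly what either would learn – which is precisely the report’s worry, scaled down to a toy.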

The report goes on to consider potential training methods, and suggests that some sort of ‘moral programming’ might be the only way to ensure that our artificial warriors don’t run amok when exposed to the unpredictable scenario of a real conflict. Perhaps Carr is a science fiction reader, because he’s thinking beyond the obvious answers:

Of course, this raises deeper issues, which the authors don’t address: Can ethics be cleanly disassociated from emotion? Would the programming of morality into robots eventually lead, through bottom-up learning, to the emergence of a capacity for emotion as well? And would, at that point, the robots have a capacity not just for moral action but for moral choice – with all the messiness that goes with it?

It’s a tricky question; essentially the military want to have their cake and eat it, replacing fallible meat-soldiers with reliable mechanical stand-ins that can do all the clever stuff without the emotional trickiness that comes bundled with the ability to do clever stuff. [image by Dinora Lujan]

I’d go further still, and ask whether that capacity for emotion and moral action actually obviates the entire point of using robots to fight wars. In other words, if robots are supposed to take the place of humans in situations we consider too dangerous to expend real people on, how close do a robot’s emotions and morality have to be to their human equivalents before it becomes immoral to use them in the same way?

Homebrew UAV: Arduino

Flash forward 20 years. Everyone has access to an open-source personal rapid prototyper (notwithstanding a fabber equivalent of Bill Gates…) and can rustle up one of these homebrew UAVs at the drop of a futuristic ambient computer thing:

Combined with an RC plane, this makes it easy to build a complete UAV for less than $500, which is really kind of amazing. As exciting as that is, it’s also sobering to know that a technology that was just a few years ago the sole domain of the military is now within the reach of amateurs…

As Charles Stross points out, ready-to-print Saturday night specials could be only a decade away, and along with the UAVs and the fabbers it makes the next few years an interesting time to be alive.

[via Warren Ellis][image from tanakawho on flickr]