Following on neatly from Tom’s post about the Pentagon’s future war brainstorms and the US Office of Naval Research’s recent report on battlebot morality, philosopher A C Grayling takes to his soapbox at New Scientist to warn us that we need to regulate the use of robots in military and domestic policing roles now… before it’s too late.
In the next decades, completely autonomous robots might be involved in many military, policing, transport and even caring roles. What if they malfunction? What if a programming glitch makes them kill, electrocute, demolish, drown and explode, or fail at the crucial moment? Whose insurance will pay for damage to furniture, other traffic or the baby, when things go wrong? The software company, the manufacturer, the owner?
The civil liberties implications of robot devices capable of surveillance involving listening and photographing, conducting searches, entering premises through chimneys or pipes, and overpowering suspects are obvious. Such devices are already on the way. Even more frighteningly obvious is the threat posed by military or police-type robots in the hands of criminals and terrorists.
As has been pointed out before, the appeal of robots to the military mind seems to be that they offer a moral shortcut: a way to do the traditional tasks of battle and control without risking the lives of real people. But as Grayling says, that’s a short-sighted approach: it’s not a case of wondering if things will go wrong, but when… and then who will carry the can?
Call me a cynic, but I doubt the generals and politicians will be any keener to shoulder the blame for mistakes than they already are. [image by jurvetson]
One thought on “Regulating military robots”
It’s all just “collateral damage”.