Via George Dvorsky, *Popular Science* ponders the possibility of military robots going rogue:
We are surprisingly far along in this radical reordering of the military’s ranks, yet neither the U.S. nor any other country has fashioned anything like a robot doctrine or even a clear policy on military machines. As quickly as countries build these systems, they want to deploy them, says Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield in England: “There’s been absolutely no international discussion. It’s all going forward without anyone talking to one another.” In his recent book *Wired for War: The Robotics Revolution and Conflict in the 21st Century*, Brookings Institution fellow P.W. Singer argues that robots and remotely operated weapons are transforming wars and the wider world in much the way gunpowder, mechanization and the atomic bomb did in previous generations. But Singer sees significant differences as well. “We’re experiencing Moore’s Law,” he told me, citing the axiom that computer processing power will double every two years, “but we haven’t got past Murphy’s Law.” Robots will come to possess far greater intelligence, with more ability to reason and self-adapt, and they will also of course acquire ever greater destructive power.
[…]
It turns out that it’s easier to design intelligent robots with greater independence than it is to prove that they will always operate safely. The “Technology Horizons” report emphasizes “the relative ease with which autonomous systems can be developed, in contrast to the burden of developing V&V [verification and validation] measures,” and the document affirms that “developing methods for establishing ‘certifiable trust in autonomous systems’ is the single greatest technical barrier that must be overcome to obtain the capability advantages that are achievable by increasing use of autonomous systems.” Ground and flight tests are one method of showing that machines work correctly, but they are expensive and extremely limited in the variables they can check. Software simulations can run through a vast number of scenarios cheaply, but there is no way to know for sure how the literal-minded machine will react when on missions in the messy real world. Daniel Thompson, the technical adviser to the Control Sciences Division at the Air Force Research Laboratory, told me that as machine autonomy evolves from autopilot to adaptive flight control and all the way to advanced learning systems, certifying that machines are doing what they’re supposed to becomes much more difficult. “We still need to develop the tools that would allow us to handle this exponential growth,” he says. “What we’re talking about here are things that are very complex.”
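To make that V&V gap concrete, here is a minimal sketch of the simulation-testing approach the excerpt describes. Everything in it is hypothetical: a toy altitude controller and a made-up safety envelope stand in for a real autonomous system, and the names are mine, not the report's. The point is only to show why passing many random scenarios is evidence of safety rather than proof of it.

```python
import random

# Hypothetical toy controller: nudges altitude toward a setpoint.
# Purely illustrative; not drawn from any real system or the report.
def controller(altitude, setpoint=1000.0):
    """Return a climb-rate command proportional to altitude error, clamped."""
    error = setpoint - altitude
    return max(-50.0, min(50.0, 0.1 * error))

def simulate(initial_altitude, gust, steps=200):
    """Run one scenario; return True if altitude stays in a safe band."""
    altitude = initial_altitude
    for _ in range(steps):
        altitude += controller(altitude) + gust()
        if not (0.0 < altitude < 5000.0):
            return False  # left the safe envelope
    return True

random.seed(42)
trials, failures = 10_000, 0
for _ in range(trials):
    start = random.uniform(100.0, 4000.0)
    gust = lambda: random.gauss(0.0, 15.0)  # random disturbance each step
    if not simulate(start, gust):
        failures += 1

print(f"{failures}/{trials} scenarios failed")
# Zero failures here is not certification: the generator only samples
# the disturbances we thought to model, which is exactly the gap
# between cheap simulation and "certifiable trust" described above.
```

A learning system makes this worse: its behavior changes after deployment, so even a passing run like this one only certifies the policy it happened to have at test time.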
Of course, the easiest way to avoid rogue killer robots would be to build fewer of them.
*tumbleweed*