We are surprisingly far along in this radical reordering of the military’s ranks, yet neither the U.S. nor any other country has fashioned anything like a robot doctrine or even a clear policy on military machines. As quickly as countries build these systems, they want to deploy them, says Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield in England: “There’s been absolutely no international discussion. It’s all going forward without anyone talking to one another.” In his recent book Wired for War: The Robotics Revolution and Conflict in the 21st Century, Brookings Institution fellow P.W. Singer argues that robots and remotely operated weapons are transforming wars and the wider world in much the way gunpowder, mechanization and the atomic bomb did in previous generations. But Singer sees significant differences as well. “We’re experiencing Moore’s Law,” he told me, citing the axiom that computer processing power will double every two years, “but we haven’t got past Murphy’s Law.” Robots will come to possess far greater intelligence, with more ability to reason and self-adapt, and they will also of course acquire ever greater destructive power.
[…]
It turns out that it’s easier to design intelligent robots with greater independence than it is to prove that they will always operate safely. The “Technology Horizons” report emphasizes “the relative ease with which autonomous systems can be developed, in contrast to the burden of developing V&V [verification and validation] measures,” and the document affirms that “developing methods for establishing ‘certifiable trust in autonomous systems’ is the single greatest technical barrier that must be overcome to obtain the capability advantages that are achievable by increasing use of autonomous systems.” Ground and flight tests are one method of showing that machines work correctly, but they are expensive and extremely limited in the variables they can check. Software simulations can run through a vast number of scenarios cheaply, but there is no way to know for sure how the literal-minded machine will react when on missions in the messy real world. Daniel Thompson, the technical adviser to the Control Sciences Division at the Air Force research lab, told me that as machine autonomy evolves from autopilot to adaptive flight control and all the way to advanced learning systems, certifying that machines are doing what they’re supposed to becomes much more difficult. “We still need to develop the tools that would allow us to handle this exponential growth,” he says. “What we’re talking about here are things that are very complex.”
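To make the scale problem concrete, here is a minimal sketch, in Python, of the kind of Monte Carlo scenario testing such simulations perform; the toy altitude-hold “autopilot”, the safety envelope and every number in it are hypothetical illustrations, not anyone’s actual test harness. Thousands of randomized runs are cheap, but a clean pass says nothing about the inputs nobody thought to generate:

```python
import random

# Toy altitude-hold "autopilot": a proportional controller (hypothetical,
# purely illustrative; real flight control software is vastly more complex).
def autopilot(altitude, target=1000.0, gain=0.1):
    """Return a climb/descend command in metres per tick."""
    return gain * (target - altitude)

def simulate(seed, ticks=500):
    """Run one randomized scenario; return True if the safety invariant held."""
    rng = random.Random(seed)
    altitude = rng.uniform(200.0, 1800.0)    # random initial condition
    for _ in range(ticks):
        gust = rng.gauss(0.0, 5.0)           # random disturbance each tick
        altitude += autopilot(altitude) + gust
        if not (0.0 < altitude < 3000.0):    # safety envelope violated
            return False
    return True

# Monte Carlo V&V: thousands of scenarios are cheap to run, but passing all
# of them proves nothing about the cases the generator never produces.
failures = [s for s in range(10_000) if not simulate(s)]
print(f"{len(failures)} failures out of 10,000 randomized scenarios")
```

And that is with a single fixed controller and two random variables; an adaptive or learning system multiplies the state space far beyond what any such sweep can cover, which is exactly the exponential growth Thompson is worried about.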
Of course, the easiest way to avoid rogue killer robots would be to build fewer of them.
*tumbleweed*
Joking aside, this is quite a big deal – energy-autonomous machines could do all sorts of amazing things, and some scary ones too. It also stirs up the same arguments about “artificial life” as the Venter announcement, albeit coming from a very different angle: if I remember my GCSE biology right, eating and excreting are two pillars of the scientific definition of biological life, and there’s a machine that does both as well as being capable of independent movement. Interesting times, people, interesting times.
Professor Stewart and report co-author Chris Elliott remain convinced that autonomous systems will prove, on average, to be better surgeons and better lorry drivers than humans are.
But when they are not, the result could be a legal morass, they said.
“If a robot surgeon is actually better than a human one, most times you’re going to be better off with a robot surgeon,” Dr Elliott said. “But occasionally it might do something that a human being would never be so stupid as to do.”
Professor Stewart concluded: “It is fundamentally a big issue that we think the public ought to think through before we start trying to imprison a truck.”
And if and when true AI, artificial general intelligence at a human level, shows up, will it commit crimes, and if so, who will be responsible?
Despite some freaky-looking androids coming out of Japan, we have yet to develop robots that can reproduce complex autonomous human behaviours. Perhaps the problem is that we’re aiming too high?
Rather than trying to replicate human intelligence in all its furious complexity, with its higher levels of language and reasoning, it would be better to start at the bottom and figure out the simpler abilities that humans share with other animals, they say.
These include navigating, seeking food and avoiding dangers.
And, for this job, there can be no better inspiration than the rat, which has lived cheek-by-whisker with humans since Homo sapiens took his first steps.
“The rat is the animal that scientists know best, and the structure of its brain is similar to that of humans,” says Steve Nguyen, a doctoral student at ISIR, who helped show off Psikharpax at a research and innovation fair in Paris last week.
The goal is to get Psikharpax to be able to “survive” in new environments. It would be able to spot and move around things in its way, detect when it is in danger from collision with a human in its vicinity and spot an opportunity for “feeding” — recharging its battery at power points placed around the lab.
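As a rough illustration of how such survival priorities can be arbitrated, here is a minimal Python sketch of priority-ordered behaviour selection, loosely in the spirit of Brooks-style subsumption; every sensor name and threshold is a hypothetical stand-in, not code from the Psikharpax project:

```python
# A minimal sketch of priority-ordered behaviour arbitration. All sensor
# readings and thresholds here are hypothetical; this is not Psikharpax's code.

def choose_action(battery, obstacle_dist, human_dist, charger_visible):
    """Pick one action per tick; higher-priority survival needs win."""
    if human_dist < 0.5:                    # danger: a human is too close
        return "retreat"
    if obstacle_dist < 0.2:                 # something blocking the path
        return "turn_away"
    if battery < 0.2 and charger_visible:   # "feeding": low battery, charger in sight
        return "dock_and_recharge"
    return "explore"                        # default: wander and learn the space

# Example tick: battery low, path clear, charger in view -> go recharge.
print(choose_action(battery=0.15, obstacle_dist=1.0, human_dist=2.0,
                    charger_visible=True))
```

The ordering does the work here: collision avoidance pre-empts “feeding”, which pre-empts idle exploration, so the robot tends to keep itself alive without any central plan.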
“We want to make robots that are able to look after themselves and depend on humans as little as possible,” said Guillot.
Seems like a good idea… provided they don’t build in the natural rodent propensity for rapid reproduction. [via GlobalGuerillas; image borrowed from linked PhysOrg article]
NEW FICTION: WORLD IN PROGRESS by Lori Ann White: He vaults effortlessly to the smooth countertop and turns to the sea of faces. It’s soapbox time, ready to rant, but he spots a wake in the sea, Bouncer Babe tossing patrons aside, closing fast. He slaps at his waist, and feedback screams through the club. Everyone, including the bouncer, just–stops.