Rebellious robots: how likely is the Terminator scenario?

Paul Raven @ 18-01-2011

Via George Dvorsky, Popular Science ponders the possibility of military robots going rogue:

We are surprisingly far along in this radical reordering of the military’s ranks, yet neither the U.S. nor any other country has fashioned anything like a robot doctrine or even a clear policy on military machines. As quickly as countries build these systems, they want to deploy them, says Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield in England: “There’s been absolutely no international discussion. It’s all going forward without anyone talking to one another.” In his recent book Wired for War: The Robotics Revolution and Conflict in the 21st Century, Brookings Institution fellow P.W. Singer argues that robots and remotely operated weapons are transforming wars and the wider world in much the way gunpowder, mechanization and the atomic bomb did in previous generations. But Singer sees significant differences as well. “We’re experiencing Moore’s Law,” he told me, citing the axiom that computer processing power will double every two years, “but we haven’t got past Murphy’s Law.” Robots will come to possess far greater intelligence, with more ability to reason and self-adapt, and they will also of course acquire ever greater destructive power.

[…]

It turns out that it’s easier to design intelligent robots with greater independence than it is to prove that they will always operate safely. The “Technology Horizons” report emphasizes “the relative ease with which autonomous systems can be developed, in contrast to the burden of developing V&V [verification and validation] measures,” and the document affirms that “developing methods for establishing ‘certifiable trust in autonomous systems’ is the single greatest technical barrier that must be overcome to obtain the capability advantages that are achievable by increasing use of autonomous systems.” Ground and flight tests are one method of showing that machines work correctly, but they are expensive and extremely limited in the variables they can check. Software simulations can run through a vast number of scenarios cheaply, but there is no way to know for sure how the literal-minded machine will react when on missions in the messy real world. Daniel Thompson, the technical adviser to the Control Sciences Division at the Air Force research lab, told me that as machine autonomy evolves from autopilot to adaptive flight control and all the way to advanced learning systems, certifying that machines are doing what they’re supposed to becomes much more difficult. “We still need to develop the tools that would allow us to handle this exponential growth,” he says. “What we’re talking about here are things that are very complex.”
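The trade-off the report describes — expensive physical tests that cover few variables versus cheap simulations that cover many but can’t capture the messy real world — is easy to see in miniature. Here’s a toy Monte Carlo sketch; the controller, the disturbance model and the failure condition are all invented for illustration, and this is exactly the kind of check that scales in software but not on a test range:

```python
import random

def toy_controller(altitude, target=100.0, gain=0.4):
    """Hypothetical control law: steer altitude toward the target."""
    return gain * (target - altitude)

def run_scenario(rng, steps=200):
    """One simulated flight with random gusts; True if we stay in the envelope."""
    altitude = rng.uniform(0.0, 200.0)
    for _ in range(steps):
        gust = rng.gauss(0.0, 2.0)          # random disturbance each step
        altitude += toy_controller(altitude) + gust
        if not (0.0 <= altitude <= 400.0):  # leaving the envelope counts as failure
            return False
    return True

def monte_carlo(n=10_000, seed=42):
    """Sweep thousands of scenarios -- far more than any flight-test budget allows."""
    rng = random.Random(seed)
    failures = sum(not run_scenario(rng) for _ in range(n))
    return failures, n

failures, n = monte_carlo()
print(f"{failures} failures in {n} simulated runs")
```

Ten thousand runs take a second or two on a laptop; ten thousand test flights would take careers. The catch, as the report says, is that the simulation only checks the world you thought to model.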

Of course, the easiest way to avoid rogue killer robots would be to build fewer of them.

*tumbleweed*


Sexbots sashaying across the Uncanny Valley

Paul Raven @ 26-01-2010

2010 is shaping up to be a busy year in robotics, if the number of robo-related posts flowing through my RSS pipes is anything to go by. Here are just a handful of ’em for you…

First of all, nascent sexbot company TrueCompanion debuted Roxxxy [see image] at the AVN Adult Entertainment Expo in Las Vegas just after the new year [via SlashDot and Technovelgy]:

“She can’t vacuum, she can’t cook but she can do almost anything else if you know what I mean,” TrueCompanion’s Douglas Hines said while introducing AFP to Roxxxy.

Nudge nudge, wink wink, say no more.

“She’s a companion. She has a personality. She hears you. She listens to you. She speaks. She feels your touch. She goes to sleep. We are trying to replicate a personality of a person.”

Roxxxy stands five feet, seven inches tall, weighs 120 pounds, “has a full C cup and is ready for action,” according to Hines, who was an artificial intelligence engineer at Bell Labs before starting TrueCompanion.

[…]

Roxxxy comes with five personalities. Wild Wendy is outgoing and adventurous, while Frigid Farrah is reserved and shy.

There is a young naive personality along with a Mature Martha that Hines described as having a “matriarchal kind of caring.” S & M Susan is geared for more adventurous types.

Aspiring partners can customize Roxxxy features, including race, hair color and breast size. A male sex robot named “Rocky” is in development.

Somehow, I find Hines a bit more creepy than Roxxxy. And if you find the notion of people building sexbots a little odd, wait until you hear Hines’ motivations for creating her…

Inspiration for the sex robot sprang from the September 11, 2001 attacks, when planes crashed into the World Trade Center in New York City, the Pentagon and an empty field in Pennsylvania.

“I had a friend who passed away in 9/11,” Hines said. “I promised myself I would create a program to store his personality, and that became the foundation for Roxxxy True Companion.”

Ummm, OK…

Meanwhile, South Korean roboticists are focussing on more, ah, domestic applications as they work on building a walking robot housemaid:

Mahru-Z has a human-like body including a rotating head, arms, legs and six fingers plus three-dimensional vision to recognise chores that need to be tackled, media reports said Monday.

“The most distinctive strength of Mahru-Z is its visual ability to observe objects, recognise the tasks needed to be completed, and execute them,” You Bum-Jae, head of the cognitive robot centre at the Korea Institute of Science and Technology, told the Korea Times.

“It recognises people, can turn on microwave ovens, washing machines and toasters, and also pick up sandwiches, cups and whatever else it senses as objects.”

Ideal for the frat-house with money to spare, then. But careful programming is of the essence if we’re to live side by side with robots, as is a legal framework that accommodates the ethical and social grey areas that our mechanical servants will bring with them [via Cheryl Morgan]:

Driverless cars may be one of the more gentle uses of robotics but even they will need a host of new rules written to help them fit smoothly into our society.

Take questions of insurance, for example – in the event of an accident, who do you hold responsible? If the crash involves an artificially intelligent robot, do you blame its creator, or the robot that can think for itself?

It’s a problem that would apply to any autonomous robot large enough to do accidental or erroneous damage to humans or property, according to Sharkey. “[It’s] going to be the same with any robot in the public domain that’s independent. Who’s accountable? Who’s responsible?”

There would also be the issue of which humans associated with the robot would be blamed for any misuse…

“There could be a very long chain of accountability,” he added. “The manufacturer, the person who deployed it, the person who’s using it currently. If I’m irresponsible with my autonomous car is it my fault? That’s one of the problems with it.”

And then there are the robots that are actually designed to damage people on purpose – there’s a whole raft of ethical OMGWTF wrapped up with military robotics (as we’ve discussed here before):

While robot fighters may remain on every military’s must-have list, the structures needed to define how such armed and potentially deadly autonomous agents should be used and not used are not yet in place.

“This is not science fiction anymore,” said Ron Chrisley, professor of philosophy at Sussex University. “This is really a pressing question – because in particular the US military is building more and more artificial systems that are going to be responsible for in some sense deciding whether or not to bomb co-ordinates or something. Now we need to get ethical principles in place to say, well, even if this system is in some sense responsible that doesn’t mean that this other system – namely the people who deployed it – are not also responsible.”

“I would hope that in the very near future a very rich field of machine ethics, machine-human ethics starts developing,” he added.

Looks like not everyone has heard about Roxxxy, however:

“I’m surprised frankly that the sex industry hasn’t yet cottoned on to robotics,” the University of the West of England’s Winfield said.

“For better or for worse, whatever your opinion on the subject, it is true that the sex industry has been responsible for a good deal of innovation on the internet, in terms of web technologies and so on,” he added.

Sex with robots is inevitable, in Sheffield University’s Sharkey’s view; marriage, however, is not, whatever fellow AI researcher David Levy might claim.

“I don’t agree with him that people will marry robots, except slightly perverted people. I can’t imagine you’d want to marry it but certainly robots will be used in the sex industry, there’s no doubt about that. And you could think of that as dystopian – I would. But people have sex with dolls, so you just make the doll move a little bit and you’ve got a robot.”

Levy’s theories sound a little weird at first, but he’s very persuasive – not in a sleazy way, but in the manner of someone who really seems to have thought things through. Only time will tell whether he’s right, of course… but I wouldn’t bet against him at the moment, for whatever that’s worth.

Last but not least, the Uncanny Valley of the title is a well-known buzz-phrase, at least among the geeky sort of circles that read this site… but it may also be a completely bankrupt theory. There’s certainly no research that supports it, according to Popular Mechanics:

Despite its fame, or because of it, the uncanny valley is one of the most misunderstood and untested theories in robotics. While researching this month’s cover story […] about the challenges facing those who design social robots, we expected to spend weeks sifting through an exhaustive supply of data related to the uncanny valley—data that anchors the pervasive, but only loosely quantified sense of dread associated with robots. Instead, we found a theory in disarray. The uncanny valley is both surprisingly complex and, as a shorthand for anything related to robots, nearly useless.

I know that I can vouch for the occasional creepiness of humanoid robots (not to mention metaverse avatars, which can be alarmingly ultrarealistic), but I guess it’s a tricky thing to quantify and measure… because it seems to be a predominantly remote effect:

According to all of the roboticists and computer scientists we interviewed, the uncanny is in short supply during face-to-face contact with robots. Two of the robots that inspire the most terror—and accompanying YouTube comments—are Osaka University’s CB2, a child-like, gray-skinned robot, and KOBIAN, Waseda University’s hyper-expressive humanoid. In person, no one rejected the robots. No one screamed and threw chairs at them, or smiled politely and slipped out to report lingering feelings of abject horror. In one case, a local Japanese newspaper tried to force the issue, bringing a group of seniors to visit the full-lipped, almost impossibly creepy-looking KOBIAN. One senior nearly cried, claiming that she felt like the robot truly understood her. A previously skeptical journalist wound up smiling and cuddling with the ominous little CB2. The only exception was a princess from Thailand, who couldn’t quite bring herself to help CB2 to its robotic feet.

Royalty notwithstanding, the uncanny effect appears to be an incredibly specific and specialized phenomenon: It seems to happen, when it does, remotely. In person, the uncanny vanishes. There’s nothing in the way of peer-reviewed evidence to support this, but then, there’s almost nothing to confirm the uncanny effect’s existence in the first place. As an unsupported theory that has morphed into a nerdy breed of urban legend, anecdotes are all we have to work with.

I expect we’ll discover a whole new load of phobias and neuroses when humanoid robots are more commonplace. How long it’ll be before that happens is an open question, but I’d suggest that the next decade will see robots invading our homes and workplaces in ever greater numbers. So smile and be friendly… but keep your multitool handy, OK?


Machine-making machines making more machine-making machines…

Paul Raven @ 29-10-2009

Via Michael Anissimov we hear that the second generation of the RepRap self-replicating machine, codenamed “Mendel”, is nearly ready for public release. Meaning that you could buy one (if you found someone who’d sell you one), but you could also build your own from the free open-source plans found at the RepRap website; the parts will cost you around US$650. [image of the RepRap ‘Mendel’ self-replicating machine from the RepRap wiki]

While such homebrew 3D printers aren’t currently much use for high-detail work and commercial finishes (like reproductions of your favourite World of Warcraft critters, maybe), they can make functional devices without any major problems. If there really is an increase in demand, you could probably assemble a Mendel and set it up to simply print a copy of itself. Then set up the copy to do the same, get a few generations of fully-functioning clones built, and then start churning ’em out and selling them to local buyers…
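The arithmetic behind “a few generations of clones” is plain exponential growth. A minimal sketch, assuming (generously) that every working Mendel prints exactly one complete copy of itself per generation:

```python
def reprap_population(generations, seed_machines=1):
    """Each existing machine prints one clone per generation,
    so the population doubles every generation: 2**g machines from one seed."""
    machines = seed_machines
    history = [machines]
    for _ in range(generations):
        machines *= 2           # every machine produces one copy of itself
        history.append(machines)
    return history

print(reprap_population(5))     # [1, 2, 4, 8, 16, 32]
```

Five generations from a single starter machine and you’ve got 32 of them — which is why the “then start selling them” part of the plan isn’t as daft as it sounds, parts costs permitting.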

Integrating electrical and electronic circuits into physical parts is beyond these current home fabbers, but where big industry leads, the homebrew crew will follow. Xerox has just invented a conductive silver ink that works without the need for a clean-room environment, meaning you can print off circuits onto a flexible substrate just like any other continuous-feed document. It wouldn’t take much for someone to buy some of that ink and find a way to use it in cheap and/or homebrew kit… hey presto, you’ve suddenly got the capability to replicate the electronic parts of a more complex self-replicating machine.

So go a bit further, integrate the two functions, have one machine that can print both inert blocks and electronics. Now we’re cooking! Now shrink ’em down, maybe speciate them so that different versions are specialised toward specific types of printing or assembly. But you’ll need to train them to pass off tasks they can’t do onto a machine that can, so you give them some sort of rudimentary swarm intelligence that communicates over something like Bluetooth… and then all of a sudden you’ve got an anthill of mechanical critters that have learned to procreate, cooperate, and deceive. DOOM.
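The “pass off tasks they can’t do onto a machine that can” idea is really just capability-based task routing. A toy sketch of the rudimentary swarm behaviour I’m imagining — every name here is invented, and the Bluetooth transport is waved away entirely:

```python
class FabBot:
    """Hypothetical fabricator with a fixed set of capabilities."""
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = set(capabilities)

    def handle(self, task, swarm):
        if task in self.capabilities:
            return f"{self.name} did {task}"
        # Rudimentary swarm behaviour: hand the task to any peer that can do it.
        for peer in swarm:
            if peer is not self and task in peer.capabilities:
                return peer.handle(task, swarm)
        return f"nobody in the swarm can do {task}"

swarm = [FabBot("printer-1", {"print_plastic"}),
         FabBot("printer-2", {"print_circuit"}),
         FabBot("assembler", {"assemble"})]

print(swarm[0].handle("print_circuit", swarm))   # printer-2 did print_circuit
```

No anthill intelligence required — just a lookup and a handoff. The “learned to deceive” part of my DOOM scenario is left as an exercise for the reader.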

Yeah, I know, it’s not very likely – but allow a guy a flight of robo-dystopian fancy on a Thursday, why don’t you? 🙂


Learning to love (or hate) emotional machines

Paul Raven @ 06-07-2009

Ninety percent of human communication is non-verbal, so the old cliche goes – and as such computer science types are constantly looking for new ways to widen the bandwidth between ourselves and our machines. Currently making a comeback is the notion of computers that can sense a human’s emotional state and act on it accordingly.

Outside of science fiction, the idea of technology that reads emotions has a brief, and chequered, past. Back in the mid-1990s, computer scientist Rosalind Picard at the Massachusetts Institute of Technology suggested pursuing this sort of research. She was greeted with scepticism. “It was such a taboo topic back then – it was seen as very undesirable, soft and irrelevant,” she says.

Picard persevered, and in 1997 published a book called Affective Computing, which laid out the case that many technologies would work better if they were aware of their user’s feelings. For instance, a computerised tutor could slow down its pace or give helpful suggestions if it sensed a student looking frustrated, just as a human teacher would.
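Picard’s tutor example boils down to a simple feedback loop. A hedged sketch — the thresholds and pace numbers are invented, and the frustration “sensor” is stubbed out as a plain number, since detecting the emotion is the genuinely hard part:

```python
def adjust_pace(current_pace, frustration):
    """Slow the lesson when the (hypothetical) sensor reports frustration,
    speed back up when the student seems comfortable."""
    if frustration > 0.7:       # thresholds invented for illustration
        return max(0.5, current_pace - 0.25), "slow down and offer a hint"
    if frustration < 0.3:
        return min(2.0, current_pace + 0.25), "pick up the pace"
    return current_pace, "carry on"

pace, action = adjust_pace(1.0, frustration=0.9)
print(pace, action)             # 0.75 slow down and offer a hint
```

A human teacher does this without thinking; the whole affective-computing bet is that the input to a loop like this can be sensed reliably enough to act on.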

Naturally, there’s a raft-load of potential downsides, too:

“The nightmare scenario is that the Microsoft paperclip starts to be associated with anything from the force with which you’re typing to some sort of physiological measurement,” says Gaver. “Then it pops up on your screen and says: ‘Oh I’m sorry you’re unhappy, would you like me to help you with that?'”

I think I’m safe in saying no one wants to be shrunk by Clippy.

Emotion sensors could undermine personal relationships, he adds. Monitors that track elderly people in their homes, for instance, could leave them isolated. “Imagine being in a hurry to get home and wondering whether to visit an older friend on the way,” says Gaver. “Wouldn’t this be less likely if you had a device to reassure you not only that they were active and safe, but showing all the physiological and expressive signs of happiness as well?”

That could be an issue, but it’s not really the technology’s fault if people choose to over-rely on it. This is more worrying, though:

Picard raises another concern – that emotion-sensing technologies might be used covertly. Security services could use face and posture-reading systems to sense stress in people from a distance (a common indicator a person may be lying), even when they’re unaware of it. Imagine if an unsavoury regime got hold of such technology and used it to identify citizens who opposed it, says Picard.

That’s not really much of an imaginative stretch, at least not here in the CCTV-saturated UK. But the same research that enables emotional profiling will doubtless reveal ways to confuse or defeat it; perhaps some sort of meditation exercise could help control your physiology? Imagine the tools and techniques of the advanced con-man turned into survival skills for political dissidents…