Tag Archives: robots

Won’t somebody think of the robots?

Jamais Cascio is a sensitive soul; he doesn’t like seeing beasts of burden being abused and pushed around. Even robotic ones:

“My reaction to seeing this robot kicked paralleled what I would have had if I’d seen a video of a pack mule or a real big dog being kicked like that, and (from anecdotal conversations) I know I’m not the only one with that kind of immediate response. True, it wasn’t nearly as strong a shocked feeling for me as it would have been with a real animal, but it was definitely of the same character. It simply felt wrong.”

This throws an interesting light on the “robot rights” debates that keep surfacing. While I think we can all agree that a non-sentient machine doesn’t require the vote or union-mandated coffee breaks, this sort of psychological reaction to machines with a visual semblance of life may cause problems in early-adopter workplaces. [image by TwoBlueDay]

After all, even battle-hardened US Army colonels have been known to balk at sending machines to their doom.

Asimov’s Three Laws of Weigh-Ins

Isaac Asimov‘s robot stories were based around his famous Three Laws of Robotics, the first of which states that a robot may not injure a human being.

Asimov got lots of stories out of the many unanticipated behaviours his three laws might provoke in robots under various scenarios. His robots, though, were high-tech sentient creatures with “positronic brains.” I don’t think he ever contemplated applying his laws to everyday household products.

Designer Alice Wang has, though. Regarding Asimov’s First Law, she wonders, “Are there existing domestic objects that already break this law?” and comes up with a surprising answer: bathroom scales:

Scales, although they don’t inflict physical harm, have been subtly damaging us psychologically. Should objects like these exist in a complex society like ours, where people are more emotionally fragile?

She has therefore designed three scales that might reduce the emotional harm caused by the mean old scale. The first, called white lies, allows the person being weighed to lie to him or herself: the further back you stand on it, the lighter you become. “The user can gradually move closer and closer to reality,” she notes. (Via Gizmodo.)
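Wang doesn’t publish the scale’s internals, but the idea of a readout that shrinks the further back you stand is easy to caricature in a few lines of Python. Everything here is invented for illustration — the half-kilogram-per-centimetre “flattery rate” in particular is not from Wang’s design:

```python
def displayed_weight(actual_kg, offset_cm, shave_per_cm=0.5):
    """Toy model of the 'white lies' scale.

    actual_kg:    the true weight on the scale
    offset_cm:    how far back from the front edge the user stands
    shave_per_cm: invented rate at which the readout flatters the user

    The further back you stand, the lighter the display reads;
    standing at the front edge (offset 0) shows the truth.
    """
    return max(0.0, actual_kg - shave_per_cm * offset_cm)

# A user weighing 80 kg who stands 10 cm back sees a kinder number,
# and can "gradually move closer and closer to reality":
print(displayed_weight(80, 10))  # 75.0
print(displayed_weight(80, 0))   # 80.0
```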

The second, called half-truth, can only be read by a person who is not on the scale: its readout is at the front edge, perpendicular to the floor. “Suitable for cohabiting partners,” notes Wang.

Finally, there’s open secrets, which doesn’t show you your weight at all: it sends a text message to a specified mobile phone instead. The recipient of the message can then decide whether to share your weight with you immediately, the next time you meet, or not at all. “Suitable for pre-cohabiting couples,” says Wang.

Up next: the Heinleinian Starship Troopers scale, which will only consent to weigh you if you first serve two years in the military.

(Photo: Alice Wang.)

[tags]Asimov,robots,technology,design[/tags]

A soft spot for hardware – the future of human/robot intercourse

The robotic love-slave: it’s a science fiction trope as old as the hills, but that doesn’t stop it getting dragged out of retirement by the occasional academic … not to mention science and technology websites looking for a humorous Valentine’s Day item. Ahem. [Image by kaibara87]

Cosmos Magazine talks to David Levy, a professor of gender studies and artificial intelligence, about what he sees as the inevitability of robot lovers:

“[He] is convinced the demand is there and that market forces will provide the financial drive to overcome any technical – or psychological – obstacles. “It is only a matter of time before someone in the adult entertainment industry, which is awash in money, thinks, ‘Gee, I could make a pile of money’,” he says.”

The less charitable might possibly conclude that a similar line of thought may have given rise to Levy’s book …

I’m particularly fond of The Holy Machine by Chris Beckett, a science fiction novel that deals with a man falling in love with an android prostitute; it also has a whole lot to say about the conflict between science and religion, and a redemptive ending with zero schmaltz.

Any robot romance reading recommendations from the audience?

Robots evolve ability to lie…and be heroes

There’s been lots of discussion here about how we should treat robots; maybe we need to consider how robots will treat each other–and, potentially, us. (Via Gizmodo.)

Discover Magazine reminds us, in its review of the Top 100 Science Stories of 2007, that Dario Floreano and colleagues at the Laboratory of Intelligent Systems at the Swiss Federal Institute of Technology created robots with light sensors, rings of blue light and wheels, placed them in habitats containing both glowing “food patches” that recharged their batteries and patches of “poison” that drained them, and gave them software genes that determined how much they sensed light and how they responded. The first batch were programmed to light up randomly and to move randomly when they sensed light. The “genes” of the most successful first-generation robots were then recombined and given to the next generation, with a little random “mutation” thrown in.

By the 50th generation, they had robots that would light up to alert other robots when they found food or poison…and in one of the four colonies of robots they created, they had “cheater” robots that would lie and tell other robots that poison was food, while they rolled over to a food patch themselves without signalling at all. Other robots, though, were heroes: they would signal danger when they found the poison and die so other robots could safely obtain food.
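The paper describes the full experimental setup; the select-recombine-mutate loop underneath it, though, is a standard evolutionary algorithm whose shape can be sketched in a few lines of Python. Everything below (population size, mutation rate, and especially the toy fitness function) is invented for illustration and is not Floreano’s actual configuration:

```python
import random

GENOME_LEN = 30    # the real robots had roughly 30 "genes"
POP_SIZE = 20
GENERATIONS = 50   # the experiment ran for 50 generations
MUTATION_RATE = 0.05

def fitness(genome):
    # Toy stand-in for "energy gained at food minus energy lost at
    # poison": here we simply reward genes that are switched on.
    return sum(genome)

def select_parents(population):
    # Truncation selection: the fitter half becomes the breeding pool.
    ranked = sorted(population, key=fitness, reverse=True)
    return ranked[: len(ranked) // 2]

def crossover(a, b):
    # One-point recombination of two parent genomes.
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    # Flip each bit with small probability ("a little random mutation").
    return [g ^ 1 if random.random() < MUTATION_RATE else g
            for g in genome]

def evolve():
    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        parents = select_parents(population)
        population = [mutate(crossover(random.choice(parents),
                                       random.choice(parents)))
                      for _ in range(POP_SIZE)]
    return population

random.seed(1)
final = evolve()
print(max(fitness(g) for g in final))
```

With a fitness function this simple the population converges on all-ones; the remarkable thing about the real experiment is that the same loop, run on signalling behaviour in a shared habitat, produced liars and altruists without anyone programming them in.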

Liars and heroes in just 50 generations with just 30 genes. Maybe we really will soon need a robot psychologist à la Isaac Asimov’s character Susan Calvin to figure out why our robots do what they do.

The original research paper, published in Current Biology, is here, and there’s even a movie.

(Image: Laboratory of Intelligent Systems.)

[tags]robots, technology, ethics[/tags]

Robotic luggage follows you around, doesn’t eat annoying people…yet

No, it doesn’t have little feet, and it doesn’t occasionally eat annoying people, but otherwise this Russian-invented luggage that follows its owner around sure sounds like the Luggage belonging to Rincewind the Wizzard in Terry Pratchett’s novels: (Via Sci Fi Tech.)

Russian specialists intend to be the first in the world to launch mass production of robot suitcases able to follow in their owner’s footsteps. To make the mechanism follow its owner, the person need only put a sensor card in a pocket, and the suitcase will dutifully roll after them.

A gyroscope, light-sensitive detectors, and ultrasound and infrared sensors help the smart suitcase bypass obstacles, roll on inclined surfaces, and stop when it reaches the edge of a staircase or balcony. The robot suitcase’s battery charge is said to be enough for two hours of non-stop operation.

The suitcase’s developers (Robotronic.ru) have given the mechanism a human name: Tony.

The plan is for the suitcase to be available in 2009 for around $1,960 U.S. (Image from Robotronic.ru.)

[tags]robots, novels, technology, Russia[/tags]