In the labs of the University of Ljubljana, Slovenia, researchers are forcing machines to inflict discomfort on humans. But it’s all in a good cause, you see – in order to ensure that robots don’t harm humans by accident, you have to assess what level of harm is unacceptable.
Borut Povše […] has persuaded six male colleagues to let a powerful industrial robot repeatedly strike them on the arm, to assess human-robot pain thresholds.
It’s not because he thinks the First Law of Robotics is too constraining to be of any practical use, but rather to help future robots adhere to the rule. “Even robots designed to Asimov’s laws can collide with people. We are trying to make sure that when they do, the collision is not too powerful,” Povše says. “We are taking the first steps to defining the limits of the speed and acceleration of robots, and the ideal size and shape of the tools they use, so they can safely interact with humans.”
Povše and his colleagues borrowed a small production-line robot made by Japanese technology firm Epson and normally used for assembling systems such as coffee vending machines. They programmed the robot arm to move towards a point in mid-air already occupied by a volunteer’s outstretched forearm, so the robot would push the human out of the way. Each volunteer was struck 18 times at different impact energies, with the robot arm fitted with one of two tools – one blunt and round, and one sharper.
[…]
The team will continue their tests using an artificial human arm to model the physical effects of far more severe collisions. Ultimately, the idea is to cap the speed at which a robot may move when it senses a nearby human, to avoid hurting them.
I can sympathise with what they’re trying to achieve here, but it strikes me (arf!) as a rather bizarre methodology. If I were more cynical than I am*, I might even suggest that this is something of a non-story dolled up to attract geek-demographic clickthrough…
… in which case, I guess it succeeded. Fie, but complicity weighs heavy upon me this day, my liege!
[ * Lucky I’m not cynical, eh? Eh? ]