Your futureshock phobia headline of the day, courtesy of Wired UK: “Snake-bots slither inside your body during surgery”. Aaaaaaaaaaaaaaaaaaaaargh!
Brixton reimagined as favela for robot workers
Urban futurism, offered without comment: via the incomparable BLDGBLOG, this image by the wonderfully-monicker’d Kibwe X-Kalibre Tavares is called “Southwyck House”, and is part of a set of similar images “of what Brixton could be like if it were to develop as a disregarded area inhabited by London’s new robot workforce […] the population has rocketed and unplanned cheap quick additions have been made to the skyline.”
[Click the image to see the original in bigger sizes on Flickr; all rights are reserved by Tavares, and the image is reproduced here under Fair Use terms. Please contact for immediate take-down if required.]
My first thought on seeing that? Kowloon Walled City. Dense urban populations lead inevitably to an increased density of marginal and/or interstitial regions…
Rebellious robots: how likely is the Terminator scenario?
Via George Dvorsky, Popular Science ponders the possibility of military robots going rogue:
We are surprisingly far along in this radical reordering of the military’s ranks, yet neither the U.S. nor any other country has fashioned anything like a robot doctrine or even a clear policy on military machines. As quickly as countries build these systems, they want to deploy them, says Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield in England: “There’s been absolutely no international discussion. It’s all going forward without anyone talking to one another.” In his recent book Wired for War: The Robotics Revolution and Conflict in the 21st Century, Brookings Institution fellow P.W. Singer argues that robots and remotely operated weapons are transforming wars and the wider world in much the way gunpowder, mechanization and the atomic bomb did in previous generations. But Singer sees significant differences as well. “We’re experiencing Moore’s Law,” he told me, citing the axiom that computer processing power will double every two years, “but we haven’t got past Murphy’s Law.” Robots will come to possess far greater intelligence, with more ability to reason and self-adapt, and they will also of course acquire ever greater destructive power.
[…]
It turns out that it’s easier to design intelligent robots with greater independence than it is to prove that they will always operate safely. The “Technology Horizons” report emphasizes “the relative ease with which autonomous systems can be developed, in contrast to the burden of developing V&V [verification and validation] measures,” and the document affirms that “developing methods for establishing ‘certifiable trust in autonomous systems’ is the single greatest technical barrier that must be overcome to obtain the capability advantages that are achievable by increasing use of autonomous systems.” Ground and flight tests are one method of showing that machines work correctly, but they are expensive and extremely limited in the variables they can check. Software simulations can run through a vast number of scenarios cheaply, but there is no way to know for sure how the literal-minded machine will react when on missions in the messy real world. Daniel Thompson, the technical adviser to the Control Sciences Division at the Air Force research lab, told me that as machine autonomy evolves from autopilot to adaptive flight control and all the way to advanced learning systems, certifying that machines are doing what they’re supposed to becomes much more difficult. “We still need to develop the tools that would allow us to handle this exponential growth,” he says. “What we’re talking about here are things that are very complex.”
Of course, the easiest way to avoid rogue killer robots would be to build fewer of them.
*tumbleweed*
Building robots building robot buildings…
Behold the potential future of building; construction workers, you may want to start training for your second career NOW.
Oh, so you’re not impressed by that? OK, so imagine large swarms of smaller versions of those quadrotor critters assembling constructions which themselves are autonomous, modular, quasi-sentient and self-repairing…
From BotJunkie, via George Dvorsky; cheers, George. 🙂
To obey Asimov’s First Law effectively, we must first break it
In the labs of the University of Ljubljana, Slovenia, researchers are forcing machines to inflict discomfort on humans. But it’s all in a good cause, you see – to make sure robots don’t harm humans by accident, you first have to establish where the threshold of unacceptable harm lies.
Borut Povše […] has persuaded six male colleagues to let a powerful industrial robot repeatedly strike them on the arm, to assess human-robot pain thresholds.
It’s not because he thinks the first law of robotics is too constraining to be of any practical use, but rather to help future robots adhere to the rule. “Even robots designed to Asimov’s laws can collide with people. We are trying to make sure that when they do, the collision is not too powerful,” Povše says. “We are taking the first steps to defining the limits of the speed and acceleration of robots, and the ideal size and shape of the tools they use, so they can safely interact with humans.”
Povše and his colleagues borrowed a small production-line robot made by Japanese technology firm Epson and normally used for assembling systems such as coffee vending machines. They programmed the robot arm to move towards a point in mid-air already occupied by a volunteer’s outstretched forearm, so the robot would push the human out of the way. Each volunteer was struck 18 times at different impact energies, with the robot arm fitted with one of two tools – one blunt and round, and one sharper.
[…]
The team will continue their tests using an artificial human arm to model the physical effects of far more severe collisions. Ultimately, the idea is to cap the speed a robot should move at when it senses a nearby human, to avoid hurting them.
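For the curious, that speed-capping idea reduces to a fairly simple piece of control logic. Here’s a minimal Python sketch of it – my own illustration, not the Ljubljana team’s actual code, and every threshold, number and function name in it is invented for the example:

```python
# A minimal sketch of the "cap the robot's speed when a human is nearby" idea
# described above. All values are illustrative placeholders, not the figures
# derived from the Ljubljana pain-threshold experiments.

FULL_SPEED_M_S = 1.0       # normal end-effector speed with no human nearby
SAFE_SPEED_M_S = 0.25      # hypothetical cap derived from pain-threshold tests
SLOWDOWN_RADIUS_M = 1.0    # start slowing down inside this distance
STOP_RADIUS_M = 0.1        # effectively stop if a human is this close


def capped_speed(distance_to_human_m: float) -> float:
    """Return the maximum allowed end-effector speed for a given human distance."""
    if distance_to_human_m <= STOP_RADIUS_M:
        return 0.0
    if distance_to_human_m >= SLOWDOWN_RADIUS_M:
        return FULL_SPEED_M_S
    # Inside the slowdown zone, interpolate between the pain-threshold-derived
    # cap (near contact) and full speed (at the edge of the zone).
    fraction = (distance_to_human_m - STOP_RADIUS_M) / (SLOWDOWN_RADIUS_M - STOP_RADIUS_M)
    return SAFE_SPEED_M_S + fraction * (FULL_SPEED_M_S - SAFE_SPEED_M_S)


if __name__ == "__main__":
    for d in (0.05, 0.2, 0.5, 1.5):
        print(f"human at {d:.2f} m -> speed cap {capped_speed(d):.2f} m/s")
```

The hard part, of course, isn’t the control logic; it’s knowing what number to put in that safe-speed slot, which is exactly what all the arm-whacking is meant to find out.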
I can sympathise with what they’re trying to achieve here, but it strikes me (arf!) as a rather bizarre methodology. If I were more cynical than I am*, I might even suggest that this is something of a non-story dolled up to attract geek-demographic clickthrough…
… in which case, I guess it succeeded. Fie, but complicity weighs heavy upon me this day, my liege!
[ * Lucky I’m not cynical, eh? Eh? ]