I think we’ve got an early candidate for futurist talking-point of the week right here: researchers from New York’s Rensselaer Polytechnic Institute have developed artificial intelligence software that appears to possess a rudimentary “theory of mind” – a cognitive ability not manifest in human children until the age of four or five. [image from NewWorldNotes]
The researchers are using the software to control a Second Life avatar called Eddie:
“Two avatars controlled by humans stand with Eddie next to one red and one green suitcase. One human avatar then leaves and while they are gone the remaining human avatar moves the gun from the red suitcase into the green one.
Eddie is then asked where the character that left would look for the gun. The AI software correctly realises they will look in the red suitcase.”
Doesn’t sound too impressive at first, but it’s being hailed as a significant advance in the capabilities of artificially intelligent software by some – though others are less impressed, as Eddie’s reasoning engine has to be seeded with a simple logical statement before he can pass the test.
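For the curious, the kind of reasoning Eddie is doing here — the classic “false belief” test — can be sketched in a few lines. This is a toy illustration of the general technique (tracking each agent’s beliefs separately, updated only by events they witness), not the RPI team’s actual software; all the names and the structure are my own invention:

```python
# Toy false-belief tracking: an agent who leaves the room keeps a stale
# belief about the world, and we answer from *their* beliefs, not reality.

class Agent:
    def __init__(self, name):
        self.name = name
        self.present = True
        self.beliefs = {}  # e.g. {"gun": "red suitcase"}

def observe(agents, item, location):
    """Only agents present in the room update their beliefs."""
    for a in agents:
        if a.present:
            a.beliefs[item] = location

eddie, alice, bob = Agent("Eddie"), Agent("Alice"), Agent("Bob")
agents = [eddie, alice, bob]

observe(agents, "gun", "red suitcase")    # everyone sees the initial state
alice.present = False                     # Alice leaves the room
observe(agents, "gun", "green suitcase")  # Bob moves the gun; Alice misses it

# Where will Alice look? Answer from her belief state, not the world's.
print(alice.beliefs["gun"])  # -> red suitcase
print(bob.beliefs["gun"])    # -> green suitcase
```

The “seeding” objection amounts to saying that the rule in `observe` — beliefs update only on witnessed events — was handed to Eddie rather than learned, which is exactly what the skeptics are picking at.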
Even so, the Rensselaer guys reckon it’ll be great for making games with more realistic computer-controlled enemies … but I imagine there’s a number of people in the assorted military-industrial complexes of the globe thinking waaaay bigger than that right now.
Researchers at the University of California San Diego have discovered that it doesn’t take much to get toddlers to accept a robot as just another kid. (Via New Scientist.)
They put a 60 cm-tall robot called QRIO (pronounced “curio”) into a classroom with a dozen toddlers (video here) and programmed it to giggle when its head was touched, to occasionally sit down, and to lie down when its batteries died. A human operator could also make it look at a child, or wave as one went away. Over several weeks, the toddlers began interacting with QRIO pretty much the same way they did with other toddlers. They’d even help it up when it fell, and when its batteries died and it lay down, they’d cover it with a blanket and say “night, night.” (Awwww….)
There’s been a lot of recent research on trying to make the robot-human interaction better. Researchers have also taught a robot to dance to a beat, or to a partner’s movement, and are working on giving robots a sense of humor. Add in the martial-arts robots of a few years ago and that robot that conducted a Beethoven symphony, and you’ve got to think a true pass-for-human android a la Blade Runner may not be all that far away.
Whether you think that’s a good idea may depend on how much you took Terminator to heart.
(By the way, this is also the topic of my newspaper science column this week.) (Photo: J. Movellan et al., UCSD.)
[tags]robots, androids, artificial intelligence[/tags]
Via the indispensable TerraNova blog comes word that none other than the New York Times itself is running an article that talks about the Simulation Argument. This exceptionally science-fictional slice of philosophy, created by one Nick Bostrom, contends that the reality we exist within is in fact a simulation of extraordinary complexity, and we are just very cunningly scripted artificial intelligences within it.
What’s interesting is that John Tierney (for the NYT) seems more convinced of Bostrom’s theory than Bostrom himself. It’s a head-twistingly paradoxical piece of thinking, so much so that even George Dvorsky finds it makes his brain hurt – which makes me feel slightly better about being in the same situation.
But my main concern is this – if Bostrom and Tierney are correct, and this really is just a simulation, haven’t they now sent a rather obvious signal to the builders of the simulation that the inmates have seen behind the wizard’s curtain? What if the success of the simulation is dependent on our ignorance of it being one? But then, surely they’d have programmed against that contingency – code is law, after all … but that sounds like the arguments for the ineffability of a deity creating mankind with free will! Good grief … if anyone needs me, I’ll be slumped in the corner surrounded by Greg Egan novels and an empty bottle of gin.
A company called Powerset will be making a new natural language search technology available to the public in September. If the company’s claims are true (as credulously reported in the Technology Review), their search technology will be fundamentally different from the many algorithms that have been used in the past. Instead of developing results based on word and synonym matching, Powerset’s technology teases out the deep linguistic structures embodied in the search queries and in the searched text to make both more accurate and less obvious connections. Points to Powerset CEO Barney Pell for admitting that:
There was not one piece of technology that solved the problem… but instead, it was the unification of many theories and fragments that pulled the project together.
…and that most of the technology was licensed from Xerox PARC. If you’re interested you can sign up for the beta on their website. [kurzweilai]
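To see why structure matters, here’s a deliberately crude contrast between bag-of-words matching and matching on grammatical roles. This is nothing like Powerset’s actual engine — it’s just a toy to show that two sentences with identical words can mean opposite things, which keyword matching can’t detect:

```python
# Toy contrast: bag-of-words matching vs matching on (subject, verb, object).

def bag_of_words(sentence):
    return set(sentence.lower().split())

def triple(sentence):
    # Naive parse: assume a simple "subject verb object" word order.
    subj, verb, obj = sentence.lower().split()
    return (subj, verb, obj)

query = "dog bites man"
doc = "man bites dog"

# Keyword matching can't tell the two apart...
print(bag_of_words(query) == bag_of_words(doc))  # -> True

# ...but comparing grammatical roles can.
print(triple(query) == triple(doc))              # -> False
```

Real linguistic analysis is of course vastly harder than splitting on spaces — which is presumably why Pell needed “the unification of many theories and fragments” and a pile of Xerox PARC licenses.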
Despite the snide tone of my title, BBC’s article on the present and future of self-driving cars is an interesting overview of developments since the last running of the Grand Challenge, and a hint at what the world might look like after these cars go mainstream. I wonder if any of the contestants are using evolutionary computing techniques?