Tag Archives: learning

Little lost robot

Robots have been mobile for decades, but they’ve only ever been able to go places for which they had a map or set of directions stored. That’s all changed thanks to a team of roboticists from Munich, who’ve built the first robot that can be unleashed into unfamiliar territory without a map. How does it complete its journey? It asks for directions, of course:

ACE uses cameras and software to detect humans nearby, based on their motion and upright posture. As it closes in on a likely helper, ACE’s “head” – bearing a touchscreen and a second screen displaying an animated mouth – turns to face the chosen person.

A speaker working in sync with the animated mouth is used to get the person’s attention and to ask them to touch the screen if they want to help. Willing guides are then asked to point the robot in the correct direction, with the response being analysed by posture recognition software. Direction set, ACE says “thank you” before trundling off.

Pointing, rather than telling the robot where to go, avoids confusion caused by the fact that the robot and the facing pedestrian each have a different sense of left and right.
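The interaction described above – spot a person, ask for help via the touchscreen, read the pointing gesture, then drive off – can be sketched as a tiny state machine. This is purely my own toy illustration (all names invented); the real ACE system uses vision-based person detection and posture-recognition software:

```python
# Hypothetical sketch of ACE's ask-for-directions loop.
# States and event names are invented for illustration only.
from enum import Enum, auto

class State(Enum):
    SEARCHING = auto()        # roaming, looking for an upright, moving human
    ASKING = auto()           # facing the person, playing the audio prompt
    READING_GESTURE = auto()  # helper touched the screen; watch their arm
    DRIVING = auto()          # heading set; trundle off

def step(state, events):
    """Advance the simplified interaction state machine by one tick."""
    if state is State.SEARCHING and events.get("person_detected"):
        return State.ASKING
    if state is State.ASKING and events.get("screen_touched"):
        return State.READING_GESTURE
    if state is State.READING_GESTURE and "pointing_direction" in events:
        return State.DRIVING  # heading taken from the pointed arm, not speech
    return state
```

Note that the heading comes from the gesture event alone, which is the point made above: a pointed arm sidesteps the left/right ambiguity between two parties facing each other.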

Although it took interactions with 38 people over a period of nearly five hours, ACE did eventually reach its destination. In fact, the team report that the robot was making very good progress until it reached a busy pedestrian area where its own popularity became a problem.

The current rarity of mobile robots in public spaces is obviously a big factor here; in a few more decades, we may barge past lost robots on the pavement as quickly and guiltily as we do homeless people or street-drunks.

The principle on display here is that of robot-human interaction in order to gather environmental data to complete a task or journey, which is all well and good, but it’s a proof-of-concept more than anything else. If all you needed was a robot that could navigate an unfamiliar cityscape, it’d be far easier to kit it out with good visual sensors and a GPS unit.

Hell knows this would be useless for military applications; if your super-killbot had to stop at every enemy checkpoint to ask the way to headquarters, I dare say the best place it would end up would be a long long way from anything at all… [story via regular commenter Evil Rocks; apologies to Paul McAuley for the headline]

2020 – Varsity’s end?

Unless they start to adapt quickly, colleges and universities could become irrelevant in little more than a decade. So claims Professor David Wiley, at any rate, using arguments that should be familiar to die-hard internet denizens and futurists:

America’s colleges and universities, says Wiley, have been acting as if what they offer — access to educational materials, a venue for socializing, the awarding of a credential — can’t be obtained anywhere else. By and large, campus-based universities haven’t been innovative, he says, because they’ve been a monopoly.

But Google, Facebook, free online access to university lectures, after-hours institutions such as the University of Phoenix, and virtual institutions such as Western Governors University have changed that. Many of today’s students, he says, aren’t satisfied with the old model that expects them to go to a lecture hall at a prescribed time and sit still while a professor talks for an hour.

Higher education doesn’t reflect the life that students are living, he says. In that life, information is available on demand, files are shared, and the world is mobile and connected. Today’s colleges, on the other hand, are typically “tethered, isolated, generic, and closed,” he says.

It’s the “open everything” argument, of course, but it’s given a certain extra weight in this instance because Wiley lectures at Brigham Young University, a small private university owned by the Mormon church; if they can see the writing on the wall and admit to it, then change is definitely afoot (although Wiley makes the point that establishments like Brigham Young offer “a religious education and the chance to meet and marry an LDS Church member”, which is effectively a kind of social network attraction, albeit a non-technological one). [via Technovelgy; image by Shaylor]

I’d go a few steps further, though. Wiley suggests that “universities would still make money, though, because they have a marketable commodity: to get college credits and a diploma, you’d have to be a paying customer.” I’m not sure how things stand in the US, but here in the UK we have a saturation of graduates with qualifications that are either oversupplied or effectively irrelevant to obtaining a job (in parallel with a decline in the number of science and engineering graduates); as further education has become much more expensive (as a result of the government’s efforts to make it available to all, ironically) its final product has become devalued. What most employers want now is experience and demonstrable ability – two things that a diploma does not guarantee in any way.

So perhaps we’ll see a return to something like the old guild apprenticeship system, wherein people work for a company at the same time as they take an assortment of modular courses with direct relevance to the job in question, moving up the ranks as they gain – and demonstrate – the specialist knowledge and skills required, at the pace which best suits them. There’d be nothing to prevent someone learning beyond their discipline if they so chose, or spending a lifetime in pursuit of academic achievement.

In fact, the more I think about it, the more I’m put in mind of “Phaedrus’s university” as described in Zen and the Art of Motorcycle Maintenance (yes, I do have a hippie streak, as if you hadn’t guessed), the most important component of which is the way it decouples education from coercion, obligation and standardised achievement metrics. Pirsig’s ideas were considered pretty radical in their time, and largely dismissed as unworkable; in the light of the ever-growing ubiquity of the web and free content, maybe it’s time to take another look.

Quantum cognition: spooky action in word recall

A fascinating article here at Physorg on how human beings remember and recall words. Researchers at Queensland University of Technology and the University of South Florida compare two ways of thinking about connections between similar words: 1) networks of similar words, and 2) something analogous to spooky action at a distance:

…the researchers suggest that the probability of a word being activated in memory lies somewhere between Spreading Activation (in which words are individually recalled based on individually calculated conceptual distance) and Spooky Activation at a Distance (in which the cue word simultaneously activates the entire associative structure).

Most likely, Spreading Activation underestimates the strength of activation, while Spooky Activation at a Distance overestimates the strength of activation.

The researchers are using quantum physics as a preexisting abstract framework for their mathematical models of how human beings remember:

In the new model, associative word recall probability depends on how strongly connected the associated words are to each other.

For instance, “Earth” and “space” are entangled in the context of “planet,” but “Earth” and “gas giant” may not be entangled (though “Jupiter” and “gas giant” may be).

Words that are entangled with many other words have a greater probability of being recalled, while words that are entangled with few or no other words have a smaller recall probability.
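As a rough illustration of that last idea – recall probability scaling with how densely a word is connected to the cue's other associates – here is a toy sketch. This is my own invention, not the researchers' actual quantum model; it only captures the qualitative claim that well-connected words are recalled more often than isolated ones:

```python
# Toy model: recall probability proportional to how many of the cue's
# other associates a word is linked to ("entangled" with), plus a small
# baseline so isolated words keep a non-zero chance of recall.

def recall_probabilities(associations):
    """associations maps each associate word to the set of words it links to."""
    words = list(associations)
    scores = {}
    for w in words:
        links = sum(1 for other in words
                    if other != w and other in associations[w])
        scores[w] = 1 + links  # baseline 1 + one point per link
    total = sum(scores.values())
    return {w: scores[w] / total for w in words}

# Associates of the cue "planet": earth, space, and (say) gas giant.
probs = recall_probabilities({
    "earth": {"space", "planet"},
    "space": {"earth", "planet"},
    "planet": {"earth", "space"},
    "gas giant": set(),  # linked to none of the others, as in the example above
})
```

Running this, "earth" (linked to two other associates) ends up with three times the recall probability of "gas giant" (linked to none) – a crude analogue of the overestimate/underestimate gap between the Spreading and Spooky models quoted above.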

At this stage this is theoretical, but the long-term consideration is for the development of AI and similar technologies:

As our information environment becomes more complex, we will need technology that can draw context-sensitive associations like the ones we would draw, but increasingly don’t as we lack the cognitive resources to do so.

Therefore, the ‘meanings’ processed by such technology should be motivated from a socio-cognitive perspective.

This kind of research is an example of an emerging field called “quantum cognition,” the aim of which is to use quantum theory to develop radically new models of a variety of cognitive phenomena, ranging from human memory to decision making.

Plenty of beef for the science-fictional burger bar.

[image from zeroinfluencer on flickr]


To brighten your Monday morning, here’s some speculation on robot morality – though not from one of the usual sources. Nick Carr bounces off a Times Online story about a report from the US Office of Naval Research which “strongly warns the US military against complacency or shortcuts as military robot designers engage in the ‘rush to market’ and the pace of advances in artificial intelligence is increased.”

Carr digs into the text of the report itself [pdf], which demonstrates a caution somewhat at odds with the usual media image of the military-industrial complex:

Related major research efforts also are being devoted to enabling robots to learn from experience, raising the question of whether we can predict with reasonable certainty what the robot will learn. The answer seems to be negative, since if we could predict that, we would simply program the robot in the first place, instead of requiring learning. Learning may enable the robot to respond to novel situations, given the impracticality and impossibility of predicting all eventualities on the designer’s part. Thus, unpredictability in the behavior of complex robots is a major source of worry, especially if robots are to operate in unstructured environments, rather than the carefully‐structured domain of a factory.

The report goes on to consider potential training methods, and suggests that some sort of ‘moral programming’ might be the only way to ensure that our artificial warriors don’t run amok when exposed to the unpredictable scenario of a real conflict. Perhaps Carr is a science fiction reader, because he’s thinking beyond the obvious answers:

Of course, this raises deeper issues, which the authors don’t address: Can ethics be cleanly disassociated from emotion? Would the programming of morality into robots eventually lead, through bottom-up learning, to the emergence of a capacity for emotion as well? And would, at that point, the robots have a capacity not just for moral action but for moral choice – with all the messiness that goes with it?

It’s a tricky question; essentially the military want to have their cake and eat it, replacing fallible meat-soldiers with reliable mechanical replacements that can do all the clever stuff without any of the attendant emotional trickiness that the ability to do clever stuff includes as part of the bargain. [image by Dinora Lujan]

I’d go further still, and ask whether that capacity for emotion and moral action actually obviates the entire point of using robots to fight wars. In other words, if robots are supposed to take the place of humans in situations we consider too dangerous to expend real people on, how close do a robot’s emotions and morality have to be to their human equivalents before it becomes immoral to use them in the same way?

The slaying of a beautiful hypothesis by an ugly fact

It’s always irritated me when the media runs a story about how standards in science education are falling and illustrates this by asking members of the Great British public a bunch of science-based trivia questions.

My beef with this habit is that science isn’t just about facts. It’s about the scientific method. It’s about a way of looking at and thinking about the world. It’s about empiricism, logic, rationality, trial and error, and being aware of your own limitations and biases.

Facts are fine, but it’s a mistake for anyone to identify science purely with fact-based knowledge.

This particular bugbear of mine has found some support in this study, which concludes (among other things) that scientific reasoning skills need to be developed alongside scientific knowledge:

Researchers tested nearly 6,000 students majoring in science and engineering at seven universities — four in the United States and three in China. Chinese students greatly outperformed American students on factual knowledge of physics — averaging 90 percent on one test, versus the American students’ 50 percent, for example.

But in a test of science reasoning, both groups averaged around 75 percent — not a very high score, especially for students hoping to major in science or engineering.

FWIW, I think inquiry-based learning should become its own subject, in the same way that physics, chemistry, and maths already are.

And since so many of the problems the world faces are interpreted through the prism of scientific thought it would be a Good Thing if the true nature of science were more generally understood.


[from Physorg][image from Gaetan Lee on flickr] [Also what does this have to do with SF? Who can say! Peace.][30/01/2009: Small edit – adding BBC News link to science video quiz]