Tag Archives: technology

Learning to love (or hate) emotional machines

Ninety per cent of human communication is non-verbal, or so the old cliché goes – and as such, computer science types are constantly looking for new ways to widen the bandwidth between ourselves and our machines. Currently making a comeback is the notion of computers that can sense a human’s emotional state and act accordingly.

Outside of science fiction, the idea of technology that reads emotions has a brief, and chequered, past. Back in the mid-1990s, computer scientist Rosalind Picard at the Massachusetts Institute of Technology suggested pursuing this sort of research. She was greeted with scepticism. “It was such a taboo topic back then – it was seen as very undesirable, soft and irrelevant,” she says.

Picard persevered, and in 1997 published a book called Affective Computing, which laid out the case that many technologies would work better if they were aware of their user’s feelings. For instance, a computerised tutor could slow down its pace or give helpful suggestions if it sensed a student looking frustrated, just as a human teacher would.
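Picard’s tutor example boils down to a simple feedback rule: measure the student’s emotional state, then adjust behaviour. As a purely illustrative sketch – the frustration score, thresholds, and messages here are invented for the example, not taken from any real affective-computing system – it might look like this:

```python
# Toy "affective tutor" loop in the spirit of Picard's example.
# The frustration score is a made-up input in [0, 1]; a real system
# would derive it from sensors (facial expression, typing force,
# physiological measurements).

def tutor_response(frustration: float, pace: float) -> tuple[float, str]:
    """Adjust teaching pace based on a sensed frustration score."""
    if frustration > 0.7:
        # Student looks frustrated: slow down and offer help,
        # as a human teacher would.
        return pace * 0.5, "Let's slow down and try a hint."
    if frustration < 0.2:
        # Student seems comfortable: pick up the pace.
        return pace * 1.2, "You're doing well -- moving on."
    return pace, "Keep going."

new_pace, message = tutor_response(frustration=0.8, pace=1.0)
print(new_pace, message)  # 0.5 Let's slow down and try a hint.
```

The hard part, of course, is not this conditional logic but reliably producing the frustration score in the first place – which is exactly where the privacy worries below come in.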

Naturally, there’s a raft-load of potential downsides, too:

“The nightmare scenario is that the Microsoft paperclip starts to be associated with anything from the force with which you’re typing to some sort of physiological measurement,” says Gaver. “Then it pops up on your screen and says: ‘Oh I’m sorry you’re unhappy, would you like me to help you with that?’”

I think I’m safe in saying no one wants to be shrunk by Clippy.

Emotion sensors could undermine personal relationships, he adds. Monitors that track elderly people in their homes, for instance, could leave them isolated. “Imagine being in a hurry to get home and wondering whether to visit an older friend on the way,” says Gaver. “Wouldn’t this be less likely if you had a device to reassure you not only that they were active and safe, but showing all the physiological and expressive signs of happiness as well?”

That could be an issue, but it’s not really the technology’s fault if people choose to over-rely on it. This is more worrying, though:

Picard raises another concern – that emotion-sensing technologies might be used covertly. Security services could use face- and posture-reading systems to sense stress in people from a distance (a common indicator that a person may be lying), even when they’re unaware of it. Imagine if an unsavoury regime got hold of such technology and used it to identify citizens who opposed it, says Picard.

That’s not much of an imaginative stretch, at least not here in the CCTV-saturated UK. But the same research that enables emotional profiling will doubtless reveal ways to confuse or defeat it; perhaps certain meditation exercises could help you control your physiology? Imagine the tools and techniques of the advanced con-man turned into survival skills for political dissidents…

DARPA <3 Dune: miniature ornithopter in development

Chalk yet another one up to Frank Herbert; the DARPA people have just awarded a Phase II contract extension (whatever that means) to a company called AeroVironment so that they can continue developing their ‘Mercury’ Nano Air Vehicle ornithopter prototype. [via Hack-A-Day]

Ornithopters – which feature heavily in the Dune series – are aircraft propelled by flapping their wings like a bird rather than by rotors, propellers or jets. Check out the Mercury prototype in action:

A hashtag for genocide: Twitter, the Iran elections and the moral ambivalence of social media

We raised this subject in the wake of the Georgia revolution, but it’s worth bringing up again. In the light of Twitter’s starring role in the current election protests in Iran, there’s much talk of the power of social media as a catalyst and enabler for social change – but as Jamais Cascio points out, the morality of a tool depends on the people wielding it… and it’s not hard to imagine it being put to much darker uses, much as other media have been before.

Not because I have any sympathy for Iran’s government, I should hasten to say, or because I see any threat coming from this particular use of Twitter. It scares me because of how close it aligns with something I noted in my talk at Mobile Monday in Amsterdam earlier this month, an observation that happened almost by accident.

In noting the potential power of social networking tools for organizing mass change, I thought out loud for a moment about what kinds of dangers might emerge. It struck me, as I spoke, that there is a terrible analogy that might be applicable: the use of radio as a way of coordinating bloody attacks on rival ethnic communities during the Rwandan genocide in the early 1990s. I asked, out loud, whether Twitter could ever be used to trigger a genocide. The audience was understandably stunned by the question, and after a few seconds someone shouted, “No!” I could only hope that the anonymous reply was right, but I don’t think he was.

Certainly a point worth considering; no doubt there’ll be a backlash – against Twitter, or whatever the latest flavour-of-the-moment equivalent is at the time – once more people start asking the same questions as Cascio has. It should be a self-evident truth, but we need to remember that technology alone won’t make the world a better place; it’s up to us to use it in the right ways.