Learning to love (or hate) emotional machines

Ninety per cent of human communication is non-verbal, or so the old cliché goes – so it’s no surprise that computer science types are constantly looking for new ways to widen the bandwidth between ourselves and our machines. Currently making a comeback is the notion of computers that can sense a human’s emotional state and respond accordingly.

Outside of science fiction, the idea of technology that reads emotions has a brief and chequered past. When computer scientist Rosalind Picard at the Massachusetts Institute of Technology suggested pursuing this sort of research back in the mid-1990s, she was greeted with scepticism. “It was such a taboo topic back then – it was seen as very undesirable, soft and irrelevant,” she says.

Picard persevered, and in 1997 published a book called Affective Computing, which laid out the case that many technologies would work better if they were aware of their users’ feelings. For instance, a computerised tutor could slow down its pace or give helpful suggestions if it sensed a student looking frustrated, just as a human teacher would.

Naturally, there’s a raft-load of potential downsides, too:

“The nightmare scenario is that the Microsoft paperclip starts to be associated with anything from the force with which you’re typing to some sort of physiological measurement,” says interaction designer William Gaver. “Then it pops up on your screen and says: ‘Oh I’m sorry you’re unhappy, would you like me to help you with that?’”

I think I’m safe in saying no one wants to be shrunk by Clippy.

Emotion sensors could undermine personal relationships, he adds. Monitors that track elderly people in their homes, for instance, could leave them isolated. “Imagine being in a hurry to get home and wondering whether to visit an older friend on the way,” says Gaver. “Wouldn’t this be less likely if you had a device to reassure you not only that they were active and safe, but showing all the physiological and expressive signs of happiness as well?”

That could be an issue, but it’s not really the technology’s fault if people choose to over-rely on it. This is more worrying, though:

Picard raises another concern – that emotion-sensing technologies might be used covertly. Security services could use face- and posture-reading systems to sense stress – a common indicator that a person may be lying – in people from a distance, without their knowledge. Imagine if an unsavoury regime got hold of such technology and used it to identify citizens who opposed it, says Picard.

That’s not much of a stretch of the imagination, at least not here in the CCTV-saturated UK. But the same research that enables emotional profiling will doubtless reveal ways to confuse or defeat it; perhaps some sort of meditation exercise could help you control your physiology? Imagine the tools and techniques of the advanced con-man turned into survival skills for political dissidents…