Tag Archives: emotion

Cheer up, emo writer – maybe positive sf really could make you more positive.

Well, it turns out my mother may have been right after all* – listening to music with positive messages in the lyrics encourages consideration and empathetic behaviour in teenagers, according to research at the University of Sussex here in the UK. Apparently, people who listen to Michael Jackson’s “Heal the World” are more likely to help pick up some knocked-over pencils than those who’ve listened to a neutral or negative tune. [image by Vagamundos]

(I’ve obviously been emotionally mutilated by a lifetime of listening to hirsute and/or black-clad people torturing guitars… if presented with a bunch of pencils in the presence of Michael Jackson songs, my first instinct would be to jam one up each nostril and headbutt the nearest desk until I achieved release.)

But this throws an interesting light on Jetse de Vries’ call for optimistic science fiction: if the same psychology pertains to the written word as it does to music, perhaps science fiction readers (and writers) really would be more positive in their outlook if there were more stories written in such a mode.

[ * – This sentence is purely included for stylistic effect; as should be completely obvious, my mother was always right about everything. ]

Mandatory smile assessment

Dovetailing neatly as it does with yesterday’s mention of machines that can read emotion, I couldn’t resist mentioning this story about Japanese railway staff having their smiles scanned and rated out of 100 every morning before work:

For those with a below-par grin, one of an array of smile-boosting messages will pop up on the computer screen ranging from “you still look too serious” to “lift up your mouth corners”, according to the Mainichi Daily News.

[…]

Workers at Keihin Electric Express Railway will receive a printout of their daily smile which they will be expected to keep with them throughout the day to inspire them to smile at all times, the report added.

Nothing looks more awkward – or alarming – than a forced smile. [via SlashDot; image by cutglassdecanter]

Learning to love (or hate) emotional machines

Ninety percent of human communication is non-verbal, or so the old cliché goes – and as such, computer science types are constantly looking for new ways to widen the bandwidth between ourselves and our machines. Currently making a comeback is the notion of computers that can sense a human’s emotional state and act on it accordingly.

Outside of science fiction, the idea of technology that reads emotions has a brief, and chequered, past. Back in the mid-1990s, computer scientist Rosalind Picard at the Massachusetts Institute of Technology suggested pursuing this sort of research. She was greeted with scepticism. “It was such a taboo topic back then – it was seen as very undesirable, soft and irrelevant,” she says.

Picard persevered, and in 1997 published a book called Affective Computing, which laid out the case that many technologies would work better if they were aware of their user’s feelings. For instance, a computerised tutor could slow down its pace or give helpful suggestions if it sensed a student looking frustrated, just as a human teacher would.
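
Affective Computing is a book, not a code listing, so the following is purely my own toy sketch of that tutor example – a simple feedback loop, assuming some hypothetical sensor (camera, skin conductance, whatever) that hands us a frustration estimate between 0 and 1:

    # Toy affective-tutor loop; the sensor readings below are pretend values
    # standing in for whatever affect-detection a real system would use.
    def adjust_lesson(pace, frustration, threshold=0.7):
        # Ease off and offer a hint when the student seems frustrated.
        if frustration > threshold:
            return max(pace * 0.8, 0.5), "Hint: try breaking the problem into smaller steps."
        return pace, None

    pace = 1.0  # 1.0 = normal lesson speed
    for frustration in [0.2, 0.5, 0.9]:  # pretend readings from an affect sensor
        pace, hint = adjust_lesson(pace, frustration)
        print(pace, hint)

Trivial, of course – the hard bit is the sensing, not the pedagogy.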

Naturally, there’s a raft-load of potential downsides, too:

“The nightmare scenario is that the Microsoft paperclip starts to be associated with anything from the force with which you’re typing to some sort of physiological measurement,” says Gaver. “Then it pops up on your screen and says: ‘Oh I’m sorry you’re unhappy, would you like me to help you with that?'”

I think I’m safe in saying that no one wants to be shrunk by Clippy.

Emotion sensors could undermine personal relationships, he adds. Monitors that track elderly people in their homes, for instance, could leave them isolated. “Imagine being in a hurry to get home and wondering whether to visit an older friend on the way,” says Gaver. “Wouldn’t this be less likely if you had a device to reassure you not only that they were active and safe, but showing all the physiological and expressive signs of happiness as well?”

That could be an issue, but it’s not really the technology’s fault if people choose to over-rely on it. This is more worrying, though:

Picard raises another concern – that emotion-sensing technologies might be used covertly. Security services could use face and posture-reading systems to sense stress in people from a distance (a common indicator a person may be lying), even when they’re unaware of it. Imagine if an unsavoury regime got hold of such technology and used it to identify citizens who opposed it, says Picard.

That’s not really much of an imaginative stretch, at least not here in the CCTV-saturated UK. But the same research that enables emotional profiling will doubtless reveal ways to confuse or defeat it; perhaps some sort of meditation exercise could help control your physiology? Imagine the tools and techniques of the advanced con-man turned into survival skills for political dissidents…

Software and sentiments – language as battlefield

I consider myself pretty fortunate in that I don’t have to moderate the comments here at Futurismic with a heavy hand[1], but that’s down to matters of scale; there just aren’t enough active commenters here to allow severe flamewars to start. Moderating the discussion on a site like BoingBoing is a different matter entirely, and usually requires a layer of direct human interaction after the common-or-garden \/1/\9|2/\ spambots have been weeded out.

Those days may be nearing an end, however; New Scientist reports on a new breed of software agent that is programmed to analyse the tone and sentiment of written communication on the web:

The early adopters of these tools are the owners of big brand names in a world where company reputations are affected by customer blogs as much as advertising campaigns. A small but growing group of firms is developing tools that can trawl blogs and online comments, gauging the emotional responses brought about by the company or its products.

[…]

The abusive “flame wars” that plague online discussions are encouraged by the way human psychology plays out over the web, as we’ve explained before. Moderating such discussions can be a time-consuming job, needing much judgment to spot when a heated exchange crosses over into abuse.

Sentiment-aware software can help here too. One example is Adaptive Semantics’ JuLiA – a software agent based on a learning algorithm that has been trained to recognise abusive comments. “She” can take down or quarantine comments that cross a predetermined abuse threshold […]

Work is underway to expand JuLiA’s comprehension abilities – for example, to decide whether text is intelligent, sarcastic, or political in tone.
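
New Scientist doesn’t describe how JuLiA actually works under the hood, but the basic shape of such a moderator – a trained classifier plus a quarantine threshold – is easy to sketch. Everything below (the training comments, the threshold, the choice of scikit-learn) is my own invention for illustration, not Adaptive Semantics’ method:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Tiny invented training set: 0 = civil, 1 = abusive.
    comments = [
        "thanks, that's a thoughtful reply",
        "interesting post, I hadn't considered that",
        "you are an idiot and so is everyone here",
        "shut up, you worthless troll",
    ]
    labels = [0, 0, 1, 1]

    vec = TfidfVectorizer()
    clf = LogisticRegression().fit(vec.fit_transform(comments), labels)

    ABUSE_THRESHOLD = 0.8  # arbitrary; a real moderator would tune this carefully

    def moderate(comment):
        # Quarantine anything whose estimated abuse probability crosses the line.
        p_abuse = clf.predict_proba(vec.transform([comment]))[0, 1]
        return "quarantine" if p_abuse > ABUSE_THRESHOLD else "publish"

    print(moderate("what a lovely, insightful article"))

The hard part, naturally, is the training data and where you set that threshold: too low and you quarantine sarcasm, too high and the trolls walk straight through.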

That’s all well and good, and it’ll probably work for a while – but much like anything else, it’ll be seen as a challenge to exactly the sort of people it’s designed to filter, and we’ll have another software arms race on our hands – albeit one initially played for much lower stakes than the virus/anti-virus game.

But look here a moment:

Another firm, Lexalytics, uses sentiment analysis to influence what people say before it is too late. It can identify which “good news” messages from company executives have the greatest effect on stock price. These results can then be used to advise certain people to speak out more, or less, often, or to gauge the likely effectiveness of a planned release.
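
Lexalytics doesn’t publish its methods either, but the core measurement presumably reduces to something like this: score each executive statement for sentiment, then see how those scores line up with the subsequent price moves. A hand-waving sketch, with every number invented:

    import numpy as np

    # Invented data: a sentiment score for each "good news" statement,
    # and the share price move (%) over the following day.
    sentiment = np.array([0.9, 0.2, 0.7, -0.3, 0.5])
    next_day_move = np.array([1.8, 0.1, 1.2, -0.9, 0.4])

    r = np.corrcoef(sentiment, next_day_move)[0, 1]
    print(f"sentiment/price correlation: {r:.2f}")

(In practice you’d want far more data and some care about confounding factors, but the principle really is that simple.)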

Now there’s a double-edged sword; if you can use that analysis to protect and strengthen a stock price, someone can surely use it for exactly the opposite. And even beyond the battlefields of the trading floors and corporate boardrooms, there are plenty of folk who could find a use for software that could advise them on how to make their communications less offensive or incendiary… or more so, if the situation demanded it.

We live in the communication age, so I guess it’s inevitable that communication should become another new frontier for warfare… but look at the bright side: slam poetry contests are going to become a lot more interesting for spectators and participants alike. 😉

[ 1 – That’s not a challenge or a complaint, OK? Thanks. 🙂 ]

Researchers Identify Fear Enzyme

Researchers working out of MIT’s Picower Institute for Learning and Memory have identified an enzyme, Cdk5, that can be inhibited in rats to prevent learned fear responses. The research has practical applications in the treatment of phobias and post-traumatic stress. This is just the latest in a string of neuroscience findings that are leading toward a near-complete mastery of how we feel and even what we think. A future is possible in which our descendants will look back at us in amazement that we ever felt an emotion we didn’t wish to feel.