I consider myself pretty fortunate in that I don’t have to moderate the comments here at Futurismic with a heavy hand, but that’s down to matters of scale; there just aren’t enough active commenters here for severe flamewars to break out. Moderating the discussion on a site like BoingBoing is a different matter entirely, and usually requires a layer of direct human interaction after the common-or-garden \/1/\9|2/\ spambots have been weeded out.
Those days may be nearing an end, however; New Scientist reports on a new breed of software agent that is programmed to analyse the tone and sentiment of written communication on the web:
The early adopters of these tools are the owners of big brand names in a world where company reputations are affected by customer blogs as much as advertising campaigns. A small but growing group of firms is developing tools that can trawl blogs and online comments, gauging the emotional responses brought about by the company or its products.
The abusive “flame wars” that plague online discussions are encouraged by the way human psychology plays out over the web, as we’ve explained before. Moderating such discussions can be a time-consuming job, needing much judgment to spot when a heated exchange crosses over into abuse.
Sentiment-aware software can help here too. One example is Adaptive Semantics’ JuLiA – a software agent based on a learning algorithm that has been trained to recognise abusive comments. “She” can take down or quarantine comments that cross a predetermined abuse threshold […]
Work is underway to expand JuLiA’s comprehension abilities – for example, to decide whether text is intelligent, sarcastic, or political in tone.
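The article doesn’t say how JuLiA works under the hood, but the “learning algorithm trained on abusive comments, with a predetermined abuse threshold” idea can be sketched with something as simple as a naive Bayes text classifier. Everything below – the training comments, the 0.8 threshold, the function names – is invented for illustration, not anything Adaptive Semantics has published:

```python
# Toy sketch of a JuLiA-style abuse filter: a naive Bayes classifier
# trained on a handful of labelled comments, quarantining anything
# whose estimated abuse probability crosses a preset threshold.
# All data, names and numbers here are made up for illustration.
import math
from collections import Counter

def train(examples):
    """examples: list of (text, is_abusive) pairs -> per-class word counts."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, label in examples:
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals

def abuse_probability(text, counts, totals):
    """Naive Bayes with add-one smoothing; returns P(abusive | text)."""
    vocab = set(counts[True]) | set(counts[False])
    scores = {}
    for label in (True, False):
        score = math.log(0.5)  # flat prior over the two classes
        for word in text.lower().split():
            score += math.log((counts[label][word] + 1) /
                              (totals[label] + len(vocab)))
        scores[label] = score
    # Convert log-scores back into a normalised probability.
    m = max(scores.values())
    exp = {k: math.exp(v - m) for k, v in scores.items()}
    return exp[True] / (exp[True] + exp[False])

TRAINING = [
    ("you are an idiot and a troll", True),
    ("shut up you moron", True),
    ("interesting point thanks for sharing", False),
    ("great article i enjoyed the analysis", False),
]

THRESHOLD = 0.8  # comments scoring above this get quarantined

counts, totals = train(TRAINING)
for comment in ("you idiot troll", "thanks for the great analysis"):
    p = abuse_probability(comment, counts, totals)
    verdict = "quarantine" if p > THRESHOLD else "publish"
    print(f"{verdict}: {comment!r} (p_abuse={p:.2f})")
```

A real system would obviously need orders of magnitude more training data and something smarter than bag-of-words to catch sarcasm or tone – which is presumably exactly the comprehension work the article says is underway.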
That’s all well and good, and it’ll probably work for a while – but like anything else, it’ll be seen as a challenge by exactly the sort of people it’s designed to filter, and we’ll have another software arms race on our hands – albeit one initially played for much lower stakes than the virus/anti-virus game.
But look here a moment:
Another firm, Lexalytics, uses sentiment analysis to influence what people say before it is too late. It can identify which “good news” messages from company executives have the greatest effect on stock price. These results can then be used to advise certain people to speak out more, or less, often, or to gauge the likely effectiveness of a planned release.
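At its simplest, “identifying which good-news messages have the greatest effect on stock price” just means correlating a sentiment score for each speaker’s statements against subsequent price moves. Here’s a minimal, entirely hypothetical sketch of that idea – the speakers, scores and price moves are invented, and Lexalytics’ actual methods are surely far more sophisticated:

```python
# Hypothetical sketch of the Lexalytics idea: score each executive's
# statements for positive sentiment, then see whose scores track
# next-day stock moves most closely. All figures are invented.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented data: (sentiment scores of statements, % stock move next day)
history = {
    "CEO": ([0.9, 0.2, 0.7, 0.4], [1.8, -0.5, 1.1, 0.2]),
    "CFO": ([0.8, 0.3, 0.6, 0.5], [0.1, 0.3, -0.2, 0.4]),
}

for speaker, (sentiment, moves) in history.items():
    r = pearson(sentiment, moves)
    print(f"{speaker}: sentiment/price correlation {r:+.2f}")
```

On this made-up data the CEO’s upbeat statements track the share price closely while the CFO’s barely register – which is exactly the kind of result that would prompt advice about who should speak out more, or less, often.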
Now there’s a double-edged sword; if you can use that analysis to protect and strengthen a stock price, someone can surely use it for exactly the opposite. And even beyond the battlefields of the trading floors and corporate boardrooms, there are plenty of folk who could find a use for software that could advise them on how to make their communications less offensive or incendiary… or more so, if the situation demanded it.
We live in the communication age, so I guess it’s inevitable that communication should become another new frontier for warfare… but look at the bright side: slam poetry contests are going to become a lot more interesting for spectators and participants alike. 😉
[ 1 – That’s not a challenge or a complaint, OK? Thanks. 🙂 ]