Tag Archives: expert systems

You can write your own connective gag for these two links

Via Hack-A-Day, the oddballs at Backyard Brains demonstrate a prototype technoexoskeletal assembly for the remote control of insect pests on the move. Shorter version: RoboRoach!


And via Kyle Munkittrick, (software) RoboLawyers:

The most basic linguistic approach uses specific search words to find and sort relevant documents. More advanced programs filter documents through a large web of word and phrase definitions. A user who types “dog” will also find documents that mention “man’s best friend” and even the notion of a “walk.”
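The expansion idea in that paragraph can be sketched in a few lines. This is a toy illustration, not Cataphora's actual method: the word web and the documents are invented, and a real system would use a much larger curated (or learned) map of related phrases.

```python
# Toy sketch of synonym-expanded document search: a query term is
# expanded through a hand-built web of related words and phrases,
# so a search for "dog" also surfaces documents that only mention
# "man's best friend". All data here is invented for illustration.

RELATED = {
    "dog": {"dog", "man's best friend", "walk", "leash"},
}

def expand(term):
    """Return the query term plus any related words/phrases."""
    return RELATED.get(term, {term})

def search(term, documents):
    """Return documents containing the term or any related phrase."""
    phrases = expand(term)
    return [doc for doc in documents
            if any(p in doc.lower() for p in phrases)]

docs = [
    "He took man's best friend for a walk.",
    "Quarterly earnings rose sharply.",
]
print(search("dog", docs))  # matches the first document only
```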

The sociological approach adds an inferential layer of analysis, mimicking the deductive powers of a human Sherlock Holmes. Engineers and linguists at Cataphora, an information-sifting company based in Silicon Valley, have their software mine documents for the activities and interactions of people — who did what when, and who talks to whom. The software seeks to visualize chains of events. It identifies discussions that might have taken place across e-mail, instant messages and telephone calls.
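The "who talks to whom" layer amounts to building a communication graph from message metadata. A minimal sketch, with entirely invented people and messages (Cataphora's actual software obviously does far more than this):

```python
# Toy sketch of the "who talks to whom" layer: collect message
# metadata into a directed graph keyed by (sender, recipient), so
# that chains of contact across different media can be inspected.
# The people, media and timestamps are invented for illustration.
from collections import defaultdict

messages = [
    ("alice", "bob",   "email", 1),
    ("bob",   "alice", "email", 2),
    ("bob",   "carol", "im",    3),
]

graph = defaultdict(list)  # (sender, recipient) -> [(medium, time), ...]
for sender, recipient, medium, t in messages:
    graph[(sender, recipient)].append((medium, t))

# Who does Bob contact, and over which media?
for (s, r), events in sorted(graph.items()):
    if s == "bob":
        print(r, [m for m, _ in events])
```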

Then the computer pounces, so to speak, capturing “digital anomalies” that white-collar criminals often create in trying to hide their activities.

For example, it finds “call me” moments — those incidents when an employee decides to hide a particular action by having a private conversation. This usually involves switching media, perhaps from an e-mail conversation to instant messaging, telephone or even a face-to-face encounter.
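Flagging those "call me" moments reduces, at its crudest, to spotting where a conversation abruptly changes medium. A toy sketch under that assumption, with invented data:

```python
# Toy sketch of flagging "call me" moments: within a time-sorted
# conversation between two people, flag the points where the medium
# switches (e.g. from e-mail to phone), since such switches can
# signal a discussion being taken off the record. Data is invented.

def media_switches(events):
    """events: list of (timestamp, medium), sorted by time.
    Return timestamps where the medium differs from the prior message."""
    flagged = []
    for (_, prev_medium), (t, medium) in zip(events, events[1:]):
        if medium != prev_medium:
            flagged.append(t)
    return flagged

conversation = [(1, "email"), (2, "email"), (3, "phone"), (4, "im")]
print(media_switches(conversation))  # -> [3, 4]
```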

I should probably stop being so publicly disparaging about the legal industries, really, lest these expert systems crawl all my online witterings and decide to set me up for a fall…

What Watson did next

Impressed by Watson’s Jeopardy! victory? Found yourself with the urge to build your own (scaled down) supercomputer artificial intelligence in your basement using nothing but off-the-shelf hardware and open-source software? IBM’s very own Tony Pearson has got your back. [via MetaFilter; please bear in mind that not all basements will be eminently suited to a research project of this scale]

Meanwhile, fresh from whuppin’ on us slow-brained meatbags, Watson’s seeking new challenges in the world of medicine [via BigThink]:

The idea is for Watson to digest huge quantities of medical information and deliver useful real-time information to physicians, perhaps eventually in response to voice questions. If successful, the system could help medical experts diagnose conditions or create a treatment plan.

… while other health-care technology can work with huge pools of data, Watson is the first system capable of usefully harnessing the vast amounts of medical information that exists in the form of natural language text—medical papers, records, and notes. Nuance hopes to roll out the first commercial system based on Watson technology within two years, although it has not said how sophisticated this system will be.

Ah, good old IBM. My father used to work for them back in the seventies and early eighties, and it’s kind of amusing to see that their age-old engineering approach of building an epic tool before looking for a use to put it to hasn’t changed a bit…

Computerising the music critics

Keeping with today’s vague (and completely unplanned) theme of critical assessments of cultural product, here’s a piece at New Scientist that looks at attempts to create a kind of expert system for music criticism and taxonomy. Well, OK – they’re actually trying to build recommendation engines, but in The Future that’s all a meatbag music critic/curator will really be, AMIRITE*?

So, there’s the melody analysis approach:

Barrington is building software that can analyse a piece of music and distil information about it that may be useful for software trying to compile a playlist. With this information, the software can assign the music a genre or even give it descriptions which may appear more subjective, such as whether or not a track is “funky”, he says.

Before any software can recommend music in this way, it needs to be capable of understanding what distinguishes one genre of music from another. Early approaches to this problem used tricks employed in speech recognition technology. One of these is the so-called mel-frequency cepstral coefficients (MFCC) approach, which breaks down audio into short chunks, then uses an algorithm known as a fast Fourier transform to represent each chunk as a sum of sine waves of different frequency and amplitude.
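The first step of that pipeline, chunking audio and transforming each chunk into frequency components, can be sketched in pure Python. A real system would use a proper FFT library and go on to apply mel-scale filter banks to get actual MFCCs; the naive DFT and synthetic signal below are just to show the chunking idea.

```python
# Toy sketch of the chunk-and-transform step described above: slice
# a signal into short fixed-size chunks, then represent each chunk
# as a sum of sinusoids of different frequency and amplitude via a
# (naive, O(n^2)) discrete Fourier transform. The "audio" here is a
# synthetic pure tone, invented for illustration.
import cmath
import math

def dft_magnitudes(chunk):
    """Naive DFT; returns the amplitude of each frequency bin."""
    n = len(chunk)
    return [abs(sum(x * cmath.exp(-2j * math.pi * k * i / n)
                    for i, x in enumerate(chunk))) / n
            for k in range(n)]

def chunked_spectra(signal, chunk_size):
    """Split the signal into fixed-size chunks and transform each."""
    return [dft_magnitudes(signal[i:i + chunk_size])
            for i in range(0, len(signal) - chunk_size + 1, chunk_size)]

# Synthetic "audio": a pure tone at 4 cycles per 32-sample chunk.
tone = [math.sin(2 * math.pi * 4 * i / 32) for i in range(64)]
spectra = chunked_spectra(tone, 32)
# The first chunk's spectrum should peak at frequency bin 4.
print(max(range(16), key=lambda k: spectra[0][k]))  # -> 4
```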

And then the rhythm analysis approach (which, not entirely surprisingly, comes from a Brazilian university):

Unlike melody, rhythm is potentially a useful way for computers to find a song’s genre, da F. Costa says, because it is simple to extract and is independent of instruments or vocals. Previous efforts to analyse rhythm tended to focus on the duration of notes, such as quarter or eighth-notes (crotchets or quavers), and would look for groups and patterns that were characteristic of a given style. Da F. Costa reasoned that musical style might be better pinpointed by focusing on the probability of pairs of notes of given durations occurring together. For example, one style of music might favour a quarter note being followed by another quarter note, while another genre would favour a quarter note being succeeded by an eighth note.
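The duration-pair idea described there is essentially a first-order transition table over note lengths. A minimal sketch of that reasoning, with invented duration sequences standing in for extracted rhythms:

```python
# Toy sketch of the duration-pair idea: estimate the probability of
# one note duration being followed by another, giving each piece a
# transition profile that could be compared across styles. The
# duration sequences here are invented for illustration.
from collections import Counter

def transition_probs(durations):
    """P(next duration | current duration) from a duration sequence."""
    pair_counts = Counter(zip(durations, durations[1:]))
    totals = Counter(durations[:-1])
    return {(a, b): c / totals[a] for (a, b), c in pair_counts.items()}

# Durations in beats: 1.0 = quarter note, 0.5 = eighth note.
steady = [1.0, 1.0, 1.0, 1.0]            # quarter always follows quarter
swung  = [1.0, 0.5, 1.0, 0.5, 1.0, 0.5]  # quarter always followed by eighth

print(transition_probs(steady))           # {(1.0, 1.0): 1.0}
print(transition_probs(swung)[(1.0, 0.5)])  # 1.0
```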

But there’s a problem with this taxonomy-by-analysis approach:

Barrington, however, believes that assigning genres to entire tracks suffers from what he calls the Bohemian Rhapsody problem, after the 1975 song by Queen which progresses from mellow piano introduction to blistering guitar solo to cod operetta. “For some songs it just doesn’t make sense to say ‘this is a rock song’ or ‘this is a pop song’,” he says.

(Now, doesn’t that remind you of the endless debates over whether a book is science fiction or not? A piece of music can partake of ‘rockness’ and ‘popness’ at the same time, and in varying degrees; I’ve long argued that ‘science fiction’ is an aesthetic which can be partaken of by a book, rather than a condition that a book either has or doesn’t have, but it’s not an argument that has made a great deal of impact.)

These analyses of music are a fascinating intellectual exercise, certainly, but I’m not sure that these methods are ever going to be any more successful at taxonomy and recommendation than user-contributed rating and tagging systems… and they’ll certainly never be as efficient in terms of resources expended. And they’ll never be able to assess that most nebulous and subjective of properties, quality…

… or will they?

[ * Having just typed this rather flippantly, I am by no means certain that the future role of the critic/curator will be primarily one of recommendation. Will the open playing field offer more opportunity for in-depth criticism that people actually read and engage with for its own sake, or will it devolve into a Klausner-hive of “if you like (X), you’re gonna love (Y)”? ]