Keeping with today’s vague (and completely unplanned) theme of critical assessments of cultural product, here’s a piece at New Scientist that looks at attempts to create a kind of expert system for music criticism and taxonomy. Well, OK – they’re actually trying to build recommendation engines, but in The Future that’s all a meatbag music critic/curator will really be, AMIRITE*?
So, there’s the melody analysis approach:
Barrington is building software that can analyse a piece of music and distil information about it that may be useful for software trying to compile a playlist. With this information, the software can assign the music a genre or even give it descriptions which may appear more subjective, such as whether or not a track is “funky”, he says.
Before any software can recommend music in this way, it needs to be capable of understanding what distinguishes one genre of music from another. Early approaches to this problem used tricks employed in speech recognition technology. One of these is the so-called mel-frequency cepstral coefficients (MFCC) approach, which breaks down audio into short chunks, then uses an algorithm known as a fast Fourier transform to represent each chunk as a sum of sine waves of different frequency and amplitude.
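(For the curious, here’s a minimal sketch of that MFCC pipeline in Python, assuming the librosa library and a hypothetical local file “track.wav”; the choice of thirteen coefficients is a common convention rather than anything from the article.)

```python
# A minimal sketch of the MFCC pipeline described above, assuming the
# Python librosa library and a hypothetical local file "track.wav".
import librosa

# Load the audio as a mono waveform plus its sample rate.
y, sr = librosa.load("track.wav")

# Break the signal into short frames, take an FFT of each, map the spectrum
# onto the mel scale and keep the first 13 coefficients -- librosa wraps all
# of those steps inside feature.mfcc(). (13 is a common convention, not a
# figure from the article.)
mfccs = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)

# mfccs has shape (13, n_frames): one compact timbre "fingerprint" per chunk,
# which a downstream classifier could use to guess at genre or tags like "funky".
print(mfccs.shape)
```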
And then the rhythm analysis approach (which, not entirely surprisingly, comes from a Brazilian university):
Unlike melody, rhythm is potentially a useful way for computers to find a song’s genre, da F. Costa says, because it is simple to extract and is independent of instruments or vocals. Previous efforts to analyse rhythm tended to focus on the duration of notes, such as quarter or eighth-notes (crotchets or quavers), and would look for groups and patterns that were characteristic of a given style. Da F. Costa reasoned that musical style might be better pinpointed by focusing on the probability of pairs of notes of given durations occurring together. For example, one style of music might favour a quarter note being followed by another quarter note, while another genre would favour a quarter note being succeeded by an eighth note.
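(A toy sketch of that duration-pair idea: assume a song has already been reduced to a list of note durations in beats, then count how often each pair of successive durations occurs. The durations below are invented for illustration, not taken from any real track.)

```python
# A toy sketch of the duration-pair idea: reduce a song to a list of note
# durations (in beats) and count how often each pair of successive durations
# occurs. The durations below are invented for illustration.
from collections import Counter

# 1.0 = quarter note (crotchet), 0.5 = eighth note (quaver).
durations = [1.0, 1.0, 0.5, 0.5, 1.0, 0.5, 1.0, 1.0, 0.5, 0.5]

# Count each (duration, next duration) pair...
pair_counts = Counter(zip(durations, durations[1:]))

# ...and normalise the counts into probabilities: the style's "fingerprint".
total = sum(pair_counts.values())
for (a, b), count in sorted(pair_counts.items()):
    print(f"{a} -> {b}: {count / total:.2f}")
```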
But there’s a problem with this taxonomy-by-analysis approach:
Barrington, however, believes that assigning genres to entire tracks suffers from what he calls the Bohemian Rhapsody problem, after the 1975 song by Queen which progresses from mellow piano introduction to blistering guitar solo to cod operetta. “For some songs it just doesn’t make sense to say ‘this is a rock song’ or ‘this is a pop song’,” he says.
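(To make the problem concrete, here’s a hypothetical little sketch: label each segment of a track separately and you end up with a distribution of genres rather than one tidy tag. The per-segment labels are invented; a real system would produce them by running a classifier over each segment’s features.)

```python
# A hypothetical sketch of the Bohemian Rhapsody problem: label each segment
# of a track separately and you get a distribution of genres, not one tag.
# The per-segment labels are invented; a real system would produce them by
# running a classifier over each segment's features.
from collections import Counter

segment_labels = ["ballad", "ballad", "opera", "opera", "hard rock", "ballad"]

counts = Counter(segment_labels)
total = len(segment_labels)

# Report how much of the track fits each tag instead of forcing one answer.
for genre, n in counts.most_common():
    print(f"{genre}: {n / total:.0%} of segments")
```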
(Now, doesn’t that remind you of the endless debates over whether a book is science fiction or not? A piece of music can partake of ‘rockness’ and ‘popness’ at the same time, and in varying degrees; I’ve long argued that ‘science fiction’ is an aesthetic which a book can partake of, rather than a condition that a book either has or doesn’t have, but it’s not an argument that has made a great deal of impact.)
These analyses of music are a fascinating intellectual exercise, certainly, but I’m not sure these methods will ever be any more successful at taxonomy and recommendation than user-contributed rating and tagging systems… and they’ll certainly never be as efficient in terms of resources expended. And they’ll never be able to assess that most nebulous and subjective of properties, quality…
… or will they?
[ * Having just typed this rather flippantly, I am by no means certain that the future role of the critic/curator will be primarily one of recommendation. Will the open playing field offer more opportunity for in-depth criticism that people actually read and engage with for its own sake, or will it devolve into a Klausner-hive of “if you like (X), you’re gonna love (Y)”? ]