Wearable computing: the state of the art

Paul Raven @ 02-08-2010

Martin Magnusson got bored of waiting for the cyberpunk future we were promised in the mid-eighties, so he decided to build his own wearable cyberdeck rig. The version pictured [ganked from this Wired article] is a little crude, perhaps (I quite like the did-it-myself workbench aesthetic of exposed cables, personally, though it’d be a nightmare in a combat situation), but he’s also managed to scrunch the bulk of it down into a little CD-case-shoulder-bag number for the more style-conscious geek-about-town.

Martin Magnusson's wearable computer

In case you’re wondering about battery life (which was my first question), Magnusson reckons he gets three hours of juice from four AA batteries, which is better than I’d have expected, though still not too awesome. Time to look at harvesting waste energy from the body, Mister Magnusson? 🙂
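For the curious, here’s the back-of-envelope version (my assumed cell specs, not Magnusson’s published figures): four NiMH AAs at roughly 2500 mAh and a nominal 1.2 V hold about 12 Wh between them, so three hours of runtime implies an average draw in the region of 4 W.

```python
# Back-of-envelope check; the cell capacity and voltage are my assumptions,
# not Magnusson's published figures.
cells = 4
capacity_ah = 2.5      # assumed ~2500 mAh per AA cell
volts = 1.2            # nominal NiMH cell voltage
runtime_h = 3.0        # Magnusson's reported runtime

energy_wh = cells * capacity_ah * volts
print(f"pack energy: {energy_wh:.0f} Wh")              # ~12 Wh
print(f"average draw: {energy_wh / runtime_h:.1f} W")  # ~4.0 W
```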


Cheaper, more open tablets: this is exactly why I had no interest in buying an iPad

Paul Raven @ 13-07-2010

No, I’m not about to start bitching about Apple’s flagship gizmo and what it can or can’t do (although, if you want to buy me a beer or two in meatspace, I’d be more than happy to give you my two uninformed but moderately passionate cents on that).

Instead, I’m just going to point to evidence of exactly what I’ve been saying would happen: that within a very short amount of time after the iPad’s launch, you’d be able to get cheaper hardware with the same or greater functionality, and run a FOSS operating system on it that lets you get applications from anywhere you choose. So, via eBooknewser, here’s a guide to hacking the US$200 Pandigital Novel tablet device so it’ll run the Android operating system. Come Christmas time this year, there’ll be dozens of machines just like that kicking around all over the place, only cheaper still.

Speaking of Android, there’s a lot of noise about the way that Google are working on a kind of visual development system designed to let folk with minimal coding knowledge develop apps that will run on Android – again, a stark contrast to the walled-garden quality control of Apple’s development kits. Sure, the Android market will be flooded with crap and/or dodgy apps as a result… but letting the good stuff bubble to the top is what user rating systems and [editors/curators/gatekeepers] are for, right?


The Processor Wars

Paul Raven @ 12-07-2010

There are many ways to make a profit; one of them is to make a better product than the competition, but sometimes that alone is not enough, especially when you make the components of complex devices like computers. So maybe you could think about building handicaps into your product that make the competition’s hardware look inferior when used in the same system? There are suggestions that’s what nVidia has been doing:

PhysX is designed to make it easy for developers to add high-quality physics simulation to their games, so that cloth drapes the way it should, balls bounce realistically, and smoke and fragments (mostly from exploding barrels) fly apart in a lifelike manner. In recognition of the fact that game developers, by and large, don’t bother to release PC-only titles anymore, NVIDIA also wisely ported PhysX to the leading game consoles, where it runs quite well on console hardware.

If there’s no NVIDIA GPU in a gamer’s system, PhysX will default to running on the CPU, but it doesn’t run very well there. You might think that the CPU’s performance deficit is due simply to the fact that GPUs are far superior at physics emulation, and that the CPU’s poor showing on PhysX is just more evidence that the GPU is really the component best-equipped to give gamers realism.

Some early investigations into PhysX performance showed that the library uses only a single thread when it runs on a CPU. This is a shocker for two reasons. First, the workload is highly parallelizable, so there’s no technical reason for it not to use as many threads as possible; and second, it uses hundreds of threads when it runs on an NVIDIA GPU. So the fact that it runs single-threaded on the CPU is evidence of neglect on NVIDIA’s part at the very least, and possibly malign neglect at that.
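To see why the single-threaded finding is so damning, here’s a minimal sketch in Python (my own toy, nothing to do with NVIDIA’s actual PhysX internals) of a physics step in which each particle updates independently of the rest, so the identical maths can be farmed out to as many workers as you have cores:

```python
# Toy illustration of a "highly parallelizable" physics workload: every
# particle's update depends only on its own state, so the work splits
# cleanly across any number of workers.
from concurrent.futures import ProcessPoolExecutor

DT = 1.0 / 60.0        # one 60 fps frame
GRAVITY = -9.81        # m/s^2

def step_chunk(chunk):
    """Integrate one chunk of (height, velocity) particles independently."""
    out = []
    for pos, vel in chunk:
        vel += GRAVITY * DT
        pos += vel * DT
        if pos < 0.0:                  # crude floor bounce
            pos, vel = -pos, -vel * 0.8
        out.append((pos, vel))
    return out

def step_all(particles, workers=1):
    """Same maths either way; `workers` only decides how it's spread out."""
    if workers == 1:                   # the "PhysX on a CPU" case
        return step_chunk(particles)
    size = max(1, len(particles) // workers)
    chunks = [particles[i:i + size] for i in range(0, len(particles), size)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return [p for chunk in pool.map(step_chunk, chunks) for p in chunk]

if __name__ == "__main__":
    cloud = [(10.0, 0.0)] * 100_000    # 100k particles dropped from 10 m
    assert step_all(cloud, workers=1) == step_all(cloud, workers=8)
```

Nothing in the arithmetic forces a single thread; running it that way is a choice, which is rather the point of the quoted investigation.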

Whether it is malign remains to be seen (Occam’s Razor may well apply here, but then again it may not), but this is still an interesting development: in a world where most new inventions are part of larger systems, the battle for sales isn’t simply a matter of making your own product better. Granted, talking down the value of a competitor’s product has been a core strategy of public relations for years, but actually attenuating that value in deployment strikes me as something pretty new, if only because it wasn’t really possible before. Or can anyone suggest a situation where this has happened before?


Microsoft Kinect: The Call of the Womb

Jonathan McCalmont @ 30-06-2010

Blasphemous Geometries by Jonathan McCalmont


I have never been to the festival of hubris and chest-thumping that is the American video games industry’s yearly trade-fair E3 (a.k.a. ‘E Cubed’, a.k.a. ‘Electronic Entertainment Expo’), but the mere thought of it makes me feel somewhat ill. A friend of mine once attended a video game trade fair in Japan. He returned not with talk of games, but of the dozens of overweight middle-aged men who practically came to blows as they jostled for the best angle from which to take up-skirt photographs of the models manning the various booths.

As disturbing and sleazy as this might well sound, it still manages to cast Japanese trade shows in a considerably better light than a lot of the coverage that came out of E3. Every so often, an event or an article will prompt the collection of sick-souled outcasts known as ‘video game journalists’ into a fit of ethical navel-gazing: are their reviews too soft? are their editorial processes too open to commercial pressures? do they allow their fannishness to override their professional integrity? Oddly enough, these periodic bouts of hand-wringing never coincide with E3.

E3 is a principles-free zone as far as video game reporting is concerned: Journalists travel from all over the world to sit in huge conference halls where they are patronised to within an inch of their wretched lives by people from the PR departments of Nintendo, Microsoft and Sony. At a time when cynicism and critical thinking might allow a decent writer to cut through the bullshit and provide some insights into the direction the industry is taking, most games writers choose instead to recycle press releases and gush about games that are usually indistinguishable from the disappointing batch of warmed-over ideas dished out the previous year. At least the creepy Japanese guys had an excuse for wandering around a trade fair doused in sweat and sporting huge hard-ons.

Microsoft Kinect with Xbox 360

Continue reading “Microsoft Kinect: The Call of the Womb”


Computerising the music critics

Paul Raven @ 15-06-2010

Keeping with today’s vague (and completely unplanned) theme of critical assessments of cultural product, here’s a piece at New Scientist that looks at attempts to create a kind of expert system for music criticism and taxonomy. Well, OK – they’re actually trying to build recommendation engines, but in The Future that’s all a meatbag music critic/curator will really be, AMIRITE*?

So, there’s the melody analysis approach:

Barrington is building software that can analyse a piece of music and distil information about it that may be useful for software trying to compile a playlist. With this information, the software can assign the music a genre or even give it descriptions which may appear more subjective, such as whether or not a track is “funky”, he says.

Before any software can recommend music in this way, it needs to be capable of understanding what distinguishes one genre of music from another. Early approaches to this problem used tricks employed in speech recognition technology. One of these is the so-called mel-frequency cepstral coefficients (MFCC) approach, which breaks down audio into short chunks, then uses an algorithm known as a fast Fourier transform to represent each chunk as a sum of sine waves of different frequency and amplitude.
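For the technically curious, the MFCC recipe described there is compact enough to sketch: frame the audio, FFT each frame, pool the spectrum into mel-spaced bands, take logs, then a DCT. The sketch below uses toy parameters of my choosing; real pipelines add refinements like pre-emphasis and liftering.

```python
# A rough sketch of the MFCC idea, with toy parameters; not a
# production-quality feature extractor.
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, rate, frame_len=1024, hop=512, n_bands=26, n_coeffs=13):
    # 1. Break the audio into short, overlapping, windowed chunks.
    n_frames = 1 + (len(signal) - frame_len) // hop
    idx = np.arange(frame_len) + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hanning(frame_len)

    # 2. FFT each chunk: a sum of sines of varying frequency and amplitude.
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2

    # 3. Pool the spectrum into triangular bands spaced evenly in mel.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(rate / 2.0), n_bands + 2)
    bins = np.floor((frame_len + 1) * mel_to_hz(mel_pts) / rate).astype(int)
    fbank = np.zeros((n_bands, power.shape[1]))
    for b in range(n_bands):
        lo, mid, hi = bins[b], bins[b + 1], bins[b + 2]
        fbank[b, lo:mid] = np.linspace(0.0, 1.0, mid - lo, endpoint=False)
        fbank[b, mid:hi] = np.linspace(1.0, 0.0, hi - mid, endpoint=False)

    # 4. Log the band energies; a DCT then gives the cepstral coefficients.
    logmel = np.log(power @ fbank.T + 1e-10)
    n = np.arange(n_bands)
    dct = np.cos(np.pi / n_bands * (n[:, None] + 0.5) * np.arange(n_coeffs))
    return logmel @ dct

if __name__ == "__main__":
    rate = 22050
    t = np.arange(rate) / rate
    tone = np.sin(2.0 * np.pi * 440.0 * t)   # one second of A440
    print(mfcc(tone, rate).shape)            # -> (42, 13)
```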

And then the rhythm analysis approach (which, not entirely surprisingly, comes from a Brazilian university):

Unlike melody, rhythm is potentially a useful way for computers to find a song’s genre, da F. Costa says, because it is simple to extract and is independent of instruments or vocals. Previous efforts to analyse rhythm tended to focus on the duration of notes, such as quarter or eighth-notes (crotchets or quavers), and would look for groups and patterns that were characteristic of a given style. Da F. Costa reasoned that musical style might be better pinpointed by focusing on the probability of pairs of notes of given durations occurring together. For example, one style of music might favour a quarter note being followed by another quarter note, while another genre would favour a quarter note being succeeded by an eighth note.
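The duration-pair idea is simple enough to sketch in a few lines: count how often each note length follows each other note length, and the resulting probability table becomes the fingerprint a classifier could compare across genres. (Toy data below, and my own reduction of the idea; da F. Costa’s actual feature set is presumably richer.)

```python
# Toy sketch of duration-pair probabilities; the "styles" here are made up
# for illustration.
from collections import Counter

def duration_pairs(durations):
    """Probability of each (this note, next note) duration pair."""
    pairs = Counter(zip(durations, durations[1:]))
    total = sum(pairs.values())
    return {pair: count / total for pair, count in pairs.items()}

steady = ["quarter"] * 8             # a style favouring quarter -> quarter
bouncy = ["quarter", "eighth"] * 4   # a style favouring quarter -> eighth

print(duration_pairs(steady))  # {('quarter', 'quarter'): 1.0}
print(duration_pairs(bouncy))  # quarter->eighth ~0.57, eighth->quarter ~0.43
```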

But there’s a problem with this taxonomy-by-analysis approach:

Barrington, however, believes that assigning genres to entire tracks suffers from what he calls the Bohemian Rhapsody problem, after the 1975 song by Queen which progresses from mellow piano introduction to blistering guitar solo to cod operetta. “For some songs it just doesn’t make sense to say ‘this is a rock song’ or ‘this is a pop song’,” he says.

(Now, doesn’t that remind you of the endless debates over whether a book is science fiction or not? A piece of music can partake of ‘rockness’ and ‘popness’ at the same time, and in varying degrees; I’ve long argued that ‘science fiction’ is an aesthetic which a book can partake of, rather than a condition that a book either has or doesn’t have, but it’s not an argument that has made a great deal of impact.)
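Barrington’s point is easy to make concrete. Here’s a toy sketch (my own illustration, not his actual system) that scores each segment of a track against several genres and keeps the whole profile, rather than forcing a single label onto the track:

```python
# Toy illustration: per-segment genre scores (invented numbers) averaged
# into degrees of membership, instead of one hard label per track.
bohemian_rhapsody = [
    {"pop": 0.7, "rock": 0.2, "opera": 0.1},   # mellow piano intro
    {"pop": 0.1, "rock": 0.8, "opera": 0.1},   # blistering guitar solo
    {"pop": 0.1, "rock": 0.2, "opera": 0.7},   # cod operetta
]

def genre_profile(segments):
    """Average per-segment scores into degrees of genre membership."""
    genres = {g for seg in segments for g in seg}
    return {g: sum(seg.get(g, 0.0) for seg in segments) / len(segments)
            for g in genres}

print(genre_profile(bohemian_rhapsody))
# -> roughly {'pop': 0.3, 'rock': 0.4, 'opera': 0.3}: no single label fits
```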

These analyses of music are a fascinating intellectual exercise, certainly, but I’m not sure that these methods are ever going to be any more successful at taxonomy and recommendation than user-contributed rating and tagging systems… and they’ll certainly never be as efficient in terms of resources expended. And they’ll never be able to assess that most nebulous and subjective of properties, quality…

… or will they?

[ * Having just typed this rather flippantly, I am by no means certain that the future role of the critic/curator will be primarily one of recommendation. Will the open playing field offer more opportunity for in-depth criticism that people actually read and engage with for its own sake, or will it devolve into a Klausner-hive of “if you like (X), you’re gonna love (Y)”? ]

