Tag Archives: interface

QWOP, GIRP and the Construction of Video Game Realism

 

1: A Problematic Concept

Whenever mainstream news outlets mention video games I cringe. I cringe because every time they move beyond their usual territory and reach out to an unfamiliar cultural milieu in an effort to appear plugged in, they invariably wind up making both themselves and that milieu look awful. The awfulness comes from the fact that journalists in unfamiliar territory tend to take authority figures at face value, and in the world of video games this generally results in precisely the sort of hyperbolic bullshit that makes video game journalism such an oxymoron.

Cortical coprocessors: an outboard OS for the brain

The last time I remember encountering the word “coprocessor” was when my father bought himself a 486DX system with all the bells and whistles, some time back in the nineties. Now it’s doing the rounds in this widely-linked Technology Review article about brain-function bolt-ons; it’s a fairly serious examination of the possibilities of augmenting our mind-meat with technology, and well worth a read. Here’s a snippet:

Given the ever-increasing number of brain readout and control technologies available, a generalized brain coprocessor architecture could be enabled by defining common interfaces governing how component technologies talk to one another, as well as an “operating system” that defines how the overall system works as a unified whole, analogous to the way personal computers govern the interaction of their component hard drives, memories, processors, and displays. Such a brain coprocessor platform could facilitate innovation by enabling neuroengineers to focus on neural prosthetics at an algorithmic level, much as a computer programmer can work on a computer at a conceptual level without having to plan the fate of every individual bit. In addition, if new technologies come along, e.g., a new kind of neural recording technology, they could be incorporated into a system, and in principle rapidly coupled to existing computation and perturbation methods, without requiring the heavy readaptation of those other components.
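The architecture is easier to picture in code. Here’s a minimal Python sketch of that idea, with invented Readout and Perturbation interfaces standing in for recording and stimulation technologies and a Coprocessor class playing the role of the “operating system” (all the names and details here are my own illustration, not the article’s):

    from abc import ABC, abstractmethod

    class Readout(ABC):
        """Common interface for any brain-recording technology."""
        @abstractmethod
        def sample(self) -> list[float]: ...

    class Perturbation(ABC):
        """Common interface for any brain-stimulation technology."""
        @abstractmethod
        def apply(self, command: list[float]) -> None: ...

    class Coprocessor:
        """The 'OS': routes readout data through an algorithm to a stimulator,
        without caring which concrete technologies are plugged in."""
        def __init__(self, readout: Readout, perturbation: Perturbation, algorithm):
            self.readout = readout
            self.perturbation = perturbation
            self.algorithm = algorithm  # neuroengineers work at this level only

        def step(self) -> None:
            self.perturbation.apply(self.algorithm(self.readout.sample()))

    class EEGHeadset(Readout):    # hypothetical adapter for one readout technology
        def sample(self) -> list[float]:
            return [0.0] * 64     # placeholder channel data

    class TMSCoil(Perturbation):  # hypothetical adapter for one control technology
        def apply(self, command: list[float]) -> None:
            pass                  # would drive the actual hardware

    Coprocessor(EEGHeadset(), TMSCoil(), algorithm=lambda signal: signal[:1]).step()

Swapping in a new recording technology then means writing one adapter class; the algorithm and the stimulation side need no readaptation, which is exactly the point of the analogy.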

Of course, the idea of a brain OS brings with it the inevitability of competing OSs in the marketplace… including a widely-used commercial product that needs patching once a week so that dodgy urban billboards can’t trojan your cerebellum and turn you into an unwitting evangelist for under-the-counter medicines and fake watches; an increasingly popular slick-looking solution with a price-tag (and aspirational marketing) to match; and a plethora of forked open-source systems whose proponents can’t understand why their geeky obsession with being able to adjust the tiniest settings effectively excludes the wider audience they’d love to reach. Those “I’m a Mac / I’m a PC” ads will get a whole new lease of remixed and self-referential life…

Gestural interface: like a Wacom tablet, just without the plastic bits

Via Slashdot, here’s a project from Potsdam University in which the clever boffins have built a user interface that requires only hand gestures as input:

We present Imaginary Interfaces, screen-less devices that allow users to perform spatial interaction with empty hands and without visual feedback. Unlike projection-based solutions, such as Sixth Sense, all “feedback” takes place in the user’s imagination. Users define the origin of an imaginary space by forming an L-shaped coordinate cross with their non-dominant hand. Users then point and draw with their dominant hand in the resulting space. The interaction is tracked by a tiny camera device clipped to the user’s clothing and pointed at the user’s hands.
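Strip away the hardware and the core trick is a change of basis: the corner of the non-dominant hand’s L is the origin, the thumb and index finger span the two axes, and the tracked fingertip gets expressed in those coordinates. A rough Python sketch of that transform, with invented 2D camera-space inputs (the actual Imaginary Interfaces tracking pipeline will differ):

    import numpy as np

    def to_imaginary_space(fingertip, thumb_tip, index_tip, corner):
        """Express a camera-space fingertip in the imaginary frame defined
        by the L-shaped non-dominant hand (corner = origin of the L)."""
        x_axis = thumb_tip - corner   # one arm of the L
        y_axis = index_tip - corner   # the other arm
        # Solve corner + a*x_axis + b*y_axis = fingertip for (a, b).
        basis = np.column_stack([x_axis, y_axis])
        a, b = np.linalg.solve(basis, fingertip - corner)
        return a, b

    corner = np.array([0.0, 0.0])
    thumb_tip = np.array([10.0, 0.0])
    index_tip = np.array([0.0, 8.0])
    print(to_imaginary_space(np.array([5.0, 4.0]), thumb_tip, index_tip, corner))
    # -> (0.5, 0.5): dead centre of the imaginary 'screen'

Because everything is measured relative to the L, the user can wander about and wave the whole rig around, and drawn strokes stay put in the imaginary space.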

A bit rough and ready, sure, but it’s early days. Bolt this onto AR (they both need similar face-mounted hardware, so convergence is pretty inevitable), and stuff gets weird real quick. Cities full of people wandering around, seemingly talking to themselves and waving their hands in gnomic gestures… it’d look like a city of mad magicians.

Or, y’know, like Burning Man or Glastonbury at 5am on a Saturday. 🙂

Touchscreen tech goes 3D

People keep doing clever stuff with touchscreen interfaces, despite a continuing dearth of products bigger than a smartphone that actually include one. Some chaps from the University of Potsdam have been working on making a Microsoft Surface touchscreen computer detect items that aren’t necessarily in direct contact with it:

Each Lumino block has a pattern on its base that identifies its 3D shape, and the Surface table can read them using its four internal cameras that peer up at the acrylic top. That means the computer can build up a 3D picture of what lies on its surface.

The Luminos can also make themselves known to the Surface when they’re stacked up, however. They are packed with fibre-optic threads that ferry the pattern of any block placed on top of another down to the screen. So, although a second storey Lumino isn’t in direct contact with the touch screen, the computer knows it’s there.

As blocks stack up, the risk increases that the patterns from different layers of Luminos will become too jumbled for the screen to interpret. But the fibre-optic bundles are angled so that the pattern visible to the screen at the bottom of a stack includes parts of the patterns of all its blocks. That can allow the screen to recognise stacks up to 10 blocks high.
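Here’s a toy Python model of that angled-fibre trick, assuming (purely for illustration) that each storey’s marker ends up in its own slice of the composite pattern visible at the table:

    # Invented encoding: slice i of the composite pattern carries the marker
    # of the block on storey i, which is roughly what the angled fibre-optic
    # bundles achieve physically.
    MAX_STACK = 10  # the reported recognition limit

    def compose_stack(block_ids):
        """What the table's cameras see under a stack, bottom storey first."""
        assert len(block_ids) <= MAX_STACK, "patterns would jumble beyond 10"
        footprint = [None] * MAX_STACK
        for storey, marker in enumerate(block_ids):
            footprint[storey] = marker
        return footprint

    def decode_stack(footprint):
        """Recover the stack from the composite pattern."""
        return [marker for marker in footprint if marker is not None]

    print(decode_stack(compose_stack(["cube", "arch", "cylinder"])))
    # -> ['cube', 'arch', 'cylinder']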

I really want some hardware like that for use as a combined coffee-table and workbench… though I think I’ll wait until someone other than Microsoft is making them.