Techlepathy: decoding words from brain signals

Paul Raven @ 20-09-2010

Another piece slots into the mind-machine interface puzzle: via George Dvorsky comes news that University of Utah neuroboffins have decoded individual words from embedded-electrode recordings of brain activity.

The University of Utah research team placed grids of tiny microelectrodes over speech centers in the brain of a volunteer with severe epileptic seizures. The man already had a craniotomy – temporary partial skull removal – so doctors could place larger, conventional electrodes to locate the source of his seizures and surgically stop them.

Using the experimental microelectrodes, the scientists recorded brain signals as the patient repeatedly read each of 10 words that might be useful to a paralyzed person: yes, no, hot, cold, hungry, thirsty, hello, goodbye, more and less.

Later, they tried figuring out which brain signals represented each of the 10 words. When they compared any two brain signals – such as those generated when the man said the words “yes” and “no” – they were able to distinguish brain signals for each word 76 percent to 90 percent of the time.

As always with this sort of story, though, it’s early days yet:

When they examined all 10 brain signal patterns at once, they were able to pick out the correct word any one signal represented only 28 percent to 48 percent of the time – better than chance (which would have been 10 percent) but not good enough for a device to translate a paralyzed person’s thoughts into words spoken by a computer.

“This is proof of concept,” Greger says. “We’ve proven these signals can tell you what the person is saying well above chance. But we need to be able to do more words with more accuracy before it is something a patient really might find useful.”
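The coverage doesn’t describe the team’s actual decoding method, but the gap between pairwise and 10-way accuracy falls out of almost any classifier: telling two candidates apart is much easier than picking one word out of ten. Here’s a toy nearest-centroid sketch in Python – the synthetic “signals” and every parameter are invented for illustration, and the numbers it prints won’t match the study’s:

```python
import numpy as np

rng = np.random.default_rng(0)
n_words, n_features, n_trials = 10, 32, 40
noise = 2.0  # heavy noise, so the classes overlap as real neural data does

# Invent a "template" signal per word, plus noisy repetitions of each.
templates = rng.normal(size=(n_words, n_features))
trials = templates[:, None, :] + noise * rng.normal(
    size=(n_words, n_trials, n_features))

def classify(trial, candidates):
    """Label a trial with whichever candidate template lies closest."""
    dists = np.linalg.norm(templates[candidates] - trial, axis=1)
    return candidates[int(np.argmin(dists))]

# 10-way accuracy: pick among all ten words.
all_words = np.arange(n_words)
ten_way = np.mean([classify(t, all_words) == w
                   for w in range(n_words) for t in trials[w]])

# Pairwise accuracy: pick between just two words (e.g. "yes" vs "no").
pair = np.array([0, 1])
pairwise = np.mean([classify(t, pair) == w for w in pair for t in trials[w]])

print(f"pairwise: {pairwise:.2f}, 10-way: {ten_way:.2f}")
```

Run it and you’ll see the same qualitative pattern as the study: pairwise discrimination well above the 10-way score, and both comfortably above the 10 percent chance level.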

So you’ll have to wait a little longer for that comfy little skull-cap that’ll read your as-yet-unwritten novel straight out of your head (worse luck). But proof-of-concept’s better than nothing, especially for a technology that – even comparatively recently – was considered to be pure science fiction.


Stoned neural networks, wet computers and audio Darwinism

Paul Raven @ 13-01-2010

Here’s a handful of links from the weird and wonderful world of computer science…

First of all, Telepathic-critterdrug is described as “a controversial fork of the open source artificial-life sim Critterding, a physics sandbox where blocky creatures evolve neural nets in a survival contest. What we’ve done is to give these animals an extra retina which is shared with the whole population. It’s extended through time like a movie and they can write to it for communication or pleasure. Since this introduces the possibility of the creation of art, we decided to give them a selection of narcotics, stimulants and psychedelics. This is not in Critterding. The end result is a high-color cellular automaton running on a substrate that thinks and evolves, and may actually produce hallucinations in the user.”

You can download your own copy of this bizarre experiment to play with. Quite what it’s supposed to achieve (other than entertaining its creators) I’m not entirely sure… but then again, that’s what we tend to think about the reality we inhabit, so maybe there’s some sort of simulation-theory microcosm metaphor that could be applied here, eh?

Next up, wetware is about to make the transition from science fictional neologism to genuine branch of technological research; boffins at the University of Southampton are hosting an international collaboration aimed at making a chemical computer based on biological principles [via SlashDot].

The goal is not to make a better computer than conventional ones, said project collaborator Klaus-Peter Zauner […] but rather to be able to compute in new environments.

“The type of wet information technology we are working towards will not find its near-term application in running business software,” Dr Zauner told BBC News.

“But it will open up application domains where current IT does not offer any solutions – controlling molecular robots, fine-grained control of chemical assembly, and intelligent drugs that process the chemical signals of the human body and act according to the local biochemical state of the cell.”

And last but not least, DarwinTunes is an experiment by two ICL professors to see whether they can use genetic algorithms to “evolve” enjoyable music from chaos, using the feedback of human listeners [via MetaFilter]. The DarwinTunes project website is sadly lacking a page that explains the project in a nutshell (or at least one that’s easily located by a first-time visitor), but a bit of poking around in the early blog entries should reveal the details. Or you can just listen to their 500th-generation riffs and loops from the project, which is still running.
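The DarwinTunes site may not spell out its method up front, but audience-driven evolution follows the standard genetic-algorithm loop: rate, select, breed, mutate, repeat. A toy sketch in Python – everything here (the genome encoding, the stand-in fitness function) is invented for illustration, since the real project scores loops with aggregated human ratings rather than a formula:

```python
import random

random.seed(1)
GENOME_LEN = 16  # each gene might encode a note or rhythm step (invented)

def random_loop():
    return [random.randint(0, 11) for _ in range(GENOME_LEN)]  # 12 pitch classes

# Stand-in for human listeners: here "pleasant" just means notes drawn from
# a major scale. DarwinTunes itself uses real listeners' ratings instead.
MAJOR = {0, 2, 4, 5, 7, 9, 11}
def rating(loop):
    return sum(1 for note in loop if note in MAJOR) / len(loop)

def breed(a, b, mutation_rate=0.05):
    cut = random.randrange(GENOME_LEN)           # one-point crossover
    child = a[:cut] + b[cut:]
    return [random.randint(0, 11) if random.random() < mutation_rate else g
            for g in child]                      # occasional point mutation

population = [random_loop() for _ in range(50)]
for generation in range(100):
    population.sort(key=rating, reverse=True)
    survivors = population[:20]                  # "listeners" cull the rest
    population = survivors + [breed(random.choice(survivors),
                                    random.choice(survivors))
                              for _ in range(30)]

best = max(population, key=rating)
print(f"best rating after 100 generations: {rating(best):.2f}")
```

Start from pure noise, keep whatever the audience likes, and after enough generations the population drifts towards the “fitness function” – which in DarwinTunes’ case is human taste itself.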


"Mind-reading" machines beginning to appear

Edward Willett @ 12-03-2008

A neckband that picks up nerve signals and translates them into speech has been demonstrated for the first time. (Via NewScientistTech.)

With training, a user can send nerve signals to their vocal cords without making a sound; the neckband picks up those signals and relays them wirelessly to a computer, which converts them into words spoken by a voice synthesizer.

This same device has been used to let people control wheelchairs using their thoughts.

Currently the system, called Audeo, can only recognize a limited set of about 150 words and phrases, but by the end of the year there’s supposed to be an improved version without a vocabulary limit. Although it will be slower – it’s based on phonemes, not whole words – it will allow people to say whatever they want, and should be a boon to people who have lost the ability to speak due to disease or injury.

It’s not the only “mind-reading” technology that’s been in the news recently, either. Researchers at the University of California have developed a system that uses functional MRI data to decode information from the visual cortex. Using it, the scientists were able to figure out which of more than 100 previously unseen photographs subjects were looking at.

What lies at the end of that road? Possibly the ability to access dreams and memories – assuming the brain processes dreams in a way analogous to how it processes visual stimuli.

And then there’s the “mind-reading” car that monitors a driver’s brain activity and reduces the amount of information displayed on the dash during stressful periods. In tests, the system has sped up drivers’ reactions by as much as 100 milliseconds – equivalent to reducing braking distance by nearly three metres at 100 kilometres per hour.
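That braking-distance figure checks out with simple arithmetic: at 100 km/h you cover about 27.8 metres every second, so a 100-millisecond head start on the brake pedal is worth roughly 2.8 metres:

```python
speed_kmh = 100
speed_ms = speed_kmh * 1000 / 3600   # ~27.78 metres per second
reaction_gain_s = 0.100              # 100 milliseconds
distance_saved = speed_ms * reaction_gain_s
print(f"{distance_saved:.2f} metres")  # prints "2.78 metres"
```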

The phrase “I know what you’re thinking” has thus far only ever been spoken by one human to another, and only in a metaphorical sense.

Someday soon, our machines could make it literal.

(Image: Wikimedia Commons.)

[tags]technology, brain, telepathy, communication[/tags]