Tag Archives: mind-machine interface

Techlepathy: decoding words from brain signals

Another piece slots into the mind-machine interface puzzle: via George Dvorsky comes news that University of Utah neuroboffins have decoded individual words from brain activity recorded by electrodes placed directly on the brain.

The University of Utah research team placed grids of tiny microelectrodes over speech centers in the brain of a volunteer with severe epileptic seizures. The man already had a craniotomy – temporary partial skull removal – so doctors could place larger, conventional electrodes to locate the source of his seizures and surgically stop them.

Using the experimental microelectrodes, the scientists recorded brain signals as the patient repeatedly read each of 10 words that might be useful to a paralyzed person: yes, no, hot, cold, hungry, thirsty, hello, goodbye, more and less.

Later, they tried figuring out which brain signals represented each of the 10 words. When they compared any two brain signals – such as those generated when the man said the words “yes” and “no” – they were able to distinguish brain signals for each word 76 percent to 90 percent of the time.

As always with this sort of story, though, it’s early days yet:

When they examined all 10 brain signal patterns at once, they were able to pick out the correct word any one signal represented only 28 percent to 48 percent of the time – better than chance (which would have been 10 percent) but not good enough for a device to translate a paralyzed person’s thoughts into words spoken by a computer.

“This is proof of concept,” Greger says. “We’ve proven these signals can tell you what the person is saying well above chance. But we need to be able to do more words with more accuracy before it is something a patient really might find useful.”
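If you’re curious what “picking out the correct word” involves under the hood, here’s a rough sketch in Python. The article doesn’t describe the Utah team’s actual features or classifier, so everything below (the per-electrode feature vectors, the trial counts, the off-the-shelf linear discriminant classifier) is invented for illustration; the point is simply why telling two words apart is so much easier than picking one word out of ten:

```python
# Illustrative sketch only: the study's real features and classifier
# aren't given in the article, so both are faked with synthetic data.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

WORDS = ["yes", "no", "hot", "cold", "hungry",
         "thirsty", "hello", "goodbye", "more", "less"]
N_TRIALS, N_ELECTRODES = 40, 16           # hypothetical counts

rng = np.random.default_rng(0)
# Pretend each word evokes a slightly different mean activity pattern
# across the electrode grid, buried in per-trial noise.
means = rng.normal(0.0, 1.0, (len(WORDS), N_ELECTRODES))
X = np.vstack([rng.normal(m, 3.0, (N_TRIALS, N_ELECTRODES)) for m in means])
y = np.repeat(np.arange(len(WORDS)), N_TRIALS)

clf = LinearDiscriminantAnalysis()

# Pairwise: distinguish just two words, e.g. "yes" vs "no"
# (the 76-90 percent figures).
mask = np.isin(y, [0, 1])
pair_acc = cross_val_score(clf, X[mask], y[mask], cv=5).mean()

# Ten-way: pick the right word out of all ten at once
# (the 28-48 percent figures; chance would be 10 percent).
full_acc = cross_val_score(clf, X, y, cv=5).mean()

print(f"pairwise: {pair_acc:.0%}   ten-way: {full_acc:.0%}")
```

Run it and the same shape of result falls out as in the study: the two-word accuracy sits comfortably above the ten-way figure, because a binary decision only has to find one separating boundary while the ten-way pick has to beat nine competing alternatives at once.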

So you’ll have to wait a little longer for that comfy little skull-cap that’ll read your as-yet-unwritten novel straight out of your head (worse luck). But proof-of-concept’s better than nothing, especially for a technology that – even comparatively recently – was considered to be pure science fiction.

The Tender Mash-up

Since I chose to write about things made of metal skins and electrical guts in November, and then about warm-blooded carbon-based life in December, I couldn’t resist a combination. I call it the tender mash-up because the fusion of man and machine might result in an emotional being with a huge leap forward in physical capacity. The popular television and movie characters RoboCop and The Six Million Dollar Man may be coming close to reality.

Brain achieves motor memory with a prosthetic device

More progress has been made in the field of artificial telekinesis by researchers at the University of California, who have shown that the brains of macaque monkeys can learn how to manipulate a prosthetic device through thought alone:

…macaque monkeys using brain signals learned how to move a computer cursor to various targets. What the researchers learned was that the brain could develop a mental map of a solution to achieve the task with high proficiency, and that it adhered to that neural pattern without deviation, much like a driver sticks to a given route commuting to work.

“The profound part of our study is that this is all happening with something that is not part of one’s own body. We have demonstrated that the brain is able to form a motor memory to control a disembodied device in a way that mirrors how it controls its own body. That has never been shown before.”
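The excerpt doesn’t say what algorithm actually turned spikes into cursor movement, but the conceptual shape is easy to sketch: a linear mapping from neural firing rates to cursor velocity, held constant from session to session so that it’s the brain, not the decoder, that does the learning. A minimal toy version, with every number and the mapping itself invented for illustration:

```python
# Toy version of a fixed linear decoder; the team's real algorithm
# isn't described in the excerpt, so this mapping is invented.
import numpy as np

N_NEURONS = 32
rng = np.random.default_rng(1)

# An arbitrary but FIXED mapping from firing rates to 2-D cursor
# velocity. Because it never changes between sessions, the brain
# (not the decoder) does the adapting, eventually settling on a
# stable neural "route" to each target.
W = rng.normal(0.0, 0.1, (2, N_NEURONS))

def decode_velocity(firing_rates: np.ndarray) -> np.ndarray:
    """Map a vector of spike counts to an (x, y) cursor velocity."""
    return W @ firing_rates

# One mock trial: integrate decoded velocity into a cursor position.
cursor = np.zeros(2)
for _ in range(100):
    rates = rng.poisson(5.0, N_NEURONS)   # stand-in for recorded spikes
    cursor += 0.01 * decode_velocity(rates)
print("cursor ends up at:", cursor)
```

The fixed W is the design choice that matters here: because the mapping never shifts underneath the animal, the cursor can be treated like a new limb, and a stable motor memory for driving it can consolidate, which is exactly the “neural pattern without deviation” the researchers describe.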

This is an exciting development. Being able to control a prosthetic as if it were part of your own body would improve the lives of paraplegics, and could even extend baseline human abilities.

[from Physorg, via KurzweilAI][image from Physorg]

Your new designer brain

A fascinating article in New Scientist on neural prostheses and the possibility of a new source of inequality: between those who can afford to pay for technological mental enhancements and those who cannot:

People without enhancement could come to see themselves as failures, have lower self-esteem or even be discriminated against by those whose brains have been enhanced, Birnbacher says. He stops short of saying that enhancement could “split” the human race, pointing out that society already tolerates huge inequity in access to existing enhancement tools such as books and education.

The perception that some people are giving themselves an unfair advantage over everyone else by “enhancing” their brains would be socially divisive, says John Dupré at the University of Exeter, UK. “Anyone can read to their kids or play them music, but put a piece of software in their heads, and that’s seen as unfair,” he says. As Dupré sees it, the possibility of two completely different human species eventually developing is “a legitimate worry”.

But the news is not all bad, with the observation that the human brain is becoming ever more plastic and capable of adaptation:

Today, our minds are even more fluid and open to enhancement due to what Merlin Donald of Queens University in Kingston, Ontario, Canada, calls “superplasticity”, the ability of each mind to plug into the minds and experiences of countless others through culture or technology. “I’m not saying it’s a ‘group mind’, as each mind is sealed,” he says. “But cognition can be distributed, embedded in a huge cultural system, and technology has produced a huge multiplier effect.”

It is interesting to speculate what the long-term consequences of dense technological interconnectedness will be for the human condition. Even if precise neuroengineering proves difficult, neural prostheses offer a world of opportunity.

[via KurzweilAI][image from n1/the larch on flickr]

Mind control – non-invasive mind-machine interface

OK, so it’s crude, but it’s a start – boffins at the Honda Research Institute have built a helmet packed with electronics that enables its wearer to control the movement of a robot just by thinking about it:

The helmet is the first “brain-machine interface” to combine two different techniques for picking up activity in the brain. Sensors in the helmet detect electrical signals through the scalp in the same way as a standard EEG (electroencephalogram). The scientists combined this with another technique called near-infrared spectroscopy, which can be used to monitor changes in blood flow in the brain.

Brain activity picked up by the helmet is sent to a computer, which uses software to work out which movement the person is thinking about. It then sends a signal to the robot commanding it to perform the move. Typically, it takes a few seconds for the thought to be turned into a robotic action.

Honda said the technology was not ready for general use because of potential distractions in the person’s thinking. Another problem is that brain patterns differ greatly between individuals, and so for the technology to work brain activity must first be analysed for up to three hours.
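Honda’s actual signal processing is proprietary and the article gives no detail, but the general recipe for this kind of hybrid system is straightforward to sketch: concatenate features from both sensor types into one vector, then spend the calibration session training a classifier on that particular wearer’s brain. A toy version in Python, with the channel counts and imagined-movement classes all made up for illustration:

```python
# Toy hybrid BMI: EEG + NIRS features fused into one vector, with a
# per-wearer calibration step. Honda's real pipeline isn't public;
# channel counts and movement classes below are made up.
import numpy as np
from sklearn.linear_model import LogisticRegression

MOVES = ["right_hand", "left_hand", "tongue", "feet"]   # example classes
rng = np.random.default_rng(2)

def record_trial(move_idx: int) -> np.ndarray:
    """Stand-in for one trial: EEG band powers plus NIRS blood-flow
    changes, concatenated into a single feature vector."""
    eeg = rng.normal(move_idx, 1.0, 32)    # 32 EEG channels (invented)
    nirs = rng.normal(move_idx, 2.0, 8)    # 8 NIRS channels (invented)
    return np.concatenate([eeg, nirs])

# Calibration: many labelled trials, because brain patterns differ so
# much between individuals that a generic model won't transfer.
X = np.array([record_trial(i) for i in range(len(MOVES)) for _ in range(50)])
y = np.repeat(np.arange(len(MOVES)), 50)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# Live use: classify a fresh thought and command the robot.
thought = record_trial(2)
print("robot, move:", MOVES[int(clf.predict(thought.reshape(1, -1))[0])])
```

That per-wearer training step is where the hours-long analysis figure comes from: since every brain’s patterns differ, the classifier has to be fitted afresh for each new user.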

Well, a calibration period is inevitable; I expect they’ll shave that timescale down considerably, and in fairly short order. And then it’ll just be a case of waiting a decade or so before applying to be a mecha-warrior, like the strung-out teenagers in Ian McDonald’s story “Sanjeev and Robotwallah”.