Tag Archives: neuroscience

Techlepathy: decoding words from brain signals

Another piece slots into the mind-machine interface puzzle: via George Dvorsky comes news that University of Utah neuroboffins have decoded individual words from embedded electrode recordings of brain activity.

The University of Utah research team placed grids of tiny microelectrodes over speech centers in the brain of a volunteer with severe epileptic seizures. The man already had a craniotomy – temporary partial skull removal – so doctors could place larger, conventional electrodes to locate the source of his seizures and surgically stop them.

Using the experimental microelectrodes, the scientists recorded brain signals as the patient repeatedly read each of 10 words that might be useful to a paralyzed person: yes, no, hot, cold, hungry, thirsty, hello, goodbye, more and less.

Later, they tried figuring out which brain signals represented each of the 10 words. When they compared any two brain signals – such as those generated when the man said the words “yes” and “no” – they were able to distinguish brain signals for each word 76 percent to 90 percent of the time.

As always with this sort of story, though, it’s early days yet:

When they examined all 10 brain signal patterns at once, they were able to pick out the correct word any one signal represented only 28 percent to 48 percent of the time – better than chance (which would have been 10 percent) but not good enough for a device to translate a paralyzed person’s thoughts into words spoken by a computer.

“This is proof of concept,” Greger says, “We’ve proven these signals can tell you what the person is saying well above chance. But we need to be able to do more words with more accuracy before it is something a patient really might find useful.”

So you’ll have to wait a little longer for that comfy little skull-cap that’ll read your as-yet-unwritten novel straight out of your head (worse luck). But proof-of-concept’s better than nothing, especially for a technology that – even comparatively recently – was considered to be pure science fiction.
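To make the gap between the pairwise and ten-way numbers above concrete, here's a toy decoding sketch. Everything in it is illustrative – the feature count, noise level, and nearest-centroid classifier are my own stand-ins, not the Utah team's actual method – but it shows why distinguishing two candidate words is so much easier than picking one word out of ten:

```python
import numpy as np

rng = np.random.default_rng(0)

WORDS = ["yes", "no", "hot", "cold", "hungry",
         "thirsty", "hello", "goodbye", "more", "less"]
N_FEATURES = 32   # hypothetical per-trial feature vector (e.g. electrode power values)
NOISE = 2.0       # trial-to-trial variability
N_TRIALS = 50     # repetitions of each word

# Each word gets a fixed (random) "template" pattern; trials are noisy copies.
templates = rng.normal(size=(len(WORDS), N_FEATURES))
trials = templates[:, None, :] + rng.normal(
    scale=NOISE, size=(len(WORDS), N_TRIALS, N_FEATURES))

def decode(trial, candidate_ids):
    """Nearest-centroid decoding: pick the closest word template."""
    dists = [np.linalg.norm(trial - templates[i]) for i in candidate_ids]
    return candidate_ids[int(np.argmin(dists))]

# Pairwise task: distinguish word 0 ("yes") from word 1 ("no") only.
pair_correct = sum(decode(t, [0, 1]) == 0 for t in trials[0]) + \
               sum(decode(t, [0, 1]) == 1 for t in trials[1])
print("pairwise accuracy:", pair_correct / (2 * N_TRIALS))

# Ten-way task: pick the right word out of all ten (chance = 10%).
all_ids = list(range(len(WORDS)))
ten_correct = sum(decode(t, all_ids) == w
                  for w in all_ids for t in trials[w])
print("10-way accuracy:", ten_correct / (len(WORDS) * N_TRIALS))
```

Run it and the pairwise score comes out well above the ten-way score, echoing the study's 76–90% versus 28–48% split: adding candidate words multiplies the ways a noisy signal can be misread.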

Singularity slapfight: yet more Kurzweil vs. Myers

In the interests of following up on my earlier post about PZ Myers’ take-down of Ray Kurzweil’s claims about reverse engineering the human brain, and of displaying a lack of bias (I really don’t have a horse in this race, but I still enjoy watching them run, if that makes any sense), here’s some aftermath linkage.

Kurzweil himself responds [via SentientDevelopments]:

Myers, who apparently based his second-hand comments on erroneous press reports (he wasn’t at my talk), goes on to claim that my thesis is that we will reverse-engineer the brain from the genome. This is not at all what I said in my presentation to the Singularity Summit. I explicitly said that our quest to understand the principles of operation of the brain is based on many types of studies — from detailed molecular studies of individual neurons, to scans of neural connection patterns, to studies of the function of neural clusters, and many other approaches. I did not present studying the genome as even part of the strategy for reverse-engineering the brain.

Al Fin declares that neither Kurzweil nor Myers understands the brain [via AcceleratingFuture]:

But is that clear fact of mutual brain ignorance relevant to the underlying issue — Kurzweil’s claim that science will be able to “reverse-engineer” the human brain within 20 years? In other words, Ray Kurzweil expects humans to build a brain-functional machine in the next 2 decades based largely upon concepts learned from studying how brains/minds think.

Clearly Kurzweil is not claiming that he will be able to understand human brains down to the most intricate detail, nor is he claiming that his new machine brain will emulate the brain down to its cell signaling proteins, receptors, gene expression, and organelles. Myers seems to become a bit bogged down in the details of his own objections to his misconceptions of what Kurzweil is claiming, and loses the thread of his argument — which can be summed up by Myers’ claim that Kurzweil is a “kook.”

But Kurzweil’s amazing body of thought and invention testifies to the fact that Kurzweil is probably no more a kook than any other genius inventor/visionary. Calling someone a “kook” is apparently considered clever in the intellectual circles which Mr. Myers’ and the commenters on his blog travel, but in the thinking world such accusations provide too little information to be of much use.

Zing! Now, back to Myers:

In short, here’s Kurzweil’s claim: the brain is simpler than we think, and thanks to the accelerating rate of technological change, we will understand its basic principles of operation completely within a few decades. My counterargument, which he hasn’t addressed at all, is that 1) his argument for that simplicity is deeply flawed and irrelevant, 2) he has made no quantifiable argument about how much we know about the brain right now, and I argue that we’ve only scratched the surface in the last several decades of research, 3) “exponential” is not a magic word that solves all problems (if I put a penny in the bank today, it does not mean I will have a million dollars in my retirement fund in 20 years), and 4) Kurzweil has provided no explanation for how we’ll be ‘reverse engineering’ the human brain. He’s now at least clearly stating that decoding the genome does not generate the necessary information — it’s just an argument that the brain isn’t as complex as we thought, which I’ve already said is bogus — but left dangling is the question of methodology. I suggest that we need to have a combined strategy of digging into the brain from the perspectives of physiology, molecular biology, genetics, and development, and in all of those fields I see a long hard slog ahead. I also don’t see that noisemakers like Kurzweil, who know nothing of those fields, will be making any contribution at all.
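Myers' penny analogy is easy to sanity-check with a few lines of compound-interest arithmetic; the 10% rate and the $1M target here are my own illustrative numbers, not figures from either party:

```python
# Exponential growth is only as powerful as its rate.
principal = 0.01          # one penny
years = 20
bank_rate = 0.10          # a (generous) 10% annual return

value = principal * (1 + bank_rate) ** years
print(f"penny at 10%/yr for 20 years: ${value:.2f}")    # ≈ $0.07

# Rate actually needed to turn a penny into $1,000,000 in 20 years:
needed = (1_000_000 / principal) ** (1 / years) - 1
print(f"rate required for $1M: {needed:.0%} per year")  # ≈ 151% per year
```

The point being that invoking "exponential" settles nothing until you argue for the exponent – a penny needs roughly a 151% annual return, year after year for two decades, to hit a million.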

And, a little later still, after linking to some (fairly insubstantial) snark:

There are other, perhaps somewhat more serious, rebuttals at Rennie’s Last Nerve and A Fistful of Science.

Now run along, little obsessive Kurzweilians, there are many other blogs out there that regard your hero with derision, demanding your earnestly clueless rebuttals.

Smacks a little of “this is beneath me”, doesn’t it… or possibly even “can’t win, won’t fight”. Maybe I’m being unfair to Myers, but he’s certainly never backed off this easily when it comes to atheism and Darwin, and just a few days ago he was full of piss and vinegar. (Which isn’t to say I think he’s definitely wrong, of course; just that I expected a rather more determined attack… not to mention less ad hominem and othering from someone who – quite rightfully – deplores such tactics when used by his usual opponents.)

Finally, George Dvorsky has a sort of condensed and sensationalism-free roadmap for AI from reverse engineering of the brain:

While I believe that reverse engineering the human brain is the right approach, I admit that it’s not going to be easy. Nor is it going to be quick. This will be a multi-disciplinary endeavor that will require decades of data collection and the use of technologies that don’t exist yet. And importantly, success won’t come about all at once. This will be an incremental process in which individual developments will provide the foundation for overcoming the next conceptual hurdle.

[…]

Inevitably the question as to ‘when’ crops up. Personally, I could care less. I’m more interested in viability than timelines. But, if pressed for an answer, my feeling is that we are still quite a ways off. Kurzweil’s prediction of 2030 is uncomfortably short in my opinion; his analogies to the human genome project are unsatisfying. This is a project of much greater magnitude, not to mention that we’re still likely heading down some blind alleys.

My own feeling is that we’ll likely be able to emulate the human brain in about 50 to 75 years. I will admit that I’m pulling this figure out of my butt as I really have no idea. It’s more a feeling than a scientifically-backed estimate.

That’s pretty much why Dvorsky is one of my main go-to sources for transhumanist commentary; he’s one of the few self-identified members of the movement (of those that I’ve discovered, at least) who’s honest enough to admit when he doesn’t know something for certain.

I suspect that with Myers’ withdrawal from the field, that’s probably the end of this round. But as I said before, the greater intellectual battle is yet to be fought out, and this is probably just one early ideological skirmish.

Be sure to stock up on popcorn. 😉

Reasons not to worry about brain enhancement drugs

Professor Henry Greely reckons it’s high time (arf!) that we stopped trying to ban cognitive enhancement drugs and focused our attention on developing rules governing their use [via SentientDevelopments]. It’s a pragmatic approach; as Greely points out, the current grey legality of “revision drugs” like Ritalin isn’t doing anything to stop their use, and as the pharmacological industry introduces more cognition-boosting chemicals onto the market (albeit ostensibly as treatments for various maladies of the mindmeat), that situation is unlikely to reverse itself.

Of course, lots of people are scared of the idea of brain enhancement, and there are some good reasons for that. But there are also some bad (or at least illogical) reasons. Take it away, Mr Greely:

There are at least three unsound reasons for concern: cheating, solidarity, and naturalness.

Many people find the assertion that enhancement is cheating to be convincing. Sometimes it is: If rules or laws ban an enhancement, then using it is cheating. But that does not help in situations where there are no rules or the rules are still being determined. The problem with viewing enhancements as cheating is that enhancements, broadly defined, are ubiquitous. If taking a cognitive-enhancement drug before a college entrance exam is cheating, what about taking a prep course? Using a computer program for test preparation? Reading a book about taking the test? Drinking a cup of coffee the morning of the test? Getting a good night’s sleep before the test? To say that direct brain enhancement is inherently cheating is to require a standard of what the “right” competition is. What would be the generally accepted standard in our complex and only somewhat meritocratic society?

The idea of enhancement as cheating is also related to the idea that enhancement replaces effort. Yet the plausible cognitive enhancements would not eliminate the need to study; they would just make studying more effective. In any event, we do not reward effort, we reward success. People with naturally good memories have advantages over others in organic chemistry exams, but they did not work for that good memory.

Some argue that enhancement is unnatural and threatens to take us beyond our humanity. This argument, too, suffers from a major problem. All of our civilization is unnatural. A fair speaker could not fly across a continent, take a taxi to an air-conditioned auditorium, and give a microphone-assisted PowerPoint presentation decrying enhancement as unnatural without either a sense of humor or a good argument for why these enhancements are different. Because they change our physical bodies? So do medicine, good food, clothing, and a hundred other unnatural changes. Because they change our brains? So does education. What argument justifies drawing the line here and not there? A strong naturalness argument against direct brain enhancements, in particular, has not been—and I think cannot be—made. Humans have constantly been changing our world and ourselves, sometimes for better and sometimes for worse. A golden age of unenhanced naturalness is a myth, not an argument.

I’m guessing that most readers here are open to the idea of cognitive enhancement (by whatever method)… but even so, what’s the most compelling argument you’ve heard against it?

Neuroeconomics

What do you do with a discipline or field of endeavour that’s getting a bit stale and dated? Slapping the prefix neuro- onto it seems to be popular, and here’s the latest example: no one trusts economics any more (well, almost no one), so maybe that trust can be restored by looking at how trust itself – and the neurochemical basis for such – acts as a fundamental human component of the system that old economic models don’t account for [via BigThink]. Confused? Yeah, me too.

Zak and his collaborators at Claremont Graduate University have found that oxytocin, a hormone produced in the brain that promotes human bonding, plays a powerful role in shaping how generous people are. He calls it “the moral molecule.” “It’s a whole different model,” Zak says. “It tells us why global commerce works — because there is a motivation to reciprocate.”

People release oxytocin (pronounced ok-si-toh-sun) in settings that promote feelings of trust and safety, Zak has found, and their behavior becomes more trusting and generous in return. He envisions workplaces structured to reinforce this cycle.

[…]

Although Zak preaches the power of markets, he strongly agrees that rational-actor models fall short. “Economists get a bad rap for doing what I call ‘imaginary economics,’” he says. “You sit in your office, imagine some situation and scribble down a model. You get excited about it, ship it to your friends and publish it in a journal. It has nothing to do with any problem in the world.

“What neuroeconomics does is put human beings back in the center of economics. I can go inside the brain and measure what’s happening.”

[…]

Vernon Smith showed that people are naturally more generous than decision theory would predict. But what would happen if the players’ moods were enhanced by oxytocin? Zak had some of his game subjects inhale oxytocin before playing the Trust Game. Remarkably, more than twice as many people on oxytocin sent all of their money to a stranger (versus control subjects who were administered a placebo).

This is compelling evidence that oxytocin helps us to decide whom to trust and when to reciprocate, Zak says. “Civilization is dependent on oxytocin,” he says. “You can’t live around people you don’t know intimately unless you have something that says, ‘Him I can trust, and this one I can’t trust.’”
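For anyone who hasn't met the Trust Game the quote leans on, the payoff rules are simple. This sketch follows the standard lab convention (the $10 endowment and tripled transfer are the usual Berg-style defaults, not figures from the article):

```python
# Standard investment/Trust Game payoffs: the sender's transfer is
# tripled before the receiver decides how much to send back.
ENDOWMENT = 10   # hypothetical lab default, not from the article
MULTIPLIER = 3   # the conventional tripling of the transfer

def trust_game(sent, returned):
    """Return (sender_payoff, receiver_payoff) for one round."""
    assert 0 <= sent <= ENDOWMENT
    pot = sent * MULTIPLIER            # transfer grows in transit
    assert 0 <= returned <= pot
    sender = ENDOWMENT - sent + returned
    receiver = pot - returned
    return sender, receiver

# A purely "rational" receiver returns nothing, so a rational sender
# sends nothing -- yet full trust plus an even split of the grown pot
# leaves both players better off:
print(trust_game(0, 0))     # (10, 0): the decision-theory prediction
print(trust_game(10, 15))   # (15, 15): full trust, even split
```

That gap between the decision-theory prediction and the mutually better outcome is exactly where Zak's oxytocin manipulation bites: dosed players sent the whole endowment far more often.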

Obviously we can’t just start dosing people up with neurochemicals in an attempt to make the world a better place (or could we?), but the theory is that we can build a social and economic environment where people are more likely to have their oxytocin levels raised, leading to a sort of virtuous reciprocal circle of generosity. But if you’re thinking it sounds like something of a utopian technofix, don’t worry – this Zak character is looking at the bigger picture:

“How can we make the world more trusting, more cooperative, more generous? It’s not all oxytocin. It’s a much bigger brain circuit. It’s the people interacting with you, it’s the environment within which you interact — all those things matter. We have to peel away the layers of the onion to figure out how all those things fit together.”

All well and good… provided the current system doesn’t break irreparably before we’ve peeled our metaphor. Er, onion.

Close conversation really is a meeting of minds

Behind the inevitable allusions to Star Trek, this is an interesting story: scientific evidence that the brain waves of someone listening closely to another person’s speech can synchronise with the speaker’s.

The evidence comes from fMRI scans of 11 people’s brains as they listened to a woman recounting a story.

The scans showed that the listeners’ brain patterns tracked those of the storyteller almost exactly, though trailed 1 to 3 seconds behind. But in some listeners, brain patterns even preceded those of the storyteller.

“We found that the participants’ brains became intimately coupled during the course of the ‘conversation’, with the responses in the listener’s brain mirroring those in the speaker’s,” says Uri Hasson of Princeton University.

Hasson’s team monitored the strength of this coupling by measuring the extent of the pattern overlap. Listeners with the best overlap were also judged to be the best at retelling the tale. “The more similar our brain patterns during a conversation, the better we understand each other,” Hasson concludes.
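The "trailing by 1 to 3 seconds" finding is essentially a lagged-correlation analysis. Here's a toy version of the idea – the signals, noise level, and two-sample delay are all my own illustration, and the real study used whole-brain fMRI maps rather than single time series:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: the "listener" signal is the "speaker" signal
# delayed by LAG samples, plus noise.
N = 200
LAG = 2                       # listener trails by 2 samples (~2 s at 1 s sampling)
speaker = rng.normal(size=N)
listener = np.roll(speaker, LAG) + 0.5 * rng.normal(size=N)
listener[:LAG] = rng.normal(size=LAG)   # discard wrapped-around samples

def lagged_corr(a, b, lag):
    """Correlation of a[t] with b[t + lag] (b trailing a by `lag` samples)."""
    if lag == 0:
        return np.corrcoef(a, b)[0, 1]
    return np.corrcoef(a[:-lag], b[lag:])[0, 1]

# Coupling strength peaks at the true delay, mirroring the trailing
# listener patterns Hasson's team reported.
corrs = {lag: lagged_corr(speaker, listener, lag) for lag in range(6)}
best = max(corrs, key=corrs.get)
print("best lag:", best)
```

Sweeping the lag and watching where the correlation peaks is also how you'd catch the surprising cases the article mentions, where a listener's patterns *precede* the speaker's: the peak would land at a negative lag.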

Apparently (and completely unsurprisingly) an unfamiliar language acts as a barrier to this synchronisation – if you can’t understand the person who’s speaking, you can’t “click” with them. This is probably the best argument for a single global language that I can think of… but I wonder if poor comprehension of the same language would produce similar results to a completely foreign language?