Tag Archives: research

Blindsight’s origins uncovered

No, not the (excellent) Peter Watts novel… but the neurological phenomenon for which it is named. Ars Technica boils down a new paper published in Nature:

The authors worked with two macaques that have small lesions in their primary visual cortexes, which leave them unable to respond to visual cues in a subset of their visual field. A fair amount of work went into defining precisely the areas within the visual field that were no longer effective, and confirming that stimuli in those areas could still induce activity (measured via functional MRI) in the remaining visual cortexes.

The authors then focused on a structure called the lateral geniculate nucleus (LGN), which acts as a relay point for signals travelling between the retina and the primary visual cortex. Other work had shown that the LGN also has projections to a number of secondary visual areas, suggesting that it may serve as a major hub in the visual system.

To test this suggestion, the authors injected the LGN with a chemical that activates the receptor for a major inhibitory signaling molecule (the chemical, THIP, is what’s termed a “GABA-A receptor agonist”). When the chemical is present, nerve cells receive a signal telling them to stop signaling, so this injection has the effect of shutting the LGN down entirely.

The treatment was highly effective. With the LGN shut down, visual stimuli that normally induce a blindsight response didn’t elicit any response from the visual centers of the macaques.

And here’s a blindness-related bonus story with some feel-good we-can-fix-anything-eventually overtones (as well as some science-not-the-work-of-Beelzebub-after-all undertones) to set you up for the weekend: restoring sight to blinded human patients with stem cell therapy. Yay, science!

How can a computer win at Jeopardy? Elementary, my dear Watson

This is not only an interesting story, but an engaging piece of journalism, and I heartily recommend you go read it: it’s an NYT magazine piece about Watson, an IBM artificial intelligence project headed by one David Ferrucci, which does something that artificial intelligences have heretofore been unable to do: beat human players at Jeopardy! [found in a tweet by @noahtron, which was retweeted by someone I follow who, regrettably, has slipped both my memory and my notetaking process – apologies for incomplete attribution]

I’ll pick out a few highlights for the short-on-time, but bookmark it for reading later anyway. We’ll start off with the methodology:

The great shift in artificial intelligence began in the last 10 years, when computer scientists began using statistics to analyze huge piles of documents, like books and news stories. They wrote algorithms that could take any subject and automatically learn what types of words are, statistically speaking, most (and least) associated with it. Using this method, you could put hundreds of articles and books and movie reviews discussing Sherlock Holmes into the computer, and it would calculate that the words “deerstalker hat” and “Professor Moriarty” and “opium” are frequently correlated with one another, but not with, say, the Super Bowl. So at that point you could present the computer with a question that didn’t mention Sherlock Holmes by name, but if the machine detected certain associated words, it could conclude that Holmes was the probable subject — and it could also identify hundreds of other concepts and words that weren’t present but that were likely to be related to Holmes, like “Baker Street” and “chemistry.”

In theory, this sort of statistical computation has been possible for decades, but it was impractical. Computers weren’t fast enough, memory wasn’t expansive enough and in any case there was no easy way to put millions of documents into a computer.
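The word-association method the article describes can be sketched in a few lines. This is a hypothetical toy version, not IBM's actual code: it just measures how often each word in a tiny corpus co-occurs with a topic word, which is the statistical kernel of the "deerstalker hat correlates with Holmes, not with the Super Bowl" idea.

```python
from collections import Counter

# Toy corpus: each "document" is a bag of words.
# (In practice this would be millions of books and articles.)
docs = [
    ["holmes", "deerstalker", "moriarty", "opium", "baker", "street"],
    ["holmes", "moriarty", "chemistry", "violin"],
    ["superbowl", "touchdown", "quarterback", "halftime"],
    ["holmes", "deerstalker", "baker", "street", "chemistry"],
]

def association_scores(topic, docs):
    """For each word, the fraction of its appearances that co-occur with `topic`."""
    with_topic = Counter()   # documents where the word appears alongside the topic
    total = Counter()        # documents where the word appears at all
    for doc in docs:
        words = set(doc)
        total.update(words)
        if topic in words:
            with_topic.update(words - {topic})
    return {w: with_topic[w] / total[w] for w in with_topic}

scores = association_scores("holmes", docs)
# "moriarty" and "deerstalker" only ever appear with "holmes" (score 1.0),
# while "touchdown" never does, so it gets no association score at all.
```

Run in reverse, the same table lets you go from clue words back to the probable subject: see "deerstalker" and "moriarty", infer Holmes.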

Those are no longer obstacles, of course, or at least not obstacles on the same scale. So, add multiple parallel algorithms, shake vigorously, and…

Watson’s speed allows it to try thousands of ways of simultaneously tackling a “Jeopardy!” clue. Most question-answering systems rely on a handful of algorithms, but Ferrucci decided this was why those systems do not work very well: no single algorithm can simulate the human ability to parse language and facts. Instead, Watson uses more than a hundred algorithms at the same time to analyze a question in different ways, generating hundreds of possible solutions. Another set of algorithms ranks these answers according to plausibility; for example, if dozens of algorithms working in different directions all arrive at the same answer, it’s more likely to be the right one. In essence, Watson thinks in probabilities. It produces not one single “right” answer, but an enormous number of possibilities, then ranks them by assessing how likely each one is to answer the question.
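The "dozens of algorithms converging on the same answer" idea is essentially ensemble voting. Here's a minimal, purely illustrative sketch (the candidate-generating "algorithms" are stand-ins, not Watson's real components): each one proposes candidate answers, and plausibility is the fraction of algorithms that arrived at a given answer.

```python
from collections import Counter

def rank_candidates(candidate_lists):
    """Pool candidates from several independent algorithms and rank each
    answer by the fraction of algorithms that proposed it."""
    votes = Counter()
    for candidates in candidate_lists:
        votes.update(set(candidates))  # one vote per algorithm, not per mention
    total = len(candidate_lists)
    return [(answer, n / total) for answer, n in votes.most_common()]

# Three hypothetical answer-generating strategies for the same clue:
proposals = [
    ["Sherlock Holmes", "Hercule Poirot"],   # keyword-association search
    ["Sherlock Holmes"],                     # named-entity lookup
    ["Sherlock Holmes", "Dr. Watson"],       # passage retrieval
]
ranked = rank_candidates(proposals)
# "Sherlock Holmes" tops the list with plausibility 1.0: all three agree.
```

The point of the design, as the article notes, is that no single weak algorithm has to be right; agreement across many of them is what the system treats as evidence.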

The result? Watson actually competes pretty well against players in the “winner cloud” of Jeopardy! performance, though it’s by no means cock of the walk. Not yet, anyway.

What made the article itself so enjoyable for me was the human story behind it – Ferrucci comes across as a real Driven Man, striving to come first in a fiercely competitive and high-stakes scientific race:

Ferrucci refused to talk on the record about Watson’s blind spots. He’s aware of them; indeed, his team does “error analysis” after each game, tracing how and why Watson messed up. But he is terrified that if competitors knew what types of questions Watson was bad at, they could prepare by boning up in specific areas. I.B.M. required all its sparring-match contestants to sign nondisclosure agreements prohibiting them from discussing their own observations on what, precisely, Watson was good and bad at. I signed no such agreement, so I was free to describe what I saw; but Ferrucci wasn’t about to make it easier for me by cataloguing Watson’s vulnerabilities.

As with most AI projects, however, Watson only does one thing, though it (he?) does it pretty well. It’s a function with potential commercial uses (which is why IBM is still throwing money at Ferrucci and his team), but a general artificial intelligence needs to be able to do more than win at a certain quiz-show format. The difficulties of producing a natural-language question-answering intelligence on a par with human learning were pretty neatly showcased by Wolfram|Alpha last year (which, despite being disappointing to the public, is a pretty impressive piece of work in its own right):

This, Wolfram says, is the deep challenge of artificial intelligence: a lot of human knowledge isn’t represented in words alone, and a computer won’t learn that stuff just by encoding English language texts, as Watson does. The only way to program a computer to do this type of mathematical reasoning might be to do precisely what Ferrucci doesn’t want to do — sit down and slowly teach it about the world, one fact at a time. […] Watson can answer only questions asking for an objectively knowable fact. It cannot produce an answer that requires judgment. It cannot offer a new, unique answer to questions like “What’s the best high-tech company to invest in?” or “When will there be peace in the Middle East?” All it will do is look for source material in its database that appears to have addressed those issues and then collate and compose a string of text that seems to be a statistically likely answer. Neither Watson nor Wolfram Alpha, in other words, comes close to replicating human wisdom.

So don’t go announcing the Singularity just yet, eh? Even so, it’s a pretty big leap that Ferrucci and friends have made, and the practical applications should hopefully pave the way for more research. Weird times ahead… though Ferrucci’s suggestion that Watson could replace call centre drones has a certain appeal.

New MMO character type: the sociologist

I’ve been chattering on about the sociology of the metaverse for what feels like yonks (and long since it stopped being a trending topic), but academic interest in synthetic worlds and virtual realities shows no sign of abating, according to Ars Technica‘s round-up of recent MMO-related papers, journals and real-money research grants [via Nick Harkaway].

Give it a few more years, and there’ll be embedded ARG/MMO anthropologists. That’s the year you’ll see me heading back into the education system… 🙂

An Old Enemy: Fighting Cancer

So how did I go from last month’s topic about geoengineering to cancer treatment? Well, for one, keeping the Earth healthy is a bit like doing the same for humans: harder than you’d think. Both are systems engineering at a level of complexity we don’t entirely understand. This is also a personal topic. Cancer used to be an academic concept for me. Not any more. Science fiction lost a brilliant voice to cancer earlier this year, when Kage Baker died of it. Now I have friends and family with cancer, and it has become a palpable evil rather than something distant that I don’t want, like elephantiasis or malaria. I’ve seen it, and I don’t like it. Continue reading An Old Enemy: Fighting Cancer

NeuroLitCrit

As part of our seemingly ongoing (though erratic) series of posts with “neuro” in the title, here’s The Guardian on a new bridge discipline between the arts and the sciences: neuro lit crit.

Later this year a group of 12 students in New England will be given a series of specially designed texts to read. Then they will be loaded into a hospital MRI machine and their brains scanned to map their neurological responses.

The scans produced will measure blood flow to the firing synapses of their brain cells, allowing a united team of scientists and literature professors to study how and why human beings respond to complex fiction such as the works of Marcel Proust, Henry James or Virginia Woolf.

What, no sf titles? Surely – if you’re going to engage in such an inherently postmodern activity as neuro lit crit in the first place – you might as well go fully meta, and examine the brain activity of people reading fiction that discusses the science of brain activity…

And here’s another researcher, co-opting literary criticism in the name of advancing that insidious atheistic baby-eating Communo-Darwinist agenda I keep hearing so much about:

Vermeule is examining the role of evolution in fiction: some call it “Darwinian literary studies”. It looks at how human genetics and evolutionary theory shape and influence literature, or at how literature itself may be an expression of evolution. For instance, the fact that much of human fiction is about the search for a suitable mate should suggest that evolutionary forces are at play. Others agree that fiction can be seen as promoting social cohesion or even giving lessons in sexual selection. “It is hard to interpret fiction without an evolutionary view,” said Professor Jonathan Gottschall at Washington and Jefferson College, Pennsylvania.

Hah! That won’t get you far with The Greatest Story Ever Told, “Professor”! If we evolved from dinosaurs, why aren’t there any dinosaurs in the Bible, eh? Tell me that.

Ahem.

Much as with the aforementioned neurocinematics, I’m sure someone will hit on the idea of using neuro lit crit for tailoring books that produce the right sort of brain spikes, prompting a race to the bottom in literary value that will make the pulp magazine explosion look like a damp squib*. I guess our last best hope is that the profit margins will be too small to make it worthwhile… while I’ve complained a few times about wanting a little more science in my fiction, this isn’t what I meant at all. 😉

[ * – Note for the inevitable handful of easily-riled genre traditionalists, who will doubtless head straight for the comments box anyway: this sentence is to be read with heavy irony, as is the rest of the post. ]