Tag Archives: probabilistic

Probabilistic processing: the analogue computer waits in the wings

Digital processing has the advantage of versatility – the utter ubiquity of computer technology is a testament to that. But digital logic has to use lots of bits to represent large ranges of values; perhaps some applications – spam filtering, for instance, or pattern analysis – would run better and faster on a system that allowed for analogue values “in the raw”, so to speak?

Lyric’s innovation is to use analogue signals instead of digital ones, to allow probabilities to be encoded directly as voltages. Their probability gates represent zero probability as 0 V, and certainty as VDD. But unlike digital logic, for which these are the only options, Lyric’s technology allows probabilities between 0 and 1 to use voltages between 0 and VDD. Each probabilistic bit (“pbit”) stores not an exact value, but rather, the probability that the value is 1. The technology allows a resolution of about 8 bits; that is, they can discriminate between about 2⁸ = 256 different values (different probabilities) between 0 and VDD.

By creating circuits that can operate directly on probabilities, much of the extra complexity of digital circuits can be eliminated. Probabilistic processors can perform useful computations with just a handful of pbits, with a drastic reduction in the number of transistors and circuit complexity as a result.
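To get a feel for the idea, here is a minimal software sketch of how a pbit might behave. It is purely illustrative, not Lyric's actual design: the voltage encoding, the 256-level quantisation, and the AND gate (which multiplies the probabilities of independent events) are all assumptions drawn from the description above.

```python
VDD = 1.0     # supply voltage: represents certainty (probability 1)
LEVELS = 256  # ~8 bits of resolution, per the description above

def to_voltage(p):
    """Encode a probability (0..1) as a 'voltage' between 0 and VDD."""
    return p * VDD

def quantize(v):
    """Model the ~8-bit resolution: snap a voltage to one of 256 levels."""
    return round(v / VDD * (LEVELS - 1)) / (LEVELS - 1) * VDD

def and_gate(v_a, v_b):
    """A hypothetical probability gate: for independent events,
    P(A and B) = P(A) * P(B), so the gate multiplies the voltages."""
    return quantize(v_a * v_b / VDD)

# e.g. a toy spam filter combining two independent pieces of evidence
v1 = to_voltage(0.9)   # P(spam | suspicious word)
v2 = to_voltage(0.8)   # P(spam | suspicious sender)
combined = and_gate(v1, v2)  # close to 0.9 * 0.8 = 0.72
```

The point of the sketch is that one "gate" does the work of an entire 8-bit multiplier in digital logic, which is where the claimed reduction in transistor count comes from.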

This could so easily be an excerpt from a Rudy Rucker story… or a Neal Stephenson novel, for that matter.

The Grand Unified Theory of Artificial Intelligence

Artificial intelligence research has long harboured two basic (and opposed) approaches – the earlier method of trying to discover the “rules of thought”, and the more modern probabilistic approach to machine learning. Now some smart guy from MIT called Noah Goodman reckons he has reconciled the two approaches to artificial learning in his new model of thought [via SlashDot]:

As a research tool, Goodman has developed a computer programming language called Church — after the great American logician Alonzo Church — that, like the early AI languages, includes rules of inference. But those rules are probabilistic. Told that the cassowary is a bird, a program written in Church might conclude that cassowaries can probably fly. But if the program was then told that cassowaries can weigh almost 200 pounds, it might revise its initial probability estimate, concluding that, actually, cassowaries probably can’t fly.

“With probabilistic reasoning, you get all that structure for free,” Goodman says. A Church program that has never encountered a flightless bird might, initially, set the probability that any bird can fly at 99.99 percent. But as it learns more about cassowaries — and penguins, and caged and broken-winged robins — it revises its probabilities accordingly. Ultimately, the probabilities represent all the conceptual distinctions that early AI researchers would have had to code by hand. But the system learns those distinctions itself, over time — much the way humans learn new concepts and revise old ones.
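The cassowary revision described above is, at bottom, Bayes' rule applied repeatedly. Here's a toy sketch of that process — not Church itself, and the likelihood numbers are invented for illustration — showing how a near-certain prior that birds fly gets eroded by evidence:

```python
def update(prior, likelihood_if_true, likelihood_if_false):
    """Bayes' rule: revise P(hypothesis) given one piece of evidence.
    likelihood_if_true  = P(evidence | bird can fly)
    likelihood_if_false = P(evidence | bird cannot fly)"""
    numerator = prior * likelihood_if_true
    denominator = numerator + (1 - prior) * likelihood_if_false
    return numerator / denominator

# Prior: having only ever seen flying birds, the program is nearly certain
p_fly = 0.9999

# Evidence 1: the cassowary weighs almost 200 pounds.
# (Made-up likelihoods: heavy weight is very rare among fliers.)
p_fly = update(p_fly, likelihood_if_true=0.001, likelihood_if_false=0.5)

# Evidence 2: its wings are vestigial.
p_fly = update(p_fly, likelihood_if_true=0.01, likelihood_if_false=0.6)

# After both updates, P(cassowary can fly) has fallen well below 0.5:
# the program now concludes that cassowaries probably can't fly.
```

Each observation multiplies in a likelihood ratio, so even a 99.99 percent prior gives way after a couple of strongly contrary facts — which is the "revising probabilities" behaviour the quote describes.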

It’ll be interesting to watch the transhumanist and Singularitarian responses to this one, even if all they do is debunk Goodman’s approach entirely.