The Grand Unified Theory of Artificial Intelligence

Artificial intelligence research has long harboured two basic (and opposed) approaches: the earlier method of trying to discover the “rules of thought”, and the more modern probabilistic approach of machine learning. Now some smart guy from MIT called Noah Goodman reckons he has reconciled the two in his new model of thought [via Slashdot]:

As a research tool, Goodman has developed a computer programming language called Church — after the great American logician Alonzo Church — that, like the early AI languages, includes rules of inference. But those rules are probabilistic. Told that the cassowary is a bird, a program written in Church might conclude that cassowaries can probably fly. But if the program was then told that cassowaries can weigh almost 200 pounds, it might revise its initial probability estimate, concluding that, actually, cassowaries probably can’t fly.

“With probabilistic reasoning, you get all that structure for free,” Goodman says. A Church program that has never encountered a flightless bird might, initially, set the probability that any bird can fly at 99.99 percent. But as it learns more about cassowaries — and penguins, and caged and broken-winged robins — it revises its probabilities accordingly. Ultimately, the probabilities represent all the conceptual distinctions that early AI researchers would have had to code by hand. But the system learns those distinctions itself, over time — much the way humans learn new concepts and revise old ones.
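To get a feel for the kind of revision being described, here’s a minimal sketch in plain Python (not actual Church code) of a single Bayes’-rule update for the cassowary example. The prior and the “heavy bird” likelihoods are numbers I’ve made up for illustration; they’re not drawn from Goodman’s work.

```python
# A toy sketch (not Church) of the cassowary update quoted above.
# All numbers are illustrative assumptions, not values from Goodman's work.

def bayes_update(prior, p_evidence_given_true, p_evidence_given_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_given_true * prior
    evidence = numerator + p_evidence_given_false * (1.0 - prior)
    return numerator / evidence

# "Told that the cassowary is a bird": start from the generic bird prior.
p_flies = 0.9999

# "Told that cassowaries can weigh almost 200 pounds": treat heaviness as
# evidence. Assumed likelihoods: a flying bird is almost never that heavy,
# while a flightless bird quite plausibly is.
p_heavy_if_flies = 1e-5
p_heavy_if_flightless = 0.3

p_flies = bayes_update(p_flies, p_heavy_if_flies, p_heavy_if_flightless)
print(f"P(this cassowary flies | heavy bird) = {p_flies:.2f}")  # about 0.25
```

Church itself lets you write down the generative model and condition on evidence directly rather than hand-coding Bayes’ rule, but the arithmetic above gives the flavour of the revision the quoted example describes.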

It’ll be interesting to watch the transhumanist and Singularitarian responses to this one, even if all they do is attempt to debunk Goodman’s approach entirely.