Tag Archives: artificial-intelligence

It’s a shame about Ray: Kurzweil not the only star in the Singularitarian firmament

George Dvorsky continues to take advantage of the recent famous-on-the-internet profile of the Kurzweil/Myers beef to bring lesser-discussed aspects of Singularitarianism to the fore… and as someone with an active interest in the movement (not to mention as a science fiction reader), I think that’s a worthwhile thing to do. Like I’ve said before, as way-out as it may still seem to a lot of people, the Singularity is an important concept in our wired world, even if viewed only with the utmost cynicism as a form of eschatological philosophy or techno-cult (which I think is to sell it more than a little short).

So here’s Dvorsky’s non-comprehensive list of notable Singularitarian thinkers, which includes one well-known sf writer, Vernor Vinge, and one person (that I know of, at least) who has been tuckerized as a posthuman ‘species’ in science fiction literature: Hans Moravec, who gave his name to the moravecs of Dan Simmons’ Ilium, an excellent (if challenging and very hefty) novel.

Dvorsky invites suggestions of other thinkers worthy of attention in the fields of Singularity thinking and artificial intelligence, and I’ll extend the same invitation – feel free to include critics and naysayers, provided they tackle the issues with rigour.

And while we’re on the subject, you may or may not already know that PZ Myers has been called in for some serious heart surgery. Just in case it wasn’t already plain: despite not necessarily agreeing with him on matters recently discussed (and sniping at the tone taken), I bear the man no malice, and wish him a speedy recovery. Best of luck, Professor Myers.

Singularity slapfight: yet more Kurzweil vs. Myers

In the interests of following up on my earlier post about PZ Myers’ take-down of Ray Kurzweil’s claims about reverse engineering the human brain, and of displaying a lack of bias (I really don’t have a horse in this race, but I still enjoy watching them run, if that makes any sense), here’s some aftermath linkage.

Kurzweil himself responds [via SentientDevelopments]:

Myers, who apparently based his second-hand comments on erroneous press reports (he wasn’t at my talk), goes on to claim that my thesis is that we will reverse-engineer the brain from the genome. This is not at all what I said in my presentation to the Singularity Summit. I explicitly said that our quest to understand the principles of operation of the brain is based on many types of studies — from detailed molecular studies of individual neurons, to scans of neural connection patterns, to studies of the function of neural clusters, and many other approaches. I did not present studying the genome as even part of the strategy for reverse-engineering the brain.

Al Fin declares that neither Kurzweil nor Myers understands the brain [via AcceleratingFuture]:

But is that clear fact of mutual brain ignorance relevant to the underlying issue — Kurzweil’s claim that science will be able to “reverse-engineer” the human brain within 20 years? In other words, Ray Kurzweil expects humans to build a brain-functional machine in the next 2 decades based largely upon concepts learned from studying how brains/minds think.

Clearly Kurzweil is not claiming that he will be able to understand human brains down to the most intricate detail, nor is he claiming that his new machine brain will emulate the brain down to its cell signaling proteins, receptors, gene expression, and organelles. Myers seems to become a bit bogged down in the details of his own objections to his misconceptions of what Kurzweil is claiming, and loses the thread of his argument — which can be summed up by Myers’ claim that Kurzweil is a “kook.”

But Kurzweil’s amazing body of thought and invention testifies to the fact that Kurzweil is probably no more a kook than any other genius inventor/visionary. Calling someone a “kook” is apparently considered clever in the intellectual circles which Mr. Myers’ and the commenters on his blog travel, but in the thinking world such accusations provide too little information to be of much use.

Zing! Now, back to Myers:

In short, here’s Kurzweil’s claim: the brain is simpler than we think, and thanks to the accelerating rate of technological change, we will understand its basic principles of operation completely within a few decades. My counterargument, which he hasn’t addressed at all, is that 1) his argument for that simplicity is deeply flawed and irrelevant, 2) he has made no quantifiable argument about how much we know about the brain right now, and I argue that we’ve only scratched the surface in the last several decades of research, 3) “exponential” is not a magic word that solves all problems (if I put a penny in the bank today, it does not mean I will have a million dollars in my retirement fund in 20 years), and 4) Kurzweil has provided no explanation for how we’ll be ‘reverse engineering’ the human brain. He’s now at least clearly stating that decoding the genome does not generate the necessary information — it’s just an argument that the brain isn’t as complex as we thought, which I’ve already said is bogus — but left dangling is the question of methodology. I suggest that we need to have a combined strategy of digging into the brain from the perspectives of physiology, molecular biology, genetics, and development, and in all of those fields I see a long hard slog ahead. I also don’t see that noisemakers like Kurzweil, who know nothing of those fields, will be making any contribution at all.
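Myers’ penny analogy is easy to check with a bit of compound-interest arithmetic; this is a toy calculation of my own, not anything from either party:

```python
def balance(principal: float, rate: float, years: int) -> float:
    """Balance after compounding annually at `rate` for `years`."""
    return principal * (1 + rate) ** years

# A penny at a generous 5% annual return, compounded for 20 years,
# is still only a few cents -- nowhere near a million dollars.
print(f"${balance(0.01, 0.05, 20):.4f}")
```

Exponential growth is real, in other words, but the rate and the starting point matter enormously; the label alone guarantees nothing.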

And, a little later still, after linking to some (fairly insubstantial) snark:

There are other, perhaps somewhat more serious, rebuttals at Rennie’s Last Nerve and A Fistful of Science.

Now run along, little obsessive Kurzweilians, there are many other blogs out there that regard your hero with derision, demanding your earnestly clueless rebuttals.

Smacks a little of “this is beneath me”, doesn’t it… or possibly even “can’t win, won’t fight”. Maybe I’m being unfair to Myers, but he’s certainly never backed off this easily when it comes to atheism and Darwin, and just a few days ago he was full of piss and vinegar. (Which isn’t to say I think he’s definitely wrong, of course; just that I expected a rather more determined attack… not to mention less ad hominem and othering from someone who – quite rightfully – deplores such tactics when used by his usual opponents.)

Finally, George Dvorsky has a sort of condensed and sensationalism-free roadmap for AI from reverse engineering of the brain:

While I believe that reverse engineering the human brain is the right approach, I admit that it’s not going to be easy. Nor is it going to be quick. This will be a multi-disciplinary endeavor that will require decades of data collection and the use of technologies that don’t exist yet. And importantly, success won’t come about all at once. This will be an incremental process in which individual developments will provide the foundation for overcoming the next conceptual hurdle.

[…]

Inevitably the question as to ‘when’ crops up. Personally, I could care less. I’m more interested in viability than timelines. But, if pressed for an answer, my feeling is that we are still quite a ways off. Kurzweil’s prediction of 2030 is uncomfortably short in my opinion; his analogies to the human genome project are unsatisfying. This is a project of much greater magnitude, not to mention that we’re still likely heading down some blind alleys.

My own feeling is that we’ll likely be able to emulate the human brain in about 50 to 75 years. I will admit that I’m pulling this figure out of my butt as I really have no idea. It’s more a feeling than a scientifically-backed estimate.

That’s pretty much why Dvorsky is one of my main go-to sources for transhumanist commentary; he’s one of the few self-identified members of the movement (of those that I’ve discovered, at least) who’s honest enough to admit when he doesn’t know something for certain.

I suspect that with Myers’ withdrawal from the field, that’s probably the end of this round. But as I said before, the greater intellectual battle is yet to be fought out, and this is probably just one early ideological skirmish.

Be sure to stock up on popcorn. 😉

How can a computer win at Jeopardy? Elementary, my dear Watson

This is not only an interesting story, but an engaging piece of journalism, and I heartily recommend you go read it: it’s an NYT magazine piece about Watson, an IBM artificial intelligence project headed by one David Ferrucci that does something that artificial intelligences have heretofore been unable to do: beat human players at Jeopardy! [found in a tweet by @noahtron, which was retweeted by someone I follow who, regrettably, has slipped both my memory and my notetaking process – apologies for incomplete attribution]

I’ll pick out a few highlights for the short-on-time, but bookmark it for reading later anyway. We’ll start off with the methodology:

The great shift in artificial intelligence began in the last 10 years, when computer scientists began using statistics to analyze huge piles of documents, like books and news stories. They wrote algorithms that could take any subject and automatically learn what types of words are, statistically speaking, most (and least) associated with it. Using this method, you could put hundreds of articles and books and movie reviews discussing Sherlock Holmes into the computer, and it would calculate that the words “deerstalker hat” and “Professor Moriarty” and “opium” are frequently correlated with one another, but not with, say, the Super Bowl. So at that point you could present the computer with a question that didn’t mention Sherlock Holmes by name, but if the machine detected certain associated words, it could conclude that Holmes was the probable subject — and it could also identify hundreds of other concepts and words that weren’t present but that were likely to be related to Holmes, like “Baker Street” and “chemistry.”

In theory, this sort of statistical computation has been possible for decades, but it was impractical. Computers weren’t fast enough, memory wasn’t expansive enough and in any case there was no easy way to put millions of documents into a computer.
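The statistical-association idea the article describes can be sketched in a few lines; this is a toy co-occurrence counter over a made-up corpus, nothing like IBM’s actual pipeline:

```python
from collections import Counter
from itertools import combinations

# Toy corpus: each "document" is a set of terms. Real systems use
# millions of documents plus proper tokenisation and weighting.
docs = [
    {"sherlock", "holmes", "deerstalker", "moriarty", "baker street"},
    {"sherlock", "holmes", "opium", "moriarty", "chemistry"},
    {"super bowl", "touchdown", "quarterback"},
]

# Count how often each pair of terms appears in the same document.
cooccur = Counter()
for doc in docs:
    for a, b in combinations(sorted(doc), 2):
        cooccur[(a, b)] += 1

# Terms statistically associated with "moriarty":
assoc = {t: n for (a, b), n in cooccur.items()
         for t in (a, b) if "moriarty" in (a, b) and t != "moriarty"}
print(assoc)  # "holmes" and "sherlock" score highest; no football terms
```

Scale that up by several orders of magnitude and you get the kind of topic inference the quote describes: the clue never has to say “Sherlock Holmes” for the associations to point there.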

Those are no longer obstacles, of course, or at least not obstacles on the same scale. So, add multiple parallel algorithms, shake vigorously, and…

Watson’s speed allows it to try thousands of ways of simultaneously tackling a “Jeopardy!” clue. Most question-answering systems rely on a handful of algorithms, but Ferrucci decided this was why those systems do not work very well: no single algorithm can simulate the human ability to parse language and facts. Instead, Watson uses more than a hundred algorithms at the same time to analyze a question in different ways, generating hundreds of possible solutions. Another set of algorithms ranks these answers according to plausibility; for example, if dozens of algorithms working in different directions all arrive at the same answer, it’s more likely to be the right one. In essence, Watson thinks in probabilities. It produces not one single “right” answer, but an enormous number of possibilities, then ranks them by assessing how likely each one is to answer the question.
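The “many algorithms, ranked by agreement” approach boils down to something like the following sketch; the scorers here are hypothetical stand-ins, not anything from IBM’s system:

```python
from collections import Counter

def rank_candidates(candidate_lists):
    """Pool answers proposed by independent algorithms and rank them
    by what fraction of the algorithms agreed on each one."""
    votes = Counter()
    for candidates in candidate_lists:
        for answer in set(candidates):  # one vote per algorithm
            votes[answer] += 1
    total = len(candidate_lists)
    return [(answer, n / total) for answer, n in votes.most_common()]

# Three hypothetical algorithms answering the same clue:
proposals = [
    ["Sherlock Holmes", "Hercule Poirot"],
    ["Sherlock Holmes"],
    ["Sherlock Holmes", "Dr. Watson"],
]
print(rank_candidates(proposals))  # "Sherlock Holmes" tops the list, score 1.0
```

This is the “thinks in probabilities” point in miniature: no single answer is declared right, but the one most independent lines of attack converge on floats to the top.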

The result? Watson actually competes pretty well against players in the “winner cloud” of Jeopardy! performance, though it’s by no means cock of the walk. Not yet, anyway.

What made the article itself so enjoyable for me was the human story behind it – Ferrucci comes across as a real Driven Man, striving to come first in a fiercely competitive and high-stakes scientific race:

Ferrucci refused to talk on the record about Watson’s blind spots. He’s aware of them; indeed, his team does “error analysis” after each game, tracing how and why Watson messed up. But he is terrified that if competitors knew what types of questions Watson was bad at, they could prepare by boning up in specific areas. I.B.M. required all its sparring-match contestants to sign nondisclosure agreements prohibiting them from discussing their own observations on what, precisely, Watson was good and bad at. I signed no such agreement, so I was free to describe what I saw; but Ferrucci wasn’t about to make it easier for me by cataloguing Watson’s vulnerabilities.

As with most AI projects, however, Watson only does one thing, though it (he?) does it pretty well. It’s a function with potential commercial uses (which is why IBM is still throwing money at Ferrucci and team), but a general artificial intelligence needs to be able to do more than win at a certain quiz-show format. The difficulties of producing a natural-language question-answering intelligence on a par with human learning were pretty neatly showcased by Wolfram|Alpha last year (which, despite being disappointing to the public, is a pretty impressive piece of work in its own right):

This, Wolfram says, is the deep challenge of artificial intelligence: a lot of human knowledge isn’t represented in words alone, and a computer won’t learn that stuff just by encoding English language texts, as Watson does. The only way to program a computer to do this type of mathematical reasoning might be to do precisely what Ferrucci doesn’t want to do — sit down and slowly teach it about the world, one fact at a time. […] Watson can answer only questions asking for an objectively knowable fact. It cannot produce an answer that requires judgment. It cannot offer a new, unique answer to questions like “What’s the best high-tech company to invest in?” or “When will there be peace in the Middle East?” All it will do is look for source material in its database that appears to have addressed those issues and then collate and compose a string of text that seems to be a statistically likely answer. Neither Watson nor Wolfram Alpha, in other words, comes close to replicating human wisdom.

So don’t go announcing the Singularity just yet, eh? Even so, it’s a pretty big leap that Ferrucci and friends have made, and the practical applications should hopefully pave the way for more research. Weird times ahead… though Ferrucci’s suggestion that Watson could replace call centre drones has a certain appeal.

Contextual Turing Tests

The thing with the Turing Test is that it’s such a loose experiment; it becomes (arguably) much easier to pass if the intelligence being tested is tuned for the context in which the test is being performed, and to the human respondent.

Here’s a QED via web-culture cynic and curmudgeon Nick Carr, who points out that Ben Weber has taken a leaf from ELIZA’s playbook and created a chatbot for online StarCraft games:

Once every thirty seconds, EISBot randomly selects a message from a pool of 75 messages and sends it to the console. The results can be quite convincing:

  • EISBot: Whats up with Savior recently?
  • EISBot: Tasteless is my homeboy
  • EISBot: It says I need more pylons, what a pylon?
  • EISBot: What to ally victory?
  • Player: damn you
  • Player: im not stupid
  • EISBot: Wanna join my clan
  • Player: dunno
  • Player: im not that great
  • Player: lol
  • EISBot: So, this game is pretty much over
  • Player: yeah
  • Player: lol
  • Player: im playing off race
  • Player: lol
  • EISBot: Quit wasting time, come kill me
  • EISBot: What kind of hardware are you using? I’m on a 486 with 16M ram, pure luxory!!!
  • Player: i dunno
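The mechanism as described – fire one randomly chosen canned line every thirty seconds – takes only a few lines to sketch; this is my own toy version, not EISBot’s actual code:

```python
import random
import time

# A small sample pool; the article says EISBot draws from 75 messages.
MESSAGES = [
    "Whats up with Savior recently?",
    "Tasteless is my homeboy",
    "Wanna join my clan",
    "So, this game is pretty much over",
]

def chat_loop(send, interval=30, rounds=3):
    """Every `interval` seconds, send one randomly chosen message."""
    for _ in range(rounds):
        send(random.choice(MESSAGES))
        time.sleep(interval)

# Example: print to stdout instead of a game console.
chat_loop(print, interval=0, rounds=2)
```

Which is rather the point: no parsing, no model of the conversation at all, and yet in the right context it passes for a trash-talking human.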

Says Carr, from beneath a grubby flat cap, with a wet-eyed greyhound curled up at his feet:

Note that the bot’s one major flaw is that its command of the English language, particularly the use of punctuation marks, is much too sophisticated in comparison with that of the human. The sure way to distinguish the computer’s messages from the human’s is to recognize that the computer has a rather sentimental attachment to the apostrophe and the comma.

I take this as another indication that I am correct in my suspicion that when computers finally pass the Turing test it won’t be because computers have become smarter; it will be because humans have become dumber.

Oh, how right you are, Mister Carr. Why, until maybe forty years ago when those pesky computers came on the scene, young people were almost universally literate, and spoke in long erudite sentences when talking with their peers on matters of mutual interest! How the mighty have fallen…

… although, with that said, three cats and a catnip-dusted keyboard would probably be enough to pass the Turing Test if it were conducted in a YouTube comment thread. YMMV.

The Grand Unified Theory of Artificial Intelligence

Artificial intelligence research has long harboured two basic (and opposed) approaches – the earlier method of trying to discover the “rules of thought”, and the more modern probabilistic approach to machine learning. Now some smart guy from MIT called Noah Goodman reckons he has reconciled the two approaches to artificial learning in his new model of thought [via SlashDot]:

As a research tool, Goodman has developed a computer programming language called Church — after the great American logician Alonzo Church — that, like the early AI languages, includes rules of inference. But those rules are probabilistic. Told that the cassowary is a bird, a program written in Church might conclude that cassowaries can probably fly. But if the program was then told that cassowaries can weigh almost 200 pounds, it might revise its initial probability estimate, concluding that, actually, cassowaries probably can’t fly.

“With probabilistic reasoning, you get all that structure for free,” Goodman says. A Church program that has never encountered a flightless bird might, initially, set the probability that any bird can fly at 99.99 percent. But as it learns more about cassowaries — and penguins, and caged and broken-winged robins — it revises its probabilities accordingly. Ultimately, the probabilities represent all the conceptual distinctions that early AI researchers would have had to code by hand. But the system learns those distinctions itself, over time — much the way humans learn new concepts and revise old ones.
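Church itself is a Lisp-family probabilistic language, but the flavour of the cassowary example – a near-certain prior that gets revised as flightless birds pile up – can be mimicked with a crude frequency update in Python. This is a loose analogy of my own, not real Church code or its actual inference machinery:

```python
def update(prior_fliers, prior_total, fliers_seen, flightless_seen):
    """Revised P(a bird can fly), treating the prior as pseudo-counts
    of birds already 'observed' and folding in the new evidence."""
    return (prior_fliers + fliers_seen) / (prior_total + fliers_seen + flightless_seen)

# Start near-certain that birds fly, like the 99.99% in the quote:
p = update(9999, 10000, 0, 0)
print(round(p, 4))  # 0.9999

# Then observe a run of heavy, flightless cassowaries:
p = update(9999, 10000, 0, 5000)
print(round(p, 3))  # 0.667 -- the belief has been revised downward
```

The interesting part of Goodman’s claim is that the conceptual structure – “birds fly, except this kind” – falls out of the probabilities rather than having to be hand-coded as a rule.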

It’ll be interesting to watch the transhumanist and Singularitarian responses to this one, even if all they do is debunk Goodman’s approach entirely.