Tag Archives: artificial-intelligence

Man to computer: RTFM

Via the dashing and debonair Ryan Oakley, researchers at MIT have managed to get a computer to do what most computer users never do, namely Read The Frackin’ Manual. And guess what – the computer’s performance at the task at hand improved hugely! The task in question was… playing Civilisation.

But the task isn’t the point, you see; this is about teaching machines to comprehend input in a linguistic fashion:

The MIT Computer Science and Artificial Intelligence lab has a computer that now plays Civilization all by itself — and it wins nearly 80% of the time. Those are better stats than most of us could brag about, but the real win here is the fact that instruction manuals don’t explain how to win a game, just how to play it.

The results may be game-oriented, but the real purpose for the experiment was to get a computer to do more than process words as data — and to actually process them as language. In this case, the computer read instructions on how to play a rather complex game, then proceeded to not only play that game, but to play it very well.

If you take the same process and replace gaming with something more real-world applicable, like medicine or automotive tech, you could have a computer that’s able to act as more than just a reference tool. A lot more.
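For the curious, here’s a minimal toy sketch of the general idea as I understand it: a learner that ties words from a “manual” to game situations, and lets game outcomes teach it which advice applies where. Everything below – the mini-game, the two-sentence manual, the feature scheme – is invented for illustration; the actual MIT work (as reported) used Monte-Carlo search over full games of Civilization II, which is a much heavier machine.

```python
"""Toy sketch of language-grounded action selection, loosely in the
spirit of the MIT Civilization work. The mini-game, 'manual', and
feature scheme are all invented for illustration."""
import random
from collections import defaultdict

# A hypothetical two-sentence "manual" for a trivial strategy game.
MANUAL = {
    "build": "build a city on grassland to grow your population",
    "attack": "attack only when your army is larger than the enemy",
}

ACTIONS = ["build", "attack", "wait"]

def features(state, action):
    """Cross words from the manual sentence (if any) with coarse state
    predicates. The key trick: text and game state share one feature
    space, so the learner can discover which sentences are advice
    about which situations."""
    feats = [f"bias:{action}"]
    words = MANUAL.get(action, "").split()
    for predicate in ("on_grassland", "army_larger"):
        if state[predicate]:
            feats.extend(f"{w}&{predicate}" for w in words)
    return feats

weights = defaultdict(float)

def score(state, action):
    return sum(weights[f] for f in features(state, action))

def reward(state, action):
    # Invented payoffs standing in for rolled-out game outcomes.
    if action == "build" and state["on_grassland"]:
        return 1.0
    if action == "attack" and state["army_larger"]:
        return 1.0
    return -0.1

LEARNING_RATE = 0.1
for episode in range(2000):
    state = {"on_grassland": random.random() < 0.5,
             "army_larger": random.random() < 0.5}
    # Epsilon-greedy choice, then a simple update toward the payoff.
    if random.random() < 0.1:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: score(state, a))
    err = reward(state, action) - score(state, action)
    for f in features(state, action):
        weights[f] += LEARNING_RATE * err

test = {"on_grassland": True, "army_larger": False}
print(max(ACTIONS, key=lambda a: score(test, a)))  # typically "build"
```

The point of the toy: the manual never says *when* each action pays off, but because its words are crossed with state predicates, the learner ends up following the sentence that fits the situation – a pale shadow of “processing words as language”, but the same shape of idea.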

If I’m grokking it right, this is the opposite of the approach embodied by IBM’s Watson, which is essentially a search engine on steroids; I’m reminded again of the Chomsky/Norvig debate, and MIT’s approach here looks to be much more in the Chomsky direction. I suspect some sort of synthesis of the two approaches will bring the best results in the long run.


Singularity linkage

A few more Singularitarian nuggets have drifted into my intertubes dragnet over the weekend. Not having had much chance to read and absorb them, I’ll just throw ’em up for those of you who’ve not been distracted by the Shiny Of The Day (whatever that might be – I’m suffering a case of Momentary Zeitgeist Disconnection at the moment).

First up, Charlie Stross is back with a mention of “Fedorov’s Rapture”, a sort of proto-extropian philosophy with its roots in Russian Orthodox Xtianity:

A devout Christian (of the Russian Orthodox variety), “Fedorov found the widespread lack of love among people appalling. He divided these non-loving relations into two kinds. One is alienation among people: ‘non-kindred relations of people among themselves.’ The other is isolation of the living from the dead: ‘nature’s non-kindred relation to men.'” … “A citizen, a comrade, or a team-member can be replaced by another. However a person loved, one’s kin, is irreplaceable. Moreover, memory of one’s dead kin is not the same as the real person. Pride in one’s forefathers is a vice, a form of egotism. On the other hand, love of one’s forefathers means sadness in their death, requiring the literal raising of the dead.”

Fedorov believed in a teleological explanation for evolution, that mankind was on the path to perfectibility: and that human mortality was the biggest sign of our imperfection. He argued that the struggle against death would give all humanity a common enemy — and a victory condition that could be established, in the shape of (a) achieving immortality for all, and (b) resurrecting the dead to share in that immortality. Quite obviously immortality and resurrection for all would lead to an overcrowded world, so Fedorov also advocated colonisation of the oceans and space: indeed, part of the holy mission would inevitably be to bring life (and immortal human life at that) to the entire cosmos.

I doubt that comparisons to religious eschatologies are going to be any better received than accusations of magical thinking, but hey. (As a brief sidebar, I was probably primed for my own interest in Singularitarianism by the redeployment of Teilhard de Chardin‘s Omega Point idea in Julian May’s Galactic Milieu series.)

And here’s another two from the admirably prolific Michael Anissimov. First up, The Illusion of Control in an Intelligence Amplification Singularity, which is a complex enough piece that any simple summing-up would be a futile exercise, so go read the whole thing – there’s some valuable thinking in there. Though the opening paragraph pretty much sums up my concerns about Singularitarianism:

From what I understand, we’re currently at a point in history where the importance of getting the Singularity right pretty much outweighs all other concerns, particularly because a negative Singularity is one of the existential threats which could wipe out all of humanity rather than “just” billions.

I can understand the risks; it’s the likelihood I remain to be convinced of. And given all the other serious global problems we’re facing right now, having the Singularity “outweigh all other concerns” strikes me as narrowly hyperopic at best. How’s about post-corporatist economics? Energy generation, distribution and storage? Sanitation? Resource logistics? Global suffrage and a truly democratic system of governance? Climate change? These all strike me as far more immediate and pressing threats to human survival. A hard-takeoff Singularity as posited here is an existential risk akin to a rogue asteroid strike: certainly not to be ignored, but the response needs to be proportional to the probability of it actually happening… and at the moment I think the asteroids are the more pressing concern, even for us folks lucky enough to have the economic and cognitive surplus to spend our time arguing about stuff on the intertubes.

Secondly, another riposte to Alex Knapp:

To be pithy, I would argue that humans suck at all kinds of thinking, and any systems that help us approach Bayesian optimality are extremely valuable because humans are so often wrong and overconfident in many problem domains. Our overconfidence in our own reasoning even when it explicitly violates the axioms of probability theory routinely reaches comic levels. In human thinking, 1 + 1 really can equal 3. Probabilities don’t add up to 100%. Events with base rates of ~0.00001%, like fatal airplane crashes, are treated as if their probabilities were thousands of times the actual value. Even the stupidest AIs have a tremendous amount to teach us.

The problem with humans is that we are programmed to violate Bayesian optimality routinely with half-assed heuristics that we inherited because they are “good enough” to keep us alive long enough to reproduce and avoid getting murdered by conspecifics. With AI, you can build a brain that is naturally Bayesian — it wouldn’t have to furrow its brow and try real hard to obey simple probability theory axioms.
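Anissimov’s airliner example is easy to make concrete with the textbook base-rate arithmetic. The numbers below are my own illustrative ones (a hypothetical screening test, not anything from his post), but the shape of the surprise is exactly the one he’s gesturing at:

```python
# Base-rate neglect via Bayes' rule. Hypothetical screening test:
# 99% sensitive, 95% specific, for a condition affecting 1 in 10,000.
p_condition = 1 / 10_000
p_pos_given_condition = 0.99   # sensitivity
p_pos_given_healthy = 0.05     # false-positive rate (1 - specificity)

p_positive = (p_pos_given_condition * p_condition
              + p_pos_given_healthy * (1 - p_condition))
p_condition_given_pos = p_pos_given_condition * p_condition / p_positive

print(f"P(condition | positive test) = {p_condition_given_pos:.4f}")
# ~0.0020 -- about 0.2%, where intuition tends to say "around 99%".
```

A system that is Bayesian by construction just does this arithmetic every time; the human tendency is to seize on the 99% and ignore the 1-in-10,000, which is the kind of routine violation Anissimov is pointing at.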

Knapp himself crops up in the comments with a counter-response:

What I question is the scientific basis from which artificial general intelligence can be developed. More specifically, my primary criticism of AGI is that we don’t actually know how the mechanism of intelligence works within the human brain. Since we don’t know the underlying physical principles of generalized intelligence, the likelihood that we’ll be able to design an artificial one is pretty small. [This reminds me of the Norvig/Chomsky debate, with Knapp siding with Chomsky’s anti-black-box attitude. – PGR]

Now, if you want to argue that computers will get smart at things humans are bad at, and therefore be a complement to human intelligence, not only will I not disagree with you, I will politely point out that that’s what I’ve been arguing THE WHOLE TIME.

More to come, I expect. I really need to convince someone to let me write a big ol’ paid piece about this debate, so I can justify taking a week to read up on it all in detail…

Singularity beef, day 2

Well, we’re off to a good start. Alex “Robot Overlords” Knapp also picked up on Stross’ skeptical post and Anissimov’s rebuttal thereof, and posted his own response. An excerpt:

Anissimov’s first point here is just magical thinking. At the present time, a lot of the ways that human beings think is simply unknown. To argue that we can simply “workaround” the issue misses the underlying point that we can’t yet quantify the difference between human intelligence and machine intelligence. Indeed, it’s become pretty clear that even human thinking and animal thinking are quite different. For example, it’s clear that apes, octopii, dolphins and even parrots are, to certain degrees, quite intelligent and capable of using logical reasoning to solve problems. But their intelligence is sharply different than that of humans. And I don’t mean on a different level — I mean actually different. On this point, I’d highly recommend reading Temple Grandin, who’s done some brilliant work on how animals and neurotypical humans are starkly different in their perceptions of the same environment.

Knapp’s argument here is familiar from other iterations of this debate, and basically hinges on what, for want of a better phrase, we might call neurological exceptionalism – the theory that human consciousness is an emergent function of human embodiment, and too complex to be replicated in pure hardware. (I’m maintaining my agnosticism here, by the way; I know far too little about any of these fields of research to start coming to conclusions of my own. I have marks on my arse from sitting on the fence, and I’m just fine with that.)

But my biggest take-away from Knapp’s post, plus Ben Goertzel’s responses to it in the comments, and Mike Anissimov’s response at his own site? That the phrase “magical thinking” is the F-bomb of AI speculation, and gets taken very personally. Anissimov counters Knapp with some discussion of Bayesian models of brain function, which is interesting stuff. This paragraph is a bit odd, though:

Even if we aren’t there yet, Knapp and Stross should be cheering on the incremental effort, not standing on the sidelines and frowning, making toasts to the eternal superiority of Homo sapiens sapiens. Wherever AI is today, can’t we agree that we should make responsible effort towards beneficial AI? Isn’t that important? Even if we think true AI is a million years away because if it were closer then that would mean that human intelligence isn’t as complicated and mystical as we had wished? [Emphasis as found in original.]

This appeal to an emotional or ethical response to the debate seems somewhat out of character, and the line about “toasting the superiority” feels a bit off; I don’t get any sense that Stross or Knapp want AI to be impossible or even difficult. And the rather crowing tone Anissimov rolls out as he cheerleads for Goertzel’s ‘scolding’ of Knapp (delivered from the comfort of Goertzel’s own site) smacks more than a little of “yeah, well, tell that to my big brother, then”. There are also two comments on that latter post from one Alexander Kruel that appear to point out some inconsistencies in Goertzel’s responses… though I’d note that I’m more worried by experts whose opinions never change than by those who adapt their ideas to the latest findings. This is an instance where the language used in defence of an argument is at least as interesting as the argument itself… or at least it is to me. YMMV, and all that.

The last word in today’s round-up goes to molecular biologist and regular Futurismic commenter Athena Andreadis, who has repubbed an essay she placed with H+ Magazine in late 2009. It’s an argument from biological principles against the possibility of reproducing consciousness on non-biological substrates:

To place a brain into another biological body, à la Mary Shelley’s Frankenstein, could arise as the endpoint extension of appropriating blood, sperm, ova, wombs or other organs in a heavily stratified society. Besides being de facto murder of the original occupant, it would also require that the incoming brain be completely intact, as well as able to rewire for all physical and mental functions. After electrochemical activity ceases in the brain, neuronal integrity deteriorates in a matter of seconds. The slightest delay in preserving the tissue seriously skews in vitro research results, which tells you how well this method would work in maintaining details of the original’s personality.

To recreate a brain/mind in silico, whether a cyborg body or a computer frame, is equally problematic. Large portions of the brain process and interpret signals from the body and the environment. Without a body, these functions will flail around and can result in the brain, well, losing its mind. Without corrective “pingbacks” from the environment that are filtered by the body, the brain can easily misjudge to the point of hallucination, as seen in phenomena like phantom limb pain or fibromyalgia.

Additionally, without context we may lose the ability for empathy, as is shown in Bacigalupi’s disturbing story People of Sand and Slag. Empathy is as instrumental to high-order intelligence as it is to survival: without it, we are at best idiot savants, at worst psychotic killers. Of course, someone can argue that the entire universe can be recreated in VR. At that point, we’re in god territory … except that even if some of us manage to live the perfect Second Life, there’s still the danger of someone unplugging the computer or deleting the noomorphs. So there go the Star Trek transporters, there go the Battlestar Galactica Cylon resurrection tanks.

No signs of anyone backing down from their corner yet, with the exception of Alex Knapp apologising for the “magical thinking” diss. Stay tuned for further developments… and do pipe up in the comments if there’s more stuff that I’m missing, or if you’ve your own take on the topic.

Stross starts Singularity slapfight

Fetch your popcorn, kids, this one will run for at least a week or so in certain circles. Tonight’s challenger in the blue corner: it’s the book-writing bruiser from Edinburgh, Charlie Stross, coming out swinging:

I can’t prove that there isn’t going to be a hard take-off singularity in which a human-equivalent AI rapidly bootstraps itself to de-facto god-hood. Nor can I prove that mind uploading won’t work, or that we are or aren’t living in a simulation. Any of these things would require me to prove the impossibility of a highly complex activity which nobody has really attempted so far.

However, I can make some guesses about their likelihood, and the prospects aren’t good.

And now, dukes held high in the red corner, Mike Anissimov steps into the ring:

I do have to say, this is a novel argument that Stross is forwarding. Haven’t heard that one before. As far as I know, Stross must be one of the only non-religious thinkers who believes human-level AI is impossible. In a literature search I conducted in 2008 looking for academic arguments against human-level AI, I didn’t find much — mainly just Dreyfus’ What Computers Can’t Do and the people who argued against Kurzweil in Are We Spiritual Machines? “Human level AI is impossible” is one of those ideas that Romantics and non-materialists find appealing emotionally, but backing it up is another matter.

Seriously, I just eat this stuff up – and not least because I’m fascinated by the ways different people approach this sort of debate. Rhetorical fencing lessons, all for free on the internet!

Me, I’m kind of an AI agnostic. I’ve believed for some time now that the AI question is one of those debates that can only ever be truly put to rest by a conclusive success; failures only act as intellectual fuel for both sides.

(Though there is a delightfully piquant inversion of stereotypes when one sees a science fiction author being castigated for highlighting what he sees as the implausibility of a classic science fiction trope… and besides, I’d rather have people worrying about how to handle the emergence of a hard-takeoff Singularity than writing contingency plans for a zombie outbreak that will never happen.)

The Interrogation: a brief tale of AI and revolution

Hat-tip to George Mokray for emailing me about this one; Global Voices Online is carrying a translation of a short story by the once-imprisoned Chinese dissident netizen known as Stainless Steel Mouse… who, as her nickname might suggest, is well into her science fiction. “The Interrogation” is pretty short, highly allegorical (or so I’m assuming), and probably loses a great deal in translation, but I’m pleased to see sf ideas being used as metaphors for social change. Stainless Steel Mouse’s courage and persistence – and that of others like her – should be an example to those of us in the West who complain about our governments running amok over our freedoms. In the grand scheme of things, we’ve still got it pretty easy, and the best most of us can manage is ranting about it in the comment threads of internet news stories.