Fetch your popcorn, kids, this one will run for at least a week or so in certain circles. Tonight’s challenger in the blue corner, it’s the book-writing bruiser from Edinburgh, Charlie Stross, coming out swinging:
I can’t prove that there isn’t going to be a hard take-off singularity in which a human-equivalent AI rapidly bootstraps itself to de-facto god-hood. Nor can I prove that mind uploading won’t work, or that we are or aren’t living in a simulation. Any of these things would require me to prove the impossibility of a highly complex activity which nobody has really attempted so far.
However, I can make some guesses about their likelihood, and the prospects aren’t good.
And now, dukes held high in the red corner, Mike Anissimov steps into the ring:
I do have to say, this is a novel argument that Stross is forwarding. Haven’t heard that one before. As far as I know, Stross must be one of the only non-religious thinkers who believes human-level AI is impossible. In a literature search I conducted in 2008 looking for academic arguments against human-level AI, I didn’t find much — mainly just Dreyfus’s What Computers Can’t Do and the people who argued against Kurzweil in Are We Spiritual Machines? “Human level AI is impossible” is one of those ideas that Romantics and non-materialists find appealing emotionally, but backing it up is another matter.
Seriously, I just eat this stuff up – and not least because I’m fascinated by the ways different people approach this sort of debate. Rhetorical fencing lessons, all for free on the internet!
Me, I’m kind of an AI agnostic. I’ve believed for some time now that the AI question is one of those debates that can only ever be truly put to rest by a conclusive success; failures only act as intellectual fuel for both sides.
(Though there is a delightfully piquant inversion of stereotypes when one sees a science fiction author being castigated for highlighting what he sees as the implausibility of a classic science fiction trope… and besides, I’d rather have people worrying about how to handle the emergence of a hard-takeoff Singularity than writing contingency plans for a zombie outbreak that will never happen.)