Dresden Codak is one of my favourite webcomics; its creator, Aaron Diaz, is a staunch transhumanist, but rather than soapboxing directly he embeds his philosophical interests into his creative work. This occasionally spills over into brief satirical ripostes against anti-transhumanist naysayers; long-term followers may remember 2007’s “Enough is Enough – A Thinking Ape’s Critique of Trans-Simianism”, which (justifiably) did the rounds of the transhumanist, science-fictional and geek-affiliated blogo-wotsit at the time.
Well, here’s another one, “Artificial Flight and Other Myths – a reasoned examination of A.F. by top birds”, which again takes the rhetorical gambit of reframing the AI argument outside of the human context:
We can start with a loose definition of flight. While no two bird scientists or philosophers can agree on the specifics, there is still a common, intuitive understanding of what true flight is: powered, feathered locomotion through the air through the use of flapping wings. While other flight-like phenomena exist in nature (via bats and insects), no bird with even a reasonable education would consider these creatures true fliers, as they lack one or more key elements. And, while some birds are unfortunately born handicapped (penguins, ostriches, etc.), they still possess the (albeit undeveloped) gene for flight, and it is indeed flight that defines the modern bird.
This is flight in the natural world, the product of millions of years of evolution, and not a phenomenon easily replicated. Current A.F. is limited to unpowered gliding; a technical marvel, but nowhere near the sophistication of a bird. Gliding simplifies our lives, and no bird (including myself) would discourage advancing this field, but it is a far cry from synthesizing the millions of cells within the wing alone to achieve Strong A.F. Strong A.F., as it is defined by researchers, is any artificial flier that is capable of passing the Tern Test (developed by A.F. pioneer Alan Tern), which involves convincing an average bird that the artificial flier is in fact a flying bird.
Diaz highlights the problem with anthropomorphic thinking as applied to definitions of intelligence, which is a common refrain from artificial intelligence advocates. Serendipitously enough, yesterday also saw Michael Anissimov point to a Singularity Institute document titled “Beyond Anthropomorphism”, which may be of interest if you want the argument fleshed out for you:
Anthropomorphic (“human-shaped”) thinking is the curse of futurists. One of the continuing themes running through Creating Friendly AI is the attempt to track down specific features of human thought that are solely the property of humans rather than minds in general, especially if these features have, historically, been mistakenly attributed to AIs.
Anthropomorphic thinking is not just the result of context-insensitive generalization. Anthropomorphism is the result of certain automatic assumptions that humans are evolved to make when dealing with other minds. These built-in instincts will only produce accurate results for human minds; but since humans were the only intelligent beings present in the ancestral environment, our instincts sadly have no built-in delimiters.
Many personal philosophies, having been constructed in the presence of uniquely human instincts and emotions, reinforce the built-in brainware with conscious reasoning. This sometimes leads to difficulty in reasoning about AIs; someone who believes that romantic love is the meaning of life will immediately come up with all sorts of reasons why all AIs will necessarily exhibit romantic love as well.
It strikes me that the yes-or-no question of whether strong general artificial intelligence is possible is one of a very special type, namely a question which can only be definitively answered by achieving the “yes” result. (I’m pretty sure there’s a distinct rhetorical term for that sort of question, but my minimal bootstrapped philosophy education fails to provide it to me at the moment; feel free to help out in the comments.) In other words, the only way we’ll truly know whether we can build a GAI is by building it; until then, it’s all just dialogue.
Basically, I don’t see why we *couldn’t* eventually develop General (or specific) Artificial Intelligence.
Simply because the process exists (millions of ‘natural’ intelligences come into existence every year) and is replicable, it is, by definition, *possible*.
We may not understand it now, but that doesn’t mean the process is unfathomable. Eventually we will figure it out. The only question, to me, is *when*.
Also, do keep in mind that while weak (let alone strong) AI might seem hard to achieve right now, hardware development marches on, and Moore’s law, despite predictions to the contrary, continues to hold.
So by the time AI does happen (the later the better, possibly, from the future AI’s point of view), it will have a fantastic environment in which to develop and evolve.
And then a technological singularity may not merely look probable, but rather prove inevitable…;-)
“Write me a creature that thinks as well as a man, or better than a man, but not like a man.”
OK, substitute “human” for “man,” otherwise there’s a fairly obvious answer…
Imagining that something is possible does not make it real.
The history of artificial intelligence is humbling to anyone who understands it. Those who don’t, such as Diaz, keep bumping their heads against what they don’t know that they don’t know.
Sarcasm is easy; delivering on big promises is hard. Diaz chose the easy route.
Diaz fails to note that humans observed several wildly different natural examples of flight, and experienced many fatal failures, before developing the path leading to supersonic aircraft. Flight is a piece of cake compared to intelligence.
Less boasting, please, and more problem-solving, Diaz.