Singularity beef, day 2

Paul Raven @ 24-06-2011

Well, we’re off to a good start. Alex “Robot Overlords” Knapp also picked up on Stross’ skeptical post and Anissimov’s rebuttal thereof, and posted his own response. An excerpt:

Anissimov’s first point here is just magical thinking. At the present time, a lot of the ways that human beings think are simply unknown. To argue that we can simply “work around” the issue misses the underlying point that we can’t yet quantify the difference between human intelligence and machine intelligence. Indeed, it’s become pretty clear that even human thinking and animal thinking are quite different. For example, it’s clear that apes, octopuses, dolphins and even parrots are, to certain degrees, quite intelligent and capable of using logical reasoning to solve problems. But their intelligence is sharply different from that of humans. And I don’t mean on a different level — I mean actually different. On this point, I’d highly recommend reading Temple Grandin, who’s done some brilliant work on how animals and neurotypical humans are starkly different in their perceptions of the same environment.

Knapp’s argument here is familiar from other iterations of this debate, and basically hinges on what, for want of a better phrase, we might call neurological exceptionalism – the theory that human consciousness is an emergent function of human embodiment, and too complex to be replicated with pure hardware. (I’m maintaining my agnosticism, here, by the way; I know way too little about any or all of these fields of research to start coming to conclusions of my own. I have marks on my arse from being sat on the fence, and I’m just fine with that.)

But my biggest take-away from Knapp’s post, plus Ben Goertzel’s responses to it in the comments, and Mike Anissimov’s response at his own site? That the phrase “magical thinking” is the F-bomb of AI speculation, and gets taken very personally. Anissimov counters Knapp with some discussion of Bayesian models of brain function, which is interesting stuff. This paragraph is a bit odd, though:

Even if we aren’t there yet, Knapp and Stross should be cheering on the incremental effort, not standing on the sidelines and frowning, making toasts to the eternal superiority of Homo sapiens sapiens. Wherever AI is today, can’t we agree that we should make responsible effort towards beneficial AI? Isn’t that important? Even if we think true AI is a million years away because if it were closer then that would mean that human intelligence isn’t as complicated and mystical as we had wished? [Emphasis as found in original.]

This appeal to an emotional or ethical response to the debate seems somewhat out of character, and the line about “toasting the superiority” feels a bit off; I don’t get any sense that Stross or Knapp want AI to be impossible or even difficult. The rather crowing tone rolled out as Anissimov cheerleads for Goertzel’s ‘scolding’ of Knapp (delivered from the comfort of his own site) smacks more than a little of “yeah, well, tell that to my big brother, then”. There are also two comments on that latter post from one Alexander Kruel that appear to point out some inconsistencies in Goertzel’s responses… though I’d note that I’m more worried by experts whose opinions never change than by those who adapt their ideas to the latest findings. This is an instance where the language used in defence of an argument is at least as interesting as the argument itself… or at least it is to me, anyway. YMMV, and all that.

The last word in today’s round-up goes to molecular biologist and regular Futurismic commenter Athena Andreadis, who has repubbed an essay she placed with H+ Magazine in late 2009. It’s an argument from biological principles against the possibility of reproducing consciousness on non-biological substrates:

To place a brain into another biological body, à la Mary Shelley’s Frankenstein, could arise as the endpoint extension of appropriating blood, sperm, ova, wombs or other organs in a heavily stratified society. Besides being de facto murder of the original occupant, it would also require that the incoming brain be completely intact, as well as able to rewire for all physical and mental functions. After electrochemical activity ceases in the brain, neuronal integrity deteriorates in a matter of seconds. The slightest delay in preserving the tissue seriously skews in vitro research results, which tells you how well this method would work in maintaining details of the original’s personality.

To recreate a brain/mind in silico, whether a cyborg body or a computer frame, is equally problematic. Large portions of the brain process and interpret signals from the body and the environment. Without a body, these functions will flail around and can result in the brain, well, losing its mind. Without corrective “pingbacks” from the environment that are filtered by the body, the brain can easily misjudge to the point of hallucination, as seen in phenomena like phantom limb pain or fibromyalgia.

Additionally, without context we may lose the ability for empathy, as is shown in Bacigalupi’s disturbing story People of Sand and Slag. Empathy is as instrumental to high-order intelligence as it is to survival: without it, we are at best idiot savants, at worst psychotic killers. Of course, someone can argue that the entire universe can be recreated in VR. At that point, we’re in god territory … except that even if some of us manage to live the perfect Second Life, there’s still the danger of someone unplugging the computer or deleting the noomorphs. So there go the Star Trek transporters, there go the Battlestar Galactica Cylon resurrection tanks.

No signs of anyone backing down from their corner yet, with the exception of Alex Knapp apologising for the “magical thinking” diss. Stay tuned for further developments… and do pipe up in the comments if there’s more stuff that I’m missing, or if you’ve your own take on the topic.


14 Responses to “Singularity beef, day 2”

  1. Athena Andreadis says:

    Thank you for the mention, Paul. However, I don’t say consciousness is impossible in a non-biological substrate. It may be possible, but it won’t be the same as one based on a biological substrate. Also, my arguments are specifically about the impossibility of individual continuity via uploading.

    On the bigger picture, I just *love* the false equation of “If you don’t like my pet analogy, you’re mystical/religious/obstructing ‘real’ science!” I guess many of these guys are still at the emotional age when Big Brother arguments (including being herded around by godly AIs) are appealing.

  2. Alex Knapp says:

    Paul,

    Thanks for the mention, too. This is interesting to me because I first got interested in the Singularity when I wrote a law school paper on its legal implications. At the time, I was totally sold on Kurzweil. Then I started reading more, and the more I learned about neuroscience and programming, the more ridiculous it seemed.

    I think it’d be cool if we had AGI, but I would also be rather surprised to get it.

    To me, the Singularity is modern-day alchemy.

  3. Sjef says:

    I for one can’t wait for some Japanese scientist to take this Singularity Beef and give Sentient Burgers!

    Sorry, that’s all I’ve got. Carry on..

  4. Sjef says:

    Uh, give *us* Sentient Burgers.

    *grabs coat, makes hasty exit.

  5. Paul Raven says:

    Apologies for the misrepresentation, Athena; it reveals more about the shallowness of my understanding of the deeper issues than anything else.

    Alex: alchemy is an interesting metaphor, because it implies a nascent protoscience waiting to emerge from early conceptual fumblings. That said, I doubt it’s going to rate much higher on the scale of appeal than the Magical Thinking label!

    For my money, I don’t preclude the possibility of a whole variety of singularities, of which hard-takeoff emergent AI is only one. However, I think Singularitarianism – namely the belief that such a transcendent, gamechanging moment is not only inevitable but desirable – is increasingly carrying the hallmarks of a cult, thanks in no small part to its having acquired a lot of adherents far less rigorous than Goertzel, Anissimov et al

    … which, I suspect, is why Charlie likes to poke it with a stick every now and again. :)

  6. Rick York says:

    This is long, but I’ve been musing on this since reading Charlie’s post and the extraordinary comments. Anyone interested in the Singularity and its attendant phenomena should read them.

    There’s a fundamental problem with those predicting machine intelligence. To wit, we cannot even really define human intelligence, or consciousness or sentience. Our understanding of consciousness (or intelligence, if you prefer) is like Justice Stewart’s explanation of pornography: “I know it when I see it”.

    There are brilliant people working very hard to identify the neurological basis of consciousness. But, people like Damasio, Ramachandran and Churchland and most others would be the first to say they’re nowhere close to a rigorous definition or explanation of consciousness.

    Most observers seem to acknowledge that the mind (to use the broadest word available) is probably an emergent phenomenon. And more and more scientists recognize that minds are deeply interwoven with bodies. If that premise is accurate, machine intelligences would be as alien to us as a sentient being from outer space. And if machine intelligence does develop, I’m not sure we’d even recognize it. The Turing Test probably wouldn’t apply.

    Other brilliant minds, like Penrose, believe consciousness may be tied to quantum phenomena which science still doesn’t fully understand. (Remember Feynman’s famous diktat: “If you think you understand quantum physics, you don’t understand [it].”)

    Because the whole Singularity thing depends so deeply on AGI, it seems to me that we are far away, with much development still needed before such intelligence can be delivered, even at CPU speeds. The human brain has 100 billion neurons and somewhere between 10^13 and 10^15 connections between those neurons.

    I need to say here that I am not a dualist. I firmly believe that intelligence and consciousness and sentience are based on matter. I just wonder if we will ever be able to develop a consensual and rigorous understanding of the mind.

    As for the Singularity, I’m not opposed to it. Still, there’s a reason it’s called the Geek Rapture – because, like religious rapture, it’s based on faith rather than reason and scientific investigation. On the other hand, I could be wrong :).

  7. jon says:

    “There’s a fundamental problem with those predicting machine intelligence. To wit, we cannot even really define human intelligence, or consciousness or sentience.”

    Absolutely agree with all of @Rick York’s points, which is why we’re bound to see several (if not thousands or millions of) ‘false alarm’ singularities, in which a particular AI is claimed by large numbers of people to be sentient, and perhaps even trusted to advise or lead them: seemingly superior to human thought and decision-making, but with no universally accepted test of its abilities. I wonder if a movement of ‘AI skeptics’ will arise to combat anecdotal claims of machine or other intelligence, and whether, like scientific skeptics today, they will often find themselves in the minority or shouted down by the believers.

  8. Michael Anissimov says:

    I DO get the feeling that Stross or Knapp want AI to be impossible or even difficult. Look at Knapp’s post on mind uploading — he predicts that a galactic empire will rise and fall before we simulate the human brain. Pretty hyperbolic if you ask me. Stross uses his personal lack of desire for “volitional”, non-tool-like AI as an argument that no one will ever try to build it.

  9. Michael Anissimov says:

    Also, Goertzel definitely isn’t my ‘big brother’ — what I say to Knapp stands on its own merits, but yes I concur with his response in this context. Also, my tone is always crowing, because blogging is too boring otherwise.

  10. Alex Knapp says:

    @Michael – It’s not a question of wanting. It’s a question of looking around at the physical laws that govern the universe and noting that, as we understand them, the promises of the Singularity are unlikely. I’d love to travel around in a time-travelling phone box, too. But I know that the physical laws of the universe most likely prevent that.

  11. Paul Raven says:

    … my tone is always crowing, because blogging is too boring otherwise.

    An odd – and, might I suggest, self-defeating? – attitude for a vocal advocate of any philosophy to take, I’d have thought. But to each their own. :)

  12. Athena Andreadis says:

    No worries, Paul — I know your intentions were the best!

    To Rick York: brilliant minds can also go spectacularly astray (I could list dozens, but won’t, we all know the names). Specifically, Penrose’s quantum microtubules concept belongs to the “not even wrong” category. I think that may be due to two factors: physicists do think they can opine about biology from first principles (even though it persists in being messy, dammit, and won’t knuckle down to four-fundamental-forces elegance) and of course QM is almost irresistible as the last refuge of mystics who do not want to be called that. As Jim Oberg said, and à propos the subject at hand, “Keeping an open mind is a virtue, but not so open that your brains fall out.”

  13. Rick York says:

    Athena, thanks for the comment. I was not defending Penrose’s theory so much as I was pointing out how very, very little we know and understand about conscious minds.

    Obviously, there have been extraordinary minds which have gone far astray. Then there are great minds which refuse to exit their intellectual cages. Poincaré would probably have proposed his own theory of relativity if he had not stepped back from the precipice. The pieces were all there.

    Einstein looked at the evidence and jumped.

    Age does not necessarily lead to wisdom. The current political carnival in the U.S. is ample evidence of that. So, at 67, I claim neither wisdom nor the depth of knowledge you demonstrate.

    Probably my biggest personal intellectual achievement has been to recognize that the more I learn, the less I know. I’m trying to keep it that way.

  14. Michael Anissimov says:

    Alex, but you don’t invoke physical laws; your argument is that because we barely understand the brain today, AI won’t be built for over a million years. (About how long it would take for a galactic civilization to form, assuming colonization ships moving from star to star at around 0.1c, a conservative estimate.)
