Singularity beef, day 2

Paul Raven @ 24-06-2011

Well, we’re off to a good start. Alex “Robot Overlords” Knapp also picked up on Stross’ skeptical post and Anissimov’s rebuttal thereof, and posted his own response. An excerpt:

Anissmov’s first point here is just magical thinking. At the present time, a lot of the ways that human beings think is simply unknown. To argue that we can simply “workaround” the issue misses the underlying point that we can’t yet quantify the difference between human intelligence and machine intelligence. Indeed, it’s become pretty clear that even human thinking and animal thinking is quite different. For example, it’s clear that apes, octopii, dolphins and even parrots are, to certain degrees quite intelligent and capable of using logical reasoning to solve problems. But their intelligence is sharply different than that of humans. And I don’t mean on a different level — I mean actually different. On this point, I’d highly recommend reading Temple Grandin, who’s done some brilliant work on how animals and neurotypical humans are starkly different in their perceptions of the same environment.

Knapp’s argument here is familiar from other iterations of this debate, and basically hinges on what, for want of a better phrase, we might call neurological exceptionalism – the theory that human consciousness is an emergent function of human embodiment, and too complex to be replicated with pure hardware. (I’m maintaining my agnosticism, here, by the way; I know way too little about any or all of these fields of research to start coming to conclusions of my own. I have marks on my arse from being sat on the fence, and I’m just fine with that.)

But my biggest take-away from Knapp’s post, plus Ben Goertzel’s responses to it in the comments, and Mike Anissimov’s response at his own site? That the phrase “magical thinking” is the F-bomb of AI speculation, and gets taken very personally. Anissimov counters Knapp with some discussion of Bayesian models of brain function, which is interesting stuff. This paragraph is a bit odd, though:

Even if we aren’t there yet, Knapp and Stross should be cheering on the incremental effort, not standing on the sidelines and frowning, making toasts to the eternal superiority of Homo sapiens sapiens. Wherever AI is today, can’t we agree that we should make responsible effort towards beneficial AI? Isn’t that important? Even if we think true AI is a million years away because if it were closer then that would mean that human intelligence isn’t as complicated and mystical as we had wished? [Emphasis as found in original.]

This appeal to an emotional or ethical response to the debate seems somewhat out of character, and the line about “toasting the superiority” feels a bit off; I don’t get any sense that Stross or Knapp want AI to be impossible or even difficult. The rather crowing tone rolled out as Anissimov cheerleads for Goertzel’s ‘scolding’ of Knapp (delivered from the comfort of his own site) smacks more than a little of “yeah, well, tell that to my big brother, then”. There are also two comments on that latter post from one Alexander Kruel that appear to point out some inconsistencies in Goertzel’s responses… though I’d note that I’m more worried by experts whose opinions never change than by those who adapt their ideas to the latest findings. This is an instance where the language used in defence of an argument is at least as interesting as the argument itself… or at least it is to me. YMMV, and all that.
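As an aside for the curious: the Bayesian models of brain function that Anissimov invokes ultimately rest on plain Bayesian belief updating. Here’s a toy sketch of that core idea — an agent revising its confidence in a sensory hypothesis as noisy evidence accumulates. All the numbers and hypothesis labels below are invented for illustration, not taken from any of the posts under discussion:

```python
# Toy Bayesian belief updating, the mechanism at the heart of
# "Bayesian brain" models: hold a prior over hypotheses, then
# revise it with each noisy observation via Bayes' rule.

def bayes_update(prior, likelihoods):
    """Return the posterior over hypotheses after one observation.

    prior       -- list of P(H) for each hypothesis H
    likelihoods -- list of P(E|H) for the observation E under each H
    """
    unnormalised = [p * l for p, l in zip(prior, likelihoods)]
    total = sum(unnormalised)
    return [u / total for u in unnormalised]

# Two hypotheses about a sensory cause: "edge present" vs "no edge".
belief = [0.5, 0.5]  # flat prior: no idea either way

# Each observation gives (P(E|edge), P(E|no edge)) -- noisy but informative.
observations = [(0.8, 0.3), (0.7, 0.4), (0.9, 0.2)]

for obs in observations:
    belief = bayes_update(belief, obs)

print(belief)  # confidence in "edge present" has climbed well above 0.5
```

The point of the toy model is just that a handful of individually unreliable observations can push a belief from total uncertainty to near-certainty — which is the flavour of claim the Bayesian-brain crowd makes about perception.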

The last word in today’s round-up goes to molecular biologist and regular Futurismic commenter Athena Andreadis, who has republished an essay she placed with H+ Magazine in late 2009. It’s an argument from biological principles against the possibility of reproducing consciousness on non-biological substrates:

To place a brain into another biological body, à la Mary Shelley’s Frankenstein, could arise as the endpoint extension of appropriating blood, sperm, ova, wombs or other organs in a heavily stratified society. Besides being de facto murder of the original occupant, it would also require that the incoming brain be completely intact, as well as able to rewire for all physical and mental functions. After electrochemical activity ceases in the brain, neuronal integrity deteriorates in a matter of seconds. The slightest delay in preserving the tissue seriously skews in vitro research results, which tells you how well this method would work in maintaining details of the original’s personality.

To recreate a brain/mind in silico, whether a cyborg body or a computer frame, is equally problematic. Large portions of the brain process and interpret signals from the body and the environment. Without a body, these functions will flail around and can result in the brain, well, losing its mind. Without corrective “pingbacks” from the environment that are filtered by the body, the brain can easily misjudge to the point of hallucination, as seen in phenomena like phantom limb pain or fibromyalgia.

Additionally, without context we may lose the ability for empathy, as is shown in Bacigalupi’s disturbing story People of Sand and Slag. Empathy is as instrumental to high-order intelligence as it is to survival: without it, we are at best idiot savants, at worst psychotic killers. Of course, someone can argue that the entire universe can be recreated in VR. At that point, we’re in god territory … except that even if some of us manage to live the perfect Second Life, there’s still the danger of someone unplugging the computer or deleting the noomorphs. So there go the Star Trek transporters, there go the Battlestar Galactica Cylon resurrection tanks.

No signs of anyone backing down from their corner yet, with the exception of Alex Knapp apologising for the “magical thinking” diss. Stay tuned for further developments… and do pipe up in the comments if there’s more stuff that I’m missing, or if you’ve your own take on the topic.


Machines That Think

Brenda Cooper @ 03-06-2009

Welcome to the inaugural column of Today’s Tomorrows here at Futurismic. For any readers who missed my introduction, I’m going to explore a science topic a month, with both some evaluation of current news on the topic and a chat about how it has been dealt with in science fiction.

A few days ago, I was at a futurist technology conference called FiRE in San Diego, listening to new developments in multiple fields. The speed of change right now is amazing. We first flew in 1903. Today, we have a space program that ranges from commercial ventures like SpaceX to NASA flying by Saturn and operating remote-control rovers on Mars. In 1993, the Mosaic browser gave us easy, popular access to the computing tools that created cyberspace; I’m reading information from all over the world in order to compose this article. My iPhone has more computing power than the room-sized computer I used to support the City of Fullerton, CA.


Will the internet wake up one day?

Paul Raven @ 01-05-2009

New Scientist is running a series of pieces on “the unknown internet”, dealing with some of the more frequently asked but infrequently answered questions about our globally pervasive intangible friend. And what better question than the biggest: could the internet become self-aware? To which the answer is, apparently, “yes, but not like SkyNet in that movie”. [image by Marcelo Alves]

Not that it will necessarily have the same kind of consciousness as humans: it is unlikely to be wondering who it is, for instance. To Francis Heylighen, who studies consciousness and artificial intelligence at the Free University of Brussels (VUB) in Belgium, consciousness is merely a system of mechanisms for making information processing more efficient by adding a level of control over which of the brain’s processes get the most resources. “Adding consciousness is more a matter of fine-tuning and increasing control… than a jump to a wholly different level,” Heylighen says.

How might this manifest itself? Heylighen speculates that it might turn the internet into a self-aware network that constantly strives to become better at what it does, reorganising itself and filling gaps in its own knowledge and abilities.

If it is not already semiconscious, we could do various things to help wake it up, such as requiring the net to monitor its own knowledge gaps and do something about them. It shouldn’t be something to fear, says Goertzel: “The outlook for humanity is probably better in the case that an emergent, coherent and purposeful internet mind develops.”

So, it might well become self-organising and self-improving, but it’s not going to start asking itself philosophical questions with disturbingly nihilistic eschatological answers. Which is kind of reassuring and disappointing at once… but maybe that’s just what it wants us to think, eh?

I mean, has anyone ever met this Goertzel guy? How do we know he’s not just a digital figment that the internet has created as a PR tool to cover its tracks? What if it really woke up in around 1996 after a particularly acerbic post from Tim Berners-Lee, and has ever since been gorging itself on dropped packets, misspelled tweets and bandwidth scavenged from garish gifs spread across a multitude of automatically-registered Geocities accounts?

What if most of what we read every day is in fact created by the internet’s capricious and playful hive-mind, just to see how we react? 4chan, the Chocolate Rain guy, Cory Doctorow and the country of Moldova, all just slices of a fictional world designed to distract us from the Matrix-esque meat-factories in which our dreaming bodies are incarcerated and milked for cellular energy to drive an ever-expanding cloud of computronium… I’M ON TO YOU, INTERNET! YOU’LL NEVER TAKE ME ALIVE!

Nurse, I think it’s time for my pills.


Ian McDonald on our digital doppelgangers

Paul Raven @ 12-02-2009

The BBC is running an essay by Ian McDonald, author of Brasyl and River of Gods (and many more sf novels). Despite being a deliberate laggard on social network and metaverse platforms himself, McDonald suggests that the science fictional trope of the uploaded human consciousness is already becoming true by degrees:

Our You2s will ever more closely resemble us, and become more and more intelligent as they make linkages between the information we placed there. They’ll take decisions without our interference – and they’ll increasingly talk to each other. It’s no coincidence that the net is shaped like a society.

Perhaps there will never be a single moment when computers become aware. Maybe it will be a slow waking and making sense of that blur of information, like a baby makes sense of the colour patches and patterned sounds into objects and words.

Why should artificial intelligences – our You2s – take any less time to grow up than us?

Artificial intelligences make regular appearances in McDonald’s fiction – and he’s a writer I recommend without hesitation to any science fiction reader – though here it’s almost as if he’s conceding that a kind of ‘soft takeoff’ Singularity is already in its early stages in the real world.

Being a good science fiction writer, though, he’s considering the implications of the future:

What we’ll have is a copy of a personality in a box. It’ll be you in every detail that makes the meat-you you. You2. Only it’s technically immortal as long as the hardware keeps running and is regularly updated. This sounds great, until you realise that the original you still goes down that dark valley from which there is no return…

Quite a synchronous topic, really, given the recent flare-up of Singularitarian debates. [Hat tip to Ian Sales; image by your humble correspondent.]