Tag Archives: Singularity

Charlie Stross’s 21st Century crystal ball

If you’ve not caught it already, you should get over to Charlie Stross’s blog and check out his 21st Century FAQ; it’s your source of rant fodder for the coming week.

For example, in answer to the question “[w]hich of (Socialism | Capitalism | Libertarianism | Fascism | Democracy) is going to save us?”:

We’re still waiting for the definitive ideological polarity of the internet era to emerge, although Bruce Schneier has opined that the key political hot potato of the 21st century will be the question, “how do we maintain the concept of privacy in an age of ubiquitous communications and surveillance”, and some believe that privacy is already dead. Given the way Moore’s Law is taking us towards an essentially unlimited ability to record everything, I’m not able to argue with the inevitability of surveillance: what I’d dispute is the morality of it.

Responses and counter-arguments are cropping up already, naturally enough; for example, here’s Brian Wang refuting Stross’s claim that space colonisation and the Singularity are non-starters:

We know we can send people into interplanetary space for several days (Apollo). We could easily make the trip to Mars in days [using the Orion nuclear rocket configuration] and then on to Jupiter in days. We could bring supplies and radiation protection in cargo equivalent to several Great Pyramids, or to several loaded aircraft carriers.

Plenty of material for discussion for the more geeky water-cooler meet-ups. [image by Patrick Nielsen-Hayden]

So, do we reckon Charlie Stross is a fox or a hedgehog?

Ray Kurzweil: the Movie

Via George Dvorsky, here’s the trailer for Transcendent Man, the forthcoming film about the life and work of Ray Kurzweil:

I’m pretty convinced that Kurzweil actually believes what he says, though only time will tell whether he’s right or not. However, this trailer doesn’t do much to disrupt Kurzweil’s image as a kind of pseudo-religious techno-prophet; disengaging from the subject matter and looking purely at the language and framing, it seems to set him up as a misunderstood Messiah, and that tends to fire up my instinctive BS detector much more than speculations on the developmental curve of technology.

What’s your take on Kurzweil – deluded crank or visionary genius? Or something in between?

Ian McDonald on our digital doppelgangers

The BBC is running an essay by Ian McDonald, author of Brasyl and River of Gods (and many more sf novels). Despite being a deliberate laggard on social networking and metaverse platforms himself, McDonald suggests that the science fictional trope of the uploaded human consciousness is already becoming true by degrees:

Our You2s will ever more closely resemble us, and become more and more intelligent as they make linkages between the information we placed there. They’ll take decisions without our interference – and they’ll increasingly talk to each other. It’s no coincidence that the net is shaped like a society.

Perhaps there will never be a single moment when computers become aware. Maybe it will be a slow waking and making sense of that blur of information, like a baby makes sense of the colour patches and patterned sounds into objects and words.

Why should artificial intelligences – our You2s – take any less time to grow up than us?

Artificial intelligences make regular appearances in McDonald’s fiction – and he’s a writer I recommend without hesitation to any science fiction reader – though here it’s almost as if he’s conceding that a kind of ‘soft takeoff’ Singularity is already in its early stages in the real world.

Being a good science fiction writer, though, he’s considering the implications of the future:

What we’ll have is a copy of a personality in a box. It’ll be you in every detail that makes the meat-you you. You2. Only it’s technically immortal as long as the hardware keeps running and is regularly updated. This sounds great, until you realise that the original you still goes down that dark valley from which there is no return…

Quite a timely topic, really, given the recent flare-up of Singularitarian debates. [Hat tip to Ian Sales; image by your humble correspondent.]

The three schools of Singularitarianism

The announcement of Ray Kurzweil’s Singularity University project (and the inevitable backlash against it) has people talking about the S-word again… much to the ire of transhumanist thinkers like Michael Anissimov, who points out that there are three competing ‘schools’ of thinking about the Singularity, each of which hinges on a different interpretation of a word that has, as a result, lost any useful meaning.

The “Accelerating Change” school is probably the closest to Kurzweil’s own philosophy, but it is also Kurzweil’s quasi-religious presentation style (not to mention judicious hand-waving and fact-fudging) that makes it the easiest to attack. [image by null0]

Anissimov finds himself closer to the “Event Horizon” and “Intelligence Explosion” schools:

These other schools point to the unique transformative power of superintelligence as a discrete technological milestone. Is technology speeding up, slowing down, staying still, or moving sideways? Doesn’t matter — the creation of superintelligence would have a huge impact no matter what the rest of technology is doing. To me, the relevance of a given technology to humanity’s future is largely determined by whether it contributes to the creation of superintelligence or not, and if so, whether it contributes to the creation of friendly or unfriendly superintelligence. The rest is just decoration.

To many people, that may not sound any more reassuring than Kurzweil’s exponential curve of change – if anything, it may sound less so. And with good reason:

That’s the thing about superintelligence that so offends human sensibilities. Its creation would mean that we’re no longer the primary force of influence on our world or light cone. It’s funny how people then make the non sequitur that our lack of primacy would immediately mean our subjugation or general unhappiness. This comes from thousands of years of cultural experience of tribes constantly killing each other. Fortunately, superintelligence need not have the crude Darwinian psychology of every organism crafted by biological evolution, so such assumptions do not hold in all cases. Of course, superintelligence might be created with just that selfish psychology, in which case we would likely be destroyed before we even knew what happened. Prolonged wars between beings of qualitatively different processing speeds and intelligence levels are science fiction, not reality.

Superintelligence sounds like a bit of a gamble, then… which is exactly why its proponents suggest we need to study it more rigorously so that – when the inevitable happens – we’re not annihilated by our own creations.

But what’s of relevance here is the sudden attempt by a number of transhumanist and Singularitarian thinkers to distance themselves from Kurzweil’s PT Barnum schtick in search of greater respectability for their less sensationalist ideas. Philosophical schisms have a historical tendency to become messy; while I don’t expect this one to result in bloodshed (although one can’t completely rule out some Strossian techno-jihad played out in near-Earth orbit a hundred years hence), I think we can expect some heated debate in the months to come.

Singularitarianism 101: What’s the point of uploading your mind?

Transhumanist thinker Michael Anissimov has decided to attempt answering the question that almost everybody asks about the idea of universal mind uploading – namely, why the hell would we want to do it?

His seven reasons include economic growth (topical), greater subjective well-being and environmental recovery, but the one that will probably surprise most of all is his suggestion that mind uploading would forge closer connections with other humans:

Our interactions with other people today are limited by the very low bandwidth of human speech and facial expressions. By offering partial readouts of our cognitive state to others, we could engage in a deeper exchange of ideas and emotions. I predict that “talking” as communication will become passé — we’ll engage in much deeper forms of informational and emotional exchange that will make the talking and facial expressions of today seem downright empty and soulless.

It all sounds a bit like a Greg Egan novel, doesn’t it? Personally, I’m first in the queue for upload (assuming it becomes possible within my lifetime), as I find corporeal existence to be massively distracting – I could get so much more done if I didn’t have this bag of meat to worry about… [image by Alex // Berlin]