Tag Archives: singularitarianism

Stross starts Singularity slapfight

Fetch your popcorn, kids, this one will run for at least a week or so in certain circles. Tonight’s challenger in the blue corner, it’s the book-writing bruiser from Edinburgh, Charlie Stross, coming out swinging:

I can’t prove that there isn’t going to be a hard take-off singularity in which a human-equivalent AI rapidly bootstraps itself to de-facto god-hood. Nor can I prove that mind uploading won’t work, or that we are or aren’t living in a simulation. Any of these things would require me to prove the impossibility of a highly complex activity which nobody has really attempted so far.

However, I can make some guesses about their likelihood, and the prospects aren’t good.

And now, dukes held high in the red corner, Mike Anissimov steps into the ring:

I do have to say, this is a novel argument that Stross is forwarding. Haven’t heard that one before. As far as I know, Stross must be one of the only non-religious thinkers who believes human-level AI is impossible. In a literature search I conducted in 2008 looking for academic arguments against human-level AI, I didn’t find much — mainly just Dreyfus’s What Computers Can’t Do and the people who argued against Kurzweil in Are We Spiritual Machines? “Human level AI is impossible” is one of those ideas that Romantics and non-materialists find appealing emotionally, but backing it up is another matter.

Seriously, I just eat this stuff up – and not least because I’m fascinated by the ways different people approach this sort of debate. Rhetorical fencing lessons, all for free on the internet!

Me, I’m kind of an AI agnostic. I’ve believed for some time now that the AI question is one of those debates that can only ever be truly put to rest by a conclusive success; failures only act as intellectual fuel for both sides.

(Though there is a delightfully piquant inversion of stereotypes when one sees a science fiction author being castigated for highlighting what he sees as the implausibility of a classic science fiction trope… and besides, I’d rather have people worrying about how to handle the emergence of a hard-takeoff Singularity than writing contingency plans for a zombie outbreak that will never happen.)

Transcendent Men: is transhumanism ready for its close-up?

So, here’s a little reminder for UK people (plus anyone rockin’ the transAtlantic jet-set lifestyle who has nothing else planned for the weekend) that yours truly is appearing on a panel discussion being held by the UK branch of Humanity+ in London on Saturday. The kick-off topic is Ray Kurzweil’s infomercial/movie/biopic, Transcendent Man, though I expect the focus will wander somewhat. (When you put me on a discussion panel, digression comes as standard… and if I’m still as fuzzed-out with a headcold as I am right now, I may struggle to recall my own name, let alone the subject under discussion. Selah.)

I actually watched Transcendent Man a few weeks back; it wasn’t what I was expecting, to be quite honest. I assumed we’d get a lot of flash-bang technowonder footage running through a ticklist of transhuman ideals, spangly visuals and a trendy post-Noughties electronica soundtrack pumping away underneath; instead, Transcendent Man is surprisingly calm and restrained, focussing as much on Kurzweil himself as it does the movement he’s implicitly placing himself at the vanguard of, if not more. I was pleased to see plenty of dissenting opinions from futurist figureheads like Ben Goertzel (novelty hats!) and Kevin Kelly (novelty beard!), but disappointed that these weren’t addressed more thoroughly – though given the constraints of the feature-film format and the underlying propagandist purpose of the movie, I’m not entirely surprised.

But the big takeaway for me was the framing of Kurzweil as a man chasing immortality technology because he wants to reincarnate his father, a talented composer and musician who died an untimely death; to some extent this humanises Kurzweil and his transhuman yearnings, but also (subtly but quite deliberately, I expect) gives him a kind of Christ-like subtext. Sacrifice and resurrection, the father and the son, the transcendence of base human existence, giving sight to the blind, healing the sick… all very Biblical, in a secular kind of way. Given Kurzweil’s undeniable intelligence and focus on long-term goals, I’m reading Transcendent Man as a very literal text; I think it only reasonable to assume that there’s nothing in there that the man himself didn’t want included. He’s a shrewd publicist, and understands the power of narrative; the narrative here is much more about Kurzweil himself than H+ as a movement, but it also seeks to make the connection between the two an explicit one: Kurzweil sees himself as instigator and leader of a crusade to conquer death itself.

Of course, that’s my reading of it, which is – quite naturally – informed by my own sceptical-fellow-traveller status, and I look forward to finding out what confirmed H+ adherents have taken from it. An early taster can be found over at H+ Magazine from none other than R U Sirius:

Transcendent Man is not exactly a portrait of Ray Kurzweil, although there is some of that. And it’s not exactly an exploration of his ideas, although there is some of that too. It’s a portrait of a man on a mission — the person and the message inextricably linked together — and it leaves a viewer with the strong impression that the man is the mission. The film carries, over all, a rather somber ambiance, a feeling that is helped along by a disquieting original soundtrack by Philip Glass. There are lots of shots of Ray popping vitamin and nutrient pills; speaking in public, pontificating on his theories. All this is coupled with his — and his mother’s — memories about the death of his father, which seems to be a mission-defining trauma at the heart of his quest. And there are a fair number of talking heads supporting or criticizing Ray’s visions, including Kevin Kelly characterizing Ray as a prophet… “but wrong.” In a quiet moment, Ray appears to be deeply and sadly reflecting on something as he gazes out at the ocean. A voice off camera asks him what he’s thinking about. He hesitates for quite a few beats before saying (I’m paraphrasing) that he was thinking about the computational complexity of the natural world. A few seconds later, he says something that rings more true — that he always finds the ocean soothing. (So do I.)

(Interestingly enough, the scene Sirius mentions there was the one that felt to me the most staged and false, as if Kurzweil knew he needed to expose his emotional core but struggled to do so with authenticity… which isn’t to suggest he was faking it so much as he was perhaps struggling to let go of the incredible degree of self-control he imposes – by necessity – upon himself.)

The film will probably not leave most viewers with a visceral impression of an energized life full of joy and companionship — the one exception is toward the end of the film when Ray is part of a group that gets to experience zero gravity. We see an expression of pure happiness wash over Ray’s face and notice a real sense of bonhomie among all the participants. But on the whole, a cynic might see in this film a portrait of a life lived in pursuit of more life.

Sirius hits it on the nose for me, here; I came away from Transcendent Man with an image of Kurzweil as a man so driven that he can no longer extricate his life from his desire to extend said life, a kind of tragic Sisyphean figure. I fully expect someone more convinced by the Singularitarian schedule would read his character very differently, though; how the everyman public reads it remains to be seen (assuming it makes enough of a splash that anyone who isn’t already H+-curious bothers to check it out – it doesn’t exactly drip with box-office blockbuster potential).

Indeed, the reason I expected a more dynamic and exciting experience from Transcendent Man is that I assumed it was intended as a vehicle for popularising the H+ movement beyond its current main catchment zone (which is predominantly affluent white Western males with technological backgrounds). I’ve spent the last four or five years watching H+ memes pop up in pop-culture niches, and I’m now beginning to wonder if Transcendent Man is designed to publicly define the core ideals of a concept that has already started to metastasise and mutate its way through the body politic – not just a statement of ownership, but an attempt to build a canonical “party line”, if you like. What I’m certain of is that the H+/Singularitarian memes are spreading, and that these troubled times are rich loam for the seeds of any transcendent philosophy. Furthermore, it’s a philosophy that can easily be hijacked, remixed and radicalised (transhuman separatism, anyone?), and I suspect Kurzweil can see that coming, too; whether he’ll succeed in becoming the official figurehead for the “classical” core of the movement (and whether that will be an enviable position to be in) remains an open question.

Forbes acquires cigar-chompin’ H+ blogger

They’re coming up like crocuses in the park: thanks to Mike Anissimov, we find that Forbes is the latest mainstream news outlet to hire on a blogger for the transhumanist/disruptive-tech/speculative-futures beat, in the form of Alex Knapp (who may not actually chomp cigars with any regularity at all, but hey: give yourself a masthead mugshot like that, and people are gonna jump to conclusions).

“Great, another naive singularitarian with a blog,” you may be thinking. “Like we need more of those, AMIRITEZ?” Well, give the guy a chance – looks to me like he’s going to be a lot less starry-eyed than some of the transhuman (ir)regulars, as this post responding to an H+ Magazine piece demonstrates:

The article goes on […] speculating the ways in which an advanced artificial intelligence might lower cancer risks or even develop alternative forms of energy.  But of course, nowhere does the article discuss how such an intelligence might be developed.  Nowhere does it discuss how you get from artificial general intelligence to the ability to model complex systems.  Nor does it discuss the limitations of such modeling.  No mention is made of potential drawbacks, technological failures, or anything.  It’s pure fantasy, masquerading as a serious proposal because it has a veneer of technology to it.

But frankly, you can show the reliance on magical thinking with just a few quick word changes.  For example, I’m going to change the title of the article to “Could Djinn Prevent Future Nuclear Disasters?”, then make just a handful of word changes to the paragraphs quoted:

“What is really needed, to prevent being taken unawares by “freak situations” like what we’re seeing in Japan, is a radically lower-cost way of evaluating the likely behaviors of our technological constructs in various situations, including those judged plausible but unlikely (like a magnitude 9 earthquake). Due to the specialized nature of technological constructs like nuclear reactors, however, this is a difficult requirement to fulfill using human labor alone. It would appear that finding magic lamps that hold Djinn has significant potential to improve the situation.

A Djinn would have been able to take the time to simulate the behavior of Japanese nuclear reactors in the case of large earthquakes, tidal waves, etc. Such simulations would have very likely led to improved reactor designs, avoiding this recent calamity plus many other possible ones that we haven’t seen yet (but may see in the future).”

I could, in fact, go through the entire article, replacing “AGI” with “Djinn” and a few other tweaks for consistency and not change the meaning of the article one iota.  Now to be fair, I don’t know if this author has grappled with these technological issues elsewhere, but as far as this article is concerned, wishing for a Commander Data or Stephen Byerley has about as much credence as wishing for a Djinn.  It’s simply not a practical solution for the moment.

I like him already!

[ A note to other editors looking to expand their stable of blogs with a soupçon of futurism and H+: this gun’s for hire, folks. *waves* ]

Schismatic transhuman sects

Ah, more fuel for my puny brain-engine as it flails desperately to put together a coherent position for the H+ UK panel in April. Having already set myself up as a fellow-traveller/fence-sitter, the landscape surrounding the “transhumanist movement” is slowly revealing itself, as if the “fog of war” were lifting in some intellectual real-time-strategy game. What is increasingly plain is that there is no coherent “transhumanist movement”, and that this incoherence will increase – as entropy always does – under the grow-lamps of international media attention, controversies (manufactured and actual), radically perpendicular or oppositional philosophies and bandwagoning Jenny-come-latelys. In short, interesting times.

For instance: the Transhuman Separatist Manifesto, which prompted a swift counterargument against transhuman militance. A co-author of the former attempts to clarify the manifesto’s position:

We Transhuman Separatists define ourselves as Transhuman. Other Transhumanist schools of thought view H+ as a field of study. While I am fascinated by the field of Transhumanism, I would argue that H+ is most fundamentally a lifestyle — not a trend or a subculture, but a mode of existence. We are biologically human, but we share a common understanding and know that we are beyond human. We Transhuman Separatists are interested in making this distinction through separation.

Do we wish to form a Transhumanist army, and kill the humans who aren’t on our level? My answer here is an obvious no. Do we advocate Second Amendment rights? Absolutely. If anyone attempted to kill me for being weird, I would need to be able to defend myself. There may not currently be people out there who are killing anyone who is H+, but stranger things have happened in our society. If nobody was to attack us, we would not commit violence against anyone. We have no desire to attack the innocent.

I think there is a class distinction in the H+ community. Those of us in the lower/working classes have been through a lot of horrible experiences that those of us in the middle/upper classes might be unable to understand. We have our own form of elitism, which is related to survival, and many of us feel the need for militance. We feel like we have become stronger through our trials and tribulations. Think of us as Nietzschean Futurists. Our goal is to separate from the human herd and use modern technology to do it.

When Haywire claims that transhuman separatism is merely a desire to escape the tyranny of biology, I believe hir. I also know very well – as I expect zhe does, even if only at a subconscious level – that not everyone will see it that way. The most important word in those three paragraphs is the opening “we”; it’s the self-identification of a group that are already aware their goals will set them apart from (and quite possibly at ideological opposition to) a significant chunk of the human species. They may not desire militancy, but it will be thrust upon them.

More interesting still is the way the transhumanist meme can cross social barriers you’d not expect it to. Did you know there was a Mormon Transhumanist Association? Well, there is [via TechnOcculT and Justin Pickard]; here’s some bits from their manifesto:

  1. We seek the spiritual and physical exaltation of individuals and their anatomies, as well as communities and their environments, according to their wills, desires and laws, to the extent they are not oppressive.
  2. We believe that scientific knowledge and technological power are among the means ordained of God to enable such exaltation, including realization of diverse prophetic visions of transfiguration, immortality, resurrection, renewal of this world, and the discovery and creation of worlds without end.
  3. We feel a duty to use science and technology according to wisdom and inspiration, to identify and prepare for risks and responsibilities associated with future advances, and to persuade others to do likewise.

So much for the notion of transhumanism as an inherently rationalist/atheist position, hmm? (Though I’d rather have the Mormons dabbling in transhumanism than the evangelicals; the thought of a hegemonising swarm of cyborg warriors-in-Jeebus is not a particularly cheery one for anyone outside said swarm.)

And let’s not forget the oppositional philosophies. For example, think of Primitivism as Hair-shirt Green taken to its ultimate ideological conclusion: planet screwed, resources finite and dwindling, civilisation ineluctably doomed, resistance is futile, go-go hunter-gatherer.

The aforementioned Justin Pickard suggested to me a while back that new political axes may be emerging to challenge or counterbalance (or possibly just augment) the tired Left-Right dichotomy, and that one of those axes might be best labelled as [Bioconservative<–>Progressive]; Primitivism and Militant Transhumanist Separatism have just provided the data points between which we might draw the first rough plot of that axis, but there’ll be more to come, and soon.

[$mind]!=[$computer]: why uploading your brain probably won’t happen

Via Science Not Fiction, here’s one Timothy B Lee taking down that cornerstone of Singularitarianism, the uploading of minds to digital substrates. How can we hope to reverse-engineer something that wasn’t engineered in the first place?

You can’t emulate a natural system because natural systems don’t have designers, and therefore weren’t built to conform to any particular mathematical model. Modeling natural systems is much more difficult—indeed, so difficult that we use a different word, “simulation”, to describe the process. Creating a simulation of a natural system inherently means making judgment calls about which aspects of a physical system are the most important. And because there’s no underlying blueprint, these guesses are never perfect: it will always be necessary to leave out some details that affect the behavior of the overall system, which means that simulations are never more than approximately right. Weather simulations, for example, are never going to be able to predict precisely where each raindrop will fall; they only predict general large-scale trends, and only for a limited period of time. This is different than an emulator, which (if implemented well) can be expected to behave exactly like the system it is emulating, for as long as you care to run it.

Hanson’s fundamental mistake is to treat the brain like a human-designed system we could conceivably reverse-engineer rather than a natural system we can only simulate. We may have relatively good models for the operation of nerves, but these models are simplifications, and therefore they will differ in subtle ways from the operation of actual nerves. And these subtle micro-level inaccuracies will snowball into large-scale errors when we try to simulate an entire brain, in precisely the same way that small micro-level imperfections in weather models accumulate to make accurate long-range forecasting inaccurate.
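Lee’s snowballing-error point is easy to demonstrate. Here’s a minimal sketch using the logistic map – a hypothetical stand-in for any nonlinear natural process, not anything from Lee’s or Hanson’s actual models – showing how two simulations that start almost identically diverge completely within a few dozen steps:

```python
# Toy illustration of micro-level error snowballing in a chaotic system.
# The logistic map is just a convenient stand-in for a nonlinear process.

def logistic_step(x, r=4.0):
    """One step of the logistic map: x -> r * x * (1 - x)."""
    return r * x * (1.0 - x)

def trajectory(x0, steps, r=4.0):
    """Iterate the map from x0, returning every intermediate state."""
    xs = [x0]
    for _ in range(steps):
        xs.append(logistic_step(xs[-1], r))
    return xs

# Two "simulations" differing by one part in ten billion at the start --
# the kind of unavoidable inaccuracy any model of a natural system carries.
a = trajectory(0.4, 60)
b = trajectory(0.4 + 1e-10, 60)

divergence = [abs(x - y) for x, y in zip(a, b)]

# Early on the two runs agree to many decimal places; within a few dozen
# steps the difference grows to the same order as the signal itself.
print(f"step 5 difference:  {divergence[5]:.2e}")
print(f"largest difference: {max(divergence):.2e}")
```

A long-range weather forecast fails for the same reason: the model’s small simplifications compound exponentially, which is exactly the problem a whole-brain simulation built from simplified nerve models would face.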

As discussed before, I rather think that mind simulation – much like its related discipline, general artificial intelligence – is one of those things whose possibility will only be resolved by its achievement (or lack thereof). Which, come to think of it, might explain the somewhat theological flavour of the discourse around it…