
Schismatic transhuman sects

Ah, more fuel for my puny brain-engine as it flails desperately to put together a coherent position for the H+ UK panel in April. Having already set myself up as a fellow-traveller/fence-sitter, the landscape surrounding the “transhumanist movement” is slowly revealing itself, as if the “fog of war” were lifting in some intellectual real-time-strategy game. What is increasingly plain is that there is no coherent “transhumanist movement”, and that this incoherence will increase – as entropy always does – under the grow-lamps of international media attention, controversies (manufactured and actual), radically perpendicular or oppositional philosophies and bandwagoning Jenny-come-latelys. In short, interesting times.

For instance: the Transhuman Separatist Manifesto, which prompted a swift counterargument against transhuman militance. A co-author of the former attempts to clarify the manifesto’s position:

We Transhuman Separatists define ourselves as Transhuman. Other Transhumanist schools of thought view H+ as a field of study. While I am fascinated by the field of Transhumanism, I would argue that H+ is most fundamentally a lifestyle — not a trend or a subculture, but a mode of existence. We are biologically human, but we share a common understanding and know that we are beyond human. We Transhuman Separatists are interested in making this distinction through separation.

Do we wish to form a Transhumanist army, and kill the humans who aren’t on our level? My answer here is an obvious no. Do we advocate Second Amendment rights? Absolutely. If anyone attempted to kill me for being weird, I would need to be able to defend myself. There may not currently be people out there who are killing anyone who is H+, but stranger things have happened in our society. If nobody was to attack us, we would not commit violence against anyone. We have no desire to attack the innocent.

I think there is a class distinction in the H+ community. Those of us in the lower/working classes have been through a lot of horrible experiences that those of us in the middle/upper classes might be unable to understand. We have our own form of elitism, which is related to survival, and many of us feel the need for militance. We feel like we have become stronger through our trials and tribulations. Think of us as Nietzschean Futurists. Our goal is to separate from the human herd and use modern technology to do it.

When Haywire claims that transhuman separatism is merely a desire to escape the tyranny of biology, I believe hir. I also know very well – as I expect zhe does, even if only at a subconscious level – that not everyone will see it that way. The most important word in those three paragraphs is the opening “we”; it’s the self-identification of a group that is already aware its goals will set it apart from (and quite possibly in ideological opposition to) a significant chunk of the human species. They may not desire militancy, but it will be thrust upon them.

More interesting still is the way the transhumanist meme can cross social barriers you’d not expect it to. Did you know there was a Mormon Transhumanist Association? Well, there is [via TechnOcculT and Justin Pickard]; here’s some bits from their manifesto:

  1. We seek the spiritual and physical exaltation of individuals and their anatomies, as well as communities and their environments, according to their wills, desires and laws, to the extent they are not oppressive.
  2. We believe that scientific knowledge and technological power are among the means ordained of God to enable such exaltation, including realization of diverse prophetic visions of transfiguration, immortality, resurrection, renewal of this world, and the discovery and creation of worlds without end.
  3. We feel a duty to use science and technology according to wisdom and inspiration, to identify and prepare for risks and responsibilities associated with future advances, and to persuade others to do likewise.

So much for the notion of transhumanism as an inherently rationalist/atheist position, hmm? (Though I’d rather have the Mormons dabbling in transhumanism than the evangelicals; the thought of a hegemonising swarm of cyborg warriors-in-Jeebus is not a particularly cheery one for anyone outside said swarm.)

And let’s not forget the oppositional philosophies. For example, think of Primitivism as Hair-shirt Green taken to its ultimate ideological conclusion: planet screwed, resources finite and dwindling, civilisation ineluctably doomed, resistance is futile, go-go hunter-gatherer.

The aforementioned Justin Pickard suggested to me a while back that new political axes may be emerging to challenge or counterbalance (or possibly just augment) the tired Left-Right dichotomy, and that one of those axes might be best labelled as [Bioconservative<–>Progressive]; Primitivism and Militant Transhumanist Separatism have just provided the data points between which we might draw the first rough plot of that axis, but there’ll be more to come, and soon.

Nerd rapture, redux: Annalee Newitz on why the Singularity ain’t gonna save us

Well, this should infuriate the usual suspects (and provoke more measured and considered responses from a few others). io9 ed-in-chief Annalee Newitz steps up to the plate to lay the smackdown on the Singularity as glorious transcendent happily-ever-after eschaton:

Though it’s easy to parody the poor guy who talked about potato chips after the Singularity, his faith is emblematic of Singulatarian beliefs. Many scientifically-minded people believe the Singularity is a time in the future when human civilization will be completely transformed by technologies, specifically A.I. and machines that can control matter at an atomic level (for a full definition of what I mean by the Singularity, read my backgrounder on it). The problem with this idea is that it’s a completely unrealistic view of how technology changes everyday life.

Case in point: Penicillin. Discovered because of advances in biology, and refined through advances in biotechnology, this drug cured many diseases that had been killing people for centuries. It was in every sense of the term a Singularity-level technology. And yet in the long term, it wound up leaving us just as vulnerable to disease. Bacteria mutated, creating nastier infections than we’ve ever seen before. Now we’re turning to pro-biotics rather than anti-biotics; we’re investigating gene therapies to surmount the troubles we’ve created by massively deploying penicillin and its derivatives.

That is how Singularity-level technologies work in real life. They solve dire problems, sure. They save lives. But they also create problems we’d never imagined – problems that might have been inconceivable before that Singularity tech was invented.

What I’m saying is that the potato chip won’t taste better after the Singularity because the future isn’t the present on steroids. The future is a mutated bacteria that you never saw coming.

Newitz’s point here, as I understand it, isn’t that technological leaps won’t occur; it’s that those leaps will come with the same sorts of baggage and side-effects that every other technological leap in history has carried with it. The more serious transhumanist commentators will doubtless make the point that they’ve been trying to curb this blue-sky tendency (and kudos to them for doing so), but they’re struggling against a very old human habit – namely the projection of utopian longing onto a future that’s assumed to be transformed by some more-than-human agency.

The more traditional agency of choice has been the local version of the godhead, but technology has usurped its place in the post-theistic classes of the developed world by glomming on to the same psychological yearnings… which is why the Ken MacLeod-coined “Rapture of the Nerds” dig is well-earned in many cases. The more blindly optimistic someone is about “the Singularity” solving all human problems in a blinding flash of transcendence, the less critical thought they tend to have given to what they’re talking about*; faith isn’t necessarily blind, but it has a definite tendency toward myopia, and theists hold no monopoly on that.

Newitz closes out with the following:

All I’m saying is that if you’re looking for a narrative that explains the future, consider this: Does the narrative promise you things that sound like religion? A world where today’s problems are fixed, but no new problems have arisen? A world where human history is irrelevant? If yes, then you’re in the fog of Singularity thinking.

But if that narrative deals with consequences, complications, and many possible outcomes, then you’re getting closer to something like a potential truth. It may not be as tasty as potato chips, but it’s what we’ve got. Might as well get ready for the mutation to begin.

Amen, sister. 🙂

[ * I fully include myself in this castigation; when I started writing for Futurismic, I was a naive and uncritical regurgitator of received wisdoms, though I like to think I’ve moved on somewhat since then. ]

Personality back-ups: immortality through avatars?

The possibility of digitising the human mind is one of those questions that will only be closed by its successful achievement, I think; there’ll always be an argument for its possibility, because the only way to disprove it would be to quantify how personality and mind actually work, and if we could quantify it, we could probably work out a way to digitise it, too. (That said, if someone can chop a hole in my logic train there, I’d be genuinely very grateful to them, because it’s a question that’s bugged me for years, and I haven’t been able to get beyond that point with my bootstrap philosophy chops.)

Philosophical digressions aside, low-grade not-quite-proof-of-concept stuff seems to be the current state of the industry. Via NextNature, New Scientist discusses a few companies trying to capture human personality in computer software:

Lifenaut’s avatar might appear to respond like a human, but how do you get it to resemble you? The only way is to teach it about yourself. This personality upload is a laborious process. The first stage involves rating some 480 statements such as “I like to please others” and “I sympathise with the homeless”, according to how accurately they reflect my feelings. Having done this, I am then asked to upload items such as diary entries, and photos and video tagged with place names, dates and keywords to help my avatar build up “memories”. I also spend hours in conversation with other Lifenaut avatars, which my avatar learns from. This supposedly provides “Linda” with my mannerisms – the way I greet people or respond to questions, say – as well as more about my views, likes and dislikes.

A more sophisticated series of personality questionnaires is being used by a related project called CyBeRev. The project’s users work their way through thousands of questions developed by the American sociologist William Sims Bainbridge as a means of archiving the mind. Unlike traditional personality questionnaires, part of the process involves trying to capture users’ values, beliefs, hopes and goals by asking them to imagine the world a century in the future. It isn’t a quick process: “If you spent an hour a day answering questions, it would take five years to complete them all,” says Lori Rhodes of the nonprofit Terasem Movement, which funds CyBeRev. “But the further you go, the more accurate a representation of yourself the mind file will become.”
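As a back-of-the-envelope check on the timescale quoted above: the “hour a day” and “five years” figures are from the article, but the per-question rate below is purely my own assumption for illustration, not anything CyBeRev has published.

```python
# Rough arithmetic behind the CyBeRev timescale quoted above.
HOURS_PER_DAY = 1        # "an hour a day" (quoted)
DAYS_PER_YEAR = 365
YEARS = 5                # "five years" (quoted)

total_hours = HOURS_PER_DAY * DAYS_PER_YEAR * YEARS
print(total_hours)       # 1825 hours of answering in total

# If each question took, say, five minutes (a hypothetical rate,
# assumed here only to sanity-check "thousands of questions"):
minutes_per_question = 5
questions = total_hours * 60 // minutes_per_question
print(questions)         # 21900 -- consistent with "thousands"
```

So the quoted five-year figure hangs together with the “thousands of questions” claim, at least under a plausible answering rate.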

It’s an interesting article, so go take a look. This little bit got me thinking:

So is it possible to endow my digital double with a believable representation of my own personality? Carpenter admits that in order to become truly like you, a Lifenaut avatar would probably need a lifetime’s worth of conversations with you.

Is that a tacit admission that who we are, at a fundamental level, is a function of everything we’ve ever done and experienced? That to record a lifetime’s worth of experiences and influences would necessarily take a lifetime? Emotionally, I find myself responding to that idea as being self-evident… and it’s the intuitive nature of my response that tells me I should continue to question it.