
The Shameful Joys of Deus Ex: Human Revolutions


  1. Context, Dear Boy… Context

Here is a common complaint:

‘One of the problems facing video game writing is a systemic failure to place games in their correct historical context’

What this generally means is that writers fail to open their reviews with a lengthy diatribe on the history of this or that genre. While I think that there is definitely a place for that type of opening and am quite partial to it myself, I think that the real problem of context is far more local and far less high-minded. The true problem of context is that how you experience a particular video game is likely to be determined by the games you played immediately before. For example, if you move from playing one version of Civilization to the next, then the thing most likely to stand out is the developers’ latest fine-tuning of the game’s basic formula. Conversely, if you pick up Civilization V after Europa Universalis III, you will most likely be struck by the weakness of the AI and the lack of control you have over your own economy. Aesthetic reactions, like all reactions, are highly contextual. This much was evident in the reaction to Eidos Montreal’s recent reboot of the Deus Ex franchise entitled Deus Ex: Human Revolution.

The Outrigger Diaspora

I’ve filed the 100 Year Starship symposium in the steadily swelling folder of “events that make me wish I was located in the States, or that telepresence was a bit more stable and functional”. Athena Andreadis was one of the many speakers, and I’ll look forward to the full version of her talk appearing in the Journal of the British Interplanetary Society, an organisation I should really get round to joining. In the meantime, here’s a few snippets from her post-mortem blog post that sum up why I pay attention to her:

… there is still no firm sense of limits and limitations.  This persistence of triumphalism may doom the effort: if we launch starships, whether of exploration or settlement, they won’t be conquerors; they will be worse off than the Polynesians on their catamarans, the losses will be heavy and their state at planetfall won’t resemble anything depicted in Hollywood SF.

Yes, this. Look, I’m as much a sucker for the macroarchitectural fantasies of high space opera as the next person, but it only takes a cursory knowledge of the space programs we’ve had so far to know that the universe beyond the gravity well is, to quote Phil Anselmo, fucking hostile. And that’s before you even start thinking about aggressive (or simply defensive) alien civilisations or dangerous new biomes inimical to human life-chemistry. We live on the merest speck in an ocean of infinite breadth and depth; to set out without a respectful fear of that infinitude is pure hubris.

Like building a great cathedral, it will take generations of steady yet focused effort to build a functional starship.  It will also require a significant shift of our outlook if we want to have any chance of success.  Both the effort and its outcome will change us irrevocably.

This is pretty much how I’m looking at transhumanism these days; long-range goals are great things to have, but they’re no substitute for realistic and achievable steps along the route of progress. And I’m not sure that we can simply focus on the technological side of things and assume that great strides there will solve our social and economic issues as a by-product; technology and science cannot be discretely set aside from our other projects. The out-bound diaspora of humanity is dependent on its surviving the next handful of decades, and most of the hazards attendant on that timeframe will not be solved by technology alone.

In effect, by sending out long-term planetary expeditions, we will create aliens more surely than by leaving trash on an uninhabited planet.  Our first alien encounter, beyond Earth just as it was on Earth, may be with ourselves viewed through the distorting mirror of divergent evolution.

I read the Strugatsky Brothers’ Roadside Picnic earlier this year, and it struck me then that the planet is already spattered with Zones. But they’re Zones where the Other that left them is Us: they’re the squatter shanties and favelas that have accreted around big cities in the developing world, where the people pick over the stuff we’ve just tossed aside in favour of the next new shiny, where they make do with the cast-offs of yesteryear, and where the street finds new uses for things. A world full of aliens biologically identical to us: plenty of opportunities to practice – if we’re willing – the diplomacy skill-sets we’ll need, should we ever happen across another species on the infinite ocean.


Singularity Summit 2011, New York City

Here’s a heads-up from Mike Anissimov in his capacity as publicity person for the Singularity Institute: said institution is putting on a two-day Singularity Summit in New York in the middle of next month, 15th and 16th of October.

The topical focus is on robots and A.I., in particular that headline-grabbing Jeopardy! victory by IBM’s Watson supercomputer earlier in the year. Ray Kurzweil will be the opening keynote speaker, and others on the roster of big-name boffins include Stephen Wolfram, David Brin, Peter Thiel, Eliezer Yudkowsky and Jaan Tallinn. A more complete breakdown and running order can be found over at H+ Magazine (check the link above).

So, yeah; it’s a bit of a sausage-fest of a line-up, isn’t it? My own chances of attending are at root precluded by my being located on entirely the wrong side of the Atlantic, but if you fancy a weekend with the Big Brainy Boys of Transhumanist Boffinry, tickets are available at the rather terrifying price of US$350 for single day access or a bargain US$585 for both days.

Yeah, I know. How many days’ worth of off-brand metabolic longevity diet supplements would that cover, I wonder? </snark>


Now this is just wonderful, even if it’s a clear response to the start of a long (but maybe not so slow) ramping down from the current consumer-driven innovation model of technology business:

The notion of a “haberdashery for technology” came from traditional haberdasheries which are (or, more often than not, were) filled with knitting needles, sewing machines, patterns, buttons, thread and examples of clothes, bags and quilts that you can make yourself. They tend to have shop assistants who are experts at their craft, as opposed to general salespeople, and they give you advice and host classes to learn new sewing skills.

Hirschmann explains: “Now replace all of that with LEDs, circuit boards, soldering irons and lots of lovely little drawers with resistors, capacitors and switches. The store is immaculately organised and there are explanations of the bits and bobs near all of the components to help demystify what they do and how they might be useful. There is a selection of bespoke DIY kits for you to explore at home.”

Operations like this are a heartening sign, but the ones that last the course will probably be a little less worthy and a lot more ramshackle, much more along the lines of a “bring yer thing and fix it yerself then pay me for the parts” sort of place, a free hackspace that both monetises and entices its meattraffic with the same supplementary offering.

This sort of high-functioning ‘adaptive reuse as business model’ thing is an inevitable necessity for a world with low incomes and limited resources, really… but it’s not a new thing, though: think back not too far to the days when you might have a door-to-door knife-sharpening guy come round the neighbourhood once a season, for instance. As much as we talk about our technologies as being tools, we don’t value them like a really good tool is valued, like a good knife would be sharpened regularly all throughout its long working life. We think of “tools” as being almost a commodity concept nowadays; a word like “power”, “bandwidth”, “leverage”. “Tools” is just our ease of access to Stuff That Does Things, it’s our ability to buy or rent or borrow what we need when we need it.

That ability will cease to pertain in the realm of physical meatspace tools very quickly. This means good tools – well made, well used, maintained and cared for, stored properly – will become valuable social capital in a post-growth economy: an opportunity to contribute rather than a lever for power. Also: the return of the freelance artisan and jack’ll-fix-it, available in both static/urban and nomadic/rural models. Every block or village will have a guy who sysops for local businesses, f’rinstance, and probably another dude who handles the hardware side of things; less glamorously (but equally essentially), you’ll have white-hat infrastructure hackers, people who can patch a local power grid, keep water and sewage systems running, repair or demolish problem architecture… and again, none of this is new. Indeed, it’s current in any major city with a sizeable favela population.

Your city may not have any favelas right now, of course. But it will.

Further weird signals from the nearest strange attractors: some guy hustles Mercedes into sponsoring his prosthetic hand [via MetaFilter]. That’s a novel in nine words, right there, and it’s not even a made-up story. Related: the guy who swapped out his glass eye for a little digicam [via ModeledBehaviour]. These are just two of real on-the-ground transhumanism’s many, many faces; there will be more of them to come. The two greatest mistakes one can make about transhumanism are falling for the Kurzweilian corporate Singularity fantasy (which I increasingly suspect portrays only the parts of the future reserved for shareholders), or assuming that the ludicrousness of said Singularity fantasy invalidates or derails the existence of an observable and growing subculture. (Confession time: I’ve been guilty of both before now.)

To put it another way: we won’t be uploading our minds any time soon, but there’s more unexpected-consequences-of-being-cyborgs in the very near future of our species, without a doubt… because another of those new artisan careers will be the bodysculptor, the back-street surgeon, and they will not be short of work (even if most of it will be elective or cosmetic rather than… functional, shall we say.)

At this point someone is sure to be thinking “but to do that to yourself would be genuinely insane – like, actual pathology craziness!” You’re probably right, too. I think the problem with dismissing the more extreme examples of the transhumanist urge (no matter how shallowly understood it appears to be in each participant) as mental pathology is that doing so is a convenient way of avoiding the need to address the real problem: what’s causing that craziness, and how prevalent is it? The second question is probably the least important, because it’s the one that’ll answer itself very quickly. The answer to the first will be something already embedded deep enough in the body of our civilisation that its removal would kill or cripple us: it is technology itself, and the madness of kids trying to become the Terminator is the madness of a body trying to remake itself in an image more like the ones it dreams of.

It is the madness of being young in a mad world, and it will not be cured or engineered away.

Uplift ethics, round two

Unsurprisingly, there are some responses to my screed from yesterday on the ethics of animal uplift. First up is George Dvorsky’s riposte:

First, when I talk about the “same cognitive gifts that we have,” I am not necessarily suggesting that we humanize non-human animals—though I concede that some human characteristics, such as the capacity for speech and complex recursive language, are important augmentations. More accurately, I am discussing animal uplift in the context of the broader thrust that sees not just humans move away from the Darwinian paradigm, but the entire ecosystem itself. I realize that’s not a small or subtle thing, but eventually our entire planet’s biosphere will come under the auspices of intelligent oversight—what in some circles has been referred to as technogaianism. We are poised to systematically replace a number of autonomous environmental and evolutionary systems with new and improved ones that will see a dramatic reduction in global suffering and a much more vibrant planet. And quite obviously it’ll also be part of our efforts to fix the damage we’ve done thus far to Earth. So, when I talk about enhancing animals, I’m talking about bringing them into the postbiological fold along with us. To just leave the animal kingdom alone to fend for itself seems plain wrong and repugnant to me.

Well, OK, technogaianism seems like an idea I can acknowledge as a net good, but Dvorsky’s confidence in its imminence seems undimmed by the fact that we don’t currently have a global political framework that can ensure every human being gets their fair share of available resources and a say in how things are run. Hell, in a lot of places, that isn’t even available locally – just look at the current (and growing) schism between the political classes and the general populace in Europe and the US at the moment. You think you’re going to be able to set up a global technological framework for regulating the biosphere with even a simple majority consent from the population, given how difficult it is trying to convince people that as blindingly obvious a problem as anthropogenic global warming is worth taking action on? Good luck with that, seriously.

I mean, I think it’s an attainable goal, but it’s gonna take a lot of work… and a far deeper understanding of the complexity of planet-scale ecosystems than we currently have, not to mention a more inclusive sort of politics that acknowledges and allows for different attitudes to the husbandry of planetary resources. To make a medical analogy, we’re still at the draining-humours-with-leeches stage of planetary management.

At no point do I suggest that we should “leave the animal kingdom alone to fend for itself”. Quite the contrary: we should repair the environments that support it, and – as far as is possible – give it space to exist without any interference from us whatsoever. A safari park planet, if you like… or you could think of it, perhaps, as the biosphere equivalent of declaring a heritage zone for protection. The biosphere gave rise to us, but our sentience does not implicitly grant us mastery over it – merely a custodial duty of care. Might does not make right. Which brings us to Dvorsky’s second point:

Second, and related to the first point, I think many of my detractors must have a very different definition of imperialism than I do. What they see as imperialism (though I’m not exactly sure what they’re suggesting humans are exploiting here) I see as compassion.

Oh, man, come on. The bringing of civilisation to “backwards” natives has always been framed in the rhetoric of compassion and moral duty – it’s all for their own good, right? The exploitation angle always comes after the intervention (though in some cases it may have been an unspoken motivation from the outset). And the last half a century or so is replete with examples of how essentially liberal impulses can still drive essentially imperialistic projects: I refer you first and foremost to America’s earnest but severely misguided (not to mention tragically blundered) attempts to spread the benefits of democracy and corporate capitalism to the developing world. And bear in mind that this has, in a number of cases, been done in places where the recipients of this attempted cultural uplift were able to observe and even desire the more visible trappings of the enfranchisement they were being offered (even if their understanding of the full consequences of said enfranchisement remained opaque, whether deliberately or not).

Shorter version: you can explain the possible benefits of cultural uplift to another human, and give them the choice (though the latter stage has historically been skimped upon more often than not, and the former rarely done as thoroughly and honestly as a clean conscience might require). But with non-human persons, with whom you have only a very limited framework of language through which to communicate extremely complex ideas, you don’t even have the option of warning them what’s to come. Is it not possible that we could get it right, and uplift an extended genetic family of great apes who’d be grateful to us for doing so? I don’t think it’s impossible. But I think we’d be in a much better position to take that chance once we’d demonstrated an ability to uplift our human sisters and brothers to the same position of privilege we already occupy. Don’t run before you can walk, y’know?

I find it interesting how many critics of uplift call upon Western norms and taboos to make their case, while my ethics is almost exclusively informed by Eastern philosophies, namely Buddhism. I look at animal uplift in the same way I do any other compassionate act in which a human or non-human animal is pulled-up from deplorable conditions, whether it be extreme poverty, or having to survive alone in the jungle.

Right, I’m no zoologist, but I think this portrayal of apes in misery “having to survive alone in the jungle” is anthropocentrism writ large. How can you be sure that the apes aren’t completely happy in the environment that they evolved to inhabit, or with the society and culture they’ve developed as a result? Sure, nature’s red in tooth and claw, and I’m not naive enough to think that apes – or any other animal – live in some sort of bucolic Eden. But who are we to decide on their behalf that a more human lifestyle would be preferable to them? I dare say it probably would be if the project of uplift succeeded in humanizing them, but again, you’d have made that decision to change their state of being on their behalf, because you’re so certain that human consciousness is the known peak of sentience. And of course you’re certain! I dare say if you could ask a well-fed dog in the midst of running after a thrown stick whether everyone would prefer to be a dog, they’d enthusiastically agree with your suggestion. Privilege breeds conceit.

Let’s try it another way: if you’re making an argument that apes should already have the rights of personhood conferred upon them, how can you not include the fundamental right of a person not to have major changes to their state of being made to them without their express consent? You can’t have your cake and eat it, guys; either apes are persons already, and hence deserving of your protection from those who would meddle with their state of being, or they’re not yet persons, and you’re making the indubitably anthropocentric assumption that the human state of being is superior to what they have already, and that they’d surely thank you for being raised to it.

Perhaps the latter is true, but here’s the thing – you only get to find out after you’ve done it. Our philosophical difference here is over whether that risk is a reasonable one to take given the potential rewards of the outcome. What worries me most about sitting down to do that particular bit of moral calculus is that while all the potential gain would accrue to the uplifted apes, so would all the potential risk.

You must not play god with the state of being of an entire species. Put the shoe on the other foot for a moment, and imagine the arrival of an alien species so far in advance of our own state of being that their motives, philosophies and moral framework are completely incomprehensible to us. We can see that they have conquered various technical and scientific problems which have thus far eluded us; as far as we can tell without being able to actually immerse ourselves in their culture, they seem happy and fecund and fulfilled, though their long-term goals are completely inscrutable, and they do many things that make no sense to us at all.

Now imagine said alien race starts plucking up a few randomly picked humans with the intent of making them more like the aliens. (This is, I believe, the basic concept of Octavia Butler’s Xenogenesis series of novels, which are unquestionably postcolonial texts.) The end result is something neither human nor alien, but something in between, something carrying the legacy of a sociobiological experiment in which they had no say; something caught between two preexisting cultures, sprung from both, belonging to neither. Deliberately or not, you create an outsider species. Being as familiar with human emotions and attitudes as you must be (what with being one), can you really imagine your uplifted people having no resentment of this in-between state of being? Perhaps you can, but if that’s the case I humbly suggest you’ve had a very fortunate and privileged life already, and that doesn’t put you in a very good position for empathising with those who’ve not been so lucky in a manner that doesn’t – quite unintentionally – come out as condescension.

I’m going to issue a challenge to the opponents of animal uplift: Go back and live in the forest. I mean it. Reject all the technological gadgetry in your possession and all the institutions and specialists you’ve come to depend on. Throw away your phones, your shoes, your glasses and your watches. Denounce your education. As I’m sure I don’t have to remind anybody, it’s these things that have uplifted humanity from its more primitive “natural” state. Humans haven’t been truly human for thousands of years; we’ve been transhuman for quite some time now. If you reject animal uplift, then you must reject your very own transhuman condition.

Yeah, like that’s going to happen. Pretty easy to dismiss uplift from the position of privilege, isn’t it? Who’s the real imperialist, here?

I’ve been reading your stuff for many years now, George, and I really thought your rhetorical chops were up to a higher standard than this: a rough equivalent of “if you think life in Islamic Afghanistan is so awesome and deserving of protection, why don’t you go live there, huh?” (At least you’ve not gone so far as to wave whatever the uplift equivalent of the “it’s political correctness gone MAD!!!” banner might be.) Far from disproving my accusations of imperialist attitudes, you’ve actually strengthened them with this implicit labelling of ape culture as inferior to our own, as something they must be rescued from for their own good – after all, you’d find it impossible to cope with, so therefore it must be bad, and your life must hence be better!

The motive is pure, I’ll grant you – noble, even. But we all know what the road to hell is paved with, and you only have to look at Afghanistan (and Iraq, and countless other “backward” nation-states that have been thoroughly mangled by the neoliberal project to deliver Western-style cultural freedoms and economic liberty to places where it appeared to be lacking) to see plenty of examples that the liberal imperialist impulse is just as prone to enslaving or subjugating those it intends to uplift as the older monarchic imperialisms were.

I’m not suggesting that’s a deliberate outcome, mind you; I’m suggesting it’s a function of the inherently hierarchical way of looking at sentience that is powering this “obligation” to uplift. If you see sentience as a ladder with us stood on its highest rung and the apes a few rungs further down, then of course you’re going to feel you should pull them up the last few steps once you’ve clambered off onto the plateau at the top. But the anthropomorphic assumption here is that apes are as interested in climbing that cognitive ladder as you are. Heck, I’d bet you good money less than half your fellow humans would agree that humans climbing further up that ladder is an unmitigated good thing, and at least there you have the chance to make your case to someone who can potentially understand it. With the apes, you’re simply assuming your moral calculus will make them happy in the long run; as such, I return to my original diagnosis of well-intentioned hubris.

Ultimately, my argument boils down to this: if you truly believe that apes are human-like enough to deserve equivalent rights to us – a point on which I cautiously agree, I might add – then the first and greatest of those rights is the right not to have a new way of life forced upon you, whether “for your own good” or otherwise. Volition has to be a cornerstone of personhood. If it isn’t, where does volition enter the equation of sentience and ethics, exactly? This is a central question of postcolonial theory, and one which, I respectfully submit, we have not adequately answered in the context of our own species, let alone that of our genetic cousins.

This post is already running long (and eating a large chunk of my day), so I’ll leave discussing methods by which uplift might be achieved while still granting volition to its subjects for another day… though I will briefly note this part of Kyle Munkittrick’s response to my original post in that context:

My hope is that uplift technology will be based on our own human cognitive enhancement technology. Tech that enhances the mind as-is will enable animals to be more intelligent without altering their genes such that we change how an animal’s brain works. Animals uplifted in this way would contribute to neurodiversity and make Earth home to more than just one intelligent species.

OK, if you make the tools of enhancement non-invasive and volitional – to use a crude sf-nal example, by leaving brain-booster headsets lying around for apes to find and experiment with, if they so chose – then we’re talking about a very different ballgame. (And that gives me another opportunity to mention a favourite science fiction work in which that is one of the strands, namely Julian May’s Saga of Pliocene Exile…)