Tag Archives: existential risk

Hawking still hawking the Great Diaspora

Stephen Hawking made a fair media splash back in 2006 when he announced that humanity needs to clamber out of the gravity well if it wants to ensure its survival. But the internet’s memory is short, despite its theoretical depth, and hence Hawking’s reiteration of the call for a Great Diaspora in a video interview at BigThink is rippling around the world again.

The logic of the argument is pretty inescapable – when all your eggs are in one basket, the odds of losing the game to the statistical inevitability of a global extinction event tend towards unity – but it will be interesting to see how the response differs this time round, given how much the world has changed since 2006. We seem a lot more focussed on the immediate future than we were… and that’s not necessarily a bad thing, though I think we’d be wise to keep one eye on the horizon. Is it just me, or did everything seem a whole lot more optimistic back then, before the economic implosion was anything more than a grim warning on the lips of a few outsider economists?

Or was it just me that was more optimistic, perhaps? Strange how five years of blogging about the future has made me a lot less confident that everything will work out just fine.

Doom du jour: volcanic eschatology

No points for knowing why it’s such a hot button topic*, but everybody’s talking about volcanoes these days. And it’s gloomy stuff, too; it’s been known for a while that the massive but (currently) dormant volcano under Yellowstone National Park in the US could decide to pop off at any time (although apparently the meme about that eruption being “overdue” is unfounded), and now it transpires that Mount Fuji in Japan may well be shuffling its feet and clearing its throat in preparation to burst into song. [image: Icelandic volcano with tourists, by Hello, I am Bruce]

So, just in case the more immediate issues in the news aren’t depressing you enough, here’s Wired UK‘s survival expert Andy Hamilton explaining what would happen if Yellowstone was to go off:

Those within the vicinity will be incinerated as temperatures from the lava flow can reach up to 500 degrees, meaning all surrounding cities will be utterly destroyed. If you somehow managed to survive the fast flowing lava, the thick ash cloud that would rain down would choke you to death. All the states surrounding Wyoming would certainly perish very quickly. The UK and the rest of Earth would not escape. We would all be affected, wherever we were. Global temperatures would plummet by at least 21 degrees. This could last for many years, meaning that all plant life will slowly die off. We will have no vegetables; animals — our meat — will have no food, so humankind would likely starve.

Sheesh – how’s that for existential risk? I think I’m going to head to the local shop, max out my cards on tinned goods and strong alcohol (and maybe a crossbow), and then nail the door shut from inside before settling down to watch The Road on perpetual loop…

[ * I love the way that photoset is titled “Iceland’s Disruptive Volcano”, like it’s some recalcitrant child at the back of the classroom. Send it home with a stiff note to its parents, I say. ]

The transhuman victory is assured!

Well, possibly not – but Michael Anissimov has a post provocatively titled “Transhumanism Has Already Won”, which argues that most of the central tenets of the movement (if such a fractious meme can fairly be called a movement at all) are already accepted – and in some cases actively desired – by a large portion of the world’s population:

Billions of people around the world would love to upgrade their bodies, extend their youth, and amplify their powers of perception, thought, and action with the assistance of safe and tested technologies. The urge to be something more, to go beyond, is the norm rather than the exception.

The mainstream has embraced transhumanism. A movie about using a brain-computer interface to become what is essentially a transhuman being, Avatar, is the highest-grossing box office hit of all time, pulling in $2.7 billion. This movie was made with hard-core science fiction enthusiasts in mind. About them, James Cameron said, “If I can just get ‘em in the damn theater, the film will act on them in the way it’s supposed to, in terms of taking them on an amazing journey and giving them this rich emotional experience.” A solid SL2 film, becoming the world’s #1 film of all time? It would be hard for the world to give transhumanism a firmer endorsement than that.

I’m not sure how solid an argument the success of an h+-themed movie is in this context, to be honest – though I’ll concede that entertainment media are powerful vectors for new ideas to enter mainstream discourse, even if their portrayal is essentially superficial.

But there’s more, which sees Anissimov explicitly repudiating the elitist devil-take-the-hindmost attitude that tends to be assumed (sometimes erroneously) as the transhumanist default:

When people write an article about a problem, it’s usually because they have a ready-made answer they want to sell you. But sometimes the universe just gives us a problem and it has no special obligation to give us an answer. Transhumanity is like that. Whatever answer we come up with may be a little messy, but we have to come up with something, because otherwise the future will play out according to considerations besides global security and harmony. Power asymmetry is not an optional part of the future — it is a mandatory one. There will be entities way more powerful than human. Where will they be born? How will they be made? These questions are not entirely hypothetical — the seeds of their creation are among us now. We have to decide how we want to give birth to the next influx of intelligent species on Earth. The birth of transhumanity will mean the death of humanity, if we are not careful.

Will it be possible for us to keep a sufficiently watchful eye on the privileged and powerful in order to stop them leaving us in the wake of their ascension? Difficult or not (and assuming transhumanism isn’t an unattainable omega point after all, which is another debate entirely), it’s got to be a better option than trying to ban or legislate around the problem.

Back to the future of the past? Venture capitalist advocates a return to radical futurism

Advocates of science fictional thinking crop up in the weirdest places. For example, Peter Thiel helped found PayPal and invested early in Facebook, and his main business is hedge funds and venture capital (which may predispose one to take his ideas with a large pinch of salt, given the economic events of the last couple of years), but he also invests in the sorts of ventures that seem to have leapt right off the pages of old-school science fiction novels: sub-oceanic human colonisation projects, life extension research and private space flight, for instance.

So why does a man with that much money sloshing around want to invest in blue-sky futurism? Because he believes that radical progress is the only thing that will keep the existential wolves from civilisation’s door:

Wired: You’re worried about economic stagnation, but you’re optimistic about artificial intelligence and space?

Thiel: I think we have to make those things happen. We should be looking at technologies that might lead to really big breakthroughs. As a starting point, let’s just go back to the science fiction novels of the 1950s and ’60s and try to run the past 40 years again.

Wired: We need underwater cities and flying cars, otherwise we’re going bankrupt?

Thiel: We go bankrupt if radical progress doesn’t happen and we don’t realize it’s not happening. That’s a dangerous combination.

It’s a strange and topsy-turvy world when venture capitalists advocate wild flights of fanciful imagination while science fiction writers advocate plausible extrapolations from the status quo, don’t you think? 😉