Tag Archives: existential risk

Nick Bostrom on the possibility of posthumanity

Via my good buddies at grinding.be, here’s a video of transhumanist uber-philosopher Nick Bostrom talking about existential risks and the possibility of achieving the posthuman condition. It covers a bunch of topics we periodically return to here at Futurismic, but for those looking for an introduction to the ideas involved – including the original von Neumann conception of the Singularity – it’s a great place to start.

I’ve known of Bostrom by name and reputation for some time, but I wasn’t aware that his website is full of links to his published papers, which range from accessible layman’s introductions to the future of human evolution and existential risk right up to high-grade philosophical treatises on anthropic reasoning and technological ethics. Go take a look.

The potential perils of a world without nukes

Even though we no longer live under the Cold War shadow of Mutually Assured Destruction (at least, not at the moment), there’s still a whole lot of nuclear weapons sitting around gathering dust, every bit as lethal as they ever were.

I think many people would agree it’d be nice to be rid of nukes completely; the Obama administration seems keen on the idea, anyway, which – even if it’s just a symbolic political palm frond – is a reassuring change from the gung-ho realpolitik of the last decade.

But disarmament carries its own set of risks, as George Dvorsky points out:

There are a number of reasons for concern. A world without nukes could be far more unstable and prone to both smaller and global-scale conventional wars. And somewhat counter-intuitively, the process of relinquishment itself could increase the chance that nuclear weapons will be used. Moreover, we have to acknowledge the fact that even in a world free of nuclear weapons we will never completely escape the threat of their return.

[snip]

The absence of nuclear weapons would dramatically increase the likelihood of conventional wars re-emerging as military possibilities. And given the catastrophic power of today’s weapons, including the introduction of robotics and AI on the battlefield, the results could be devastating, even existential in scope.

So, while the damage inflicted by a restrained conventional war would be an order of magnitude lower than a nuclear war, the probability of a return to conventional wars would be significantly increased. This forces us to ask some difficult questions: Is nuclear disarmament worth it if the probability of conventional war becomes ten times greater? What about a hundred times greater?

And given that nuclear weapons are more of a deterrent than a tactical weapon, can such a calculation even be made? If nuclear disarmament spawns x conventional wars with y casualties, how could we measure those catastrophic losses against a nuclear war that’s not really supposed to happen in the first place? The value of nuclear weapons is not that they should be used, but that they should never be used.

It’s a tricky question; Dvorsky points out that he himself is very much in favour of disarmament, but the situation is not clear cut by any means. Idealism is shaky ground from which to argue against the destructive force of nuclear weapons. [image by brndnprkns]
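Dvorsky’s “ten times greater” question is, at heart, an expected-value problem, which is part of why it has no neat answer. Here’s a minimal sketch of the trade-off in Python; every figure in it is invented purely for illustration:

```python
# Toy expected-casualty model for the disarmament trade-off.
# Every figure here is invented purely for illustration.

p_nuclear = 0.01                   # assumed annual probability of nuclear war while armed
deaths_nuclear = 1_000_000_000     # assumed death toll of a full nuclear exchange

p_conventional = 0.10              # assumed annual probability of conventional war after disarmament
deaths_conventional = 100_000_000  # assumed toll, an order of magnitude lower

ev_armed = p_nuclear * deaths_nuclear
ev_disarmed = p_conventional * deaths_conventional

print(f"Expected annual deaths, armed:    {ev_armed:,.0f}")
print(f"Expected annual deaths, disarmed: {ev_disarmed:,.0f}")
# With these toy numbers the two expectations are identical: a tenfold
# rise in war probability exactly cancels a tenfold drop in severity.
```

Of course, as Dvorsky notes, the whole exercise wobbles once deterrence enters the picture: the nuclear term is supposed to stay at zero precisely because the weapons exist.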

Perhaps it will take some Watchmen-esque global existential threat to make the whole world put aside its differences at the same time as its nuclear arsenal… but the cynic in me suspects that the opposite would occur. After all, climate change hasn’t yet encouraged everyone to pull in the same political direction, has it?

Global collapse? What global collapse?

Futurist Brian Wang has had it with the doom-mongers: he’s pretty sure there’s not going to be any global collapse, and he’s got a list of reasons why. Here are just a few:

1. Efficiency, conservation and energy plans can be enhanced beyond current levels with minimal strain. There have been partially voluntary reductions in energy demand during the credit crisis: 10% reductions with minimal effort, and 20% reductions with more austerity.

OK, seems reasonable.

5. In regards to global warming and environmental concerns:

  • a rapid switchover to totally clean power would stop the air pollution of coal and most oil and would greatly reduce any additional CO2
  • geoengineering can be used to reduce global temperatures if necessary
  • if the belief that climate change stems from man-made sources is right, then we are already geoengineering by accident as a side effect of our industry. It will be cheaper and easier to geoengineer intentionally to cancel out those accidental side effects

Well, possibly, but geoengineering is a very speculative field indeed, as noted yesterday. And how are you going to defeat the political inertia on energy source changes?

9. Financial doom scenarios

  • Mandated debt resets, forgiveness, re-issuing scrip etc. can be used to reboot a country or a financial system
  • People and systems for production would still exist even if there were 1000 trillion in debt

Yes, but where’s the motivation for those hungry and desperate people going to come from?

Wang’s points all make sense, but they seem to assume the presence of a strong and clear-headed global or national leadership which, most importantly, hasn’t lost the respect of its subjects or its power to organise them into productive and efficient units.

Wang frequently compares these potential responses to war-time mobilisation efforts, and as regards the scale of effort needed, that comparison is valid. But I’m not so certain about his confidence in the psychology of a mobilisation of that sort; before his list, he says:

One thing of note is that most people usually think that Hitler and Stalin were bad guys for killing or causing the death of about 100 million people. Most of the civilization die-off scenarios are that level of death each and every year for 70 years: 1000 times the number of deaths in the Holocaust. Why is there the belief that significant mitigation efforts would not be made?

Because political rhetoric is more easily focussed on an enemy with a face, perhaps?

The problem with existential threats is that they’re hard for our fundamentally selfish and short-range psychology to focus on. When you’ve not got enough to eat, your first priority will be filling your stomach, not saving the world. Mobilising people on the scale of nations takes a government with its people’s ear and trust, or at least their obedience under pressure… and with the exception of some of the more totalitarian regimes on the planet, those are in short supply at the moment – and likely to become more so, in my opinion, as the number of tangible existential risks increases. [image by sashomasho]

What do you think – would the world come together in the face of a genuine extinction event, or would it be every man for himself in the last days of civilisation?

Well, that was a close one…

… but you probably didn’t even notice it. Earlier this week, we apparently came within a cosmological gnat’s whisker of colliding with an asteroid of similar size to the one that caused the Tunguska event:

The asteroid, dubbed 2008 DD45, whizzed just 72,000 kilometres above the Earth’s surface. That is less than a fifth of the distance to the Moon and just twice the distance to geosynchronous satellites.

Yikes. It was first reported on Saturday; that’s all the warning we might have had. And had it been a great deal bigger, it could have caused a planetary catastrophe that would make the climate crisis look like a tea party. Chalk another one up to human luck, eh?
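For the sceptical, the quoted figures do check out. A quick back-of-the-envelope in Python, using the commonly cited average distances (my own assumption, not figures from the article):

```python
# Quick sanity check on the quoted close-approach figures.
miss_km = 72_000    # 2008 DD45's closest approach, per the quote
moon_km = 384_400   # commonly cited average Earth-Moon distance
geo_km = 35_786     # commonly cited altitude of geosynchronous orbit

print(miss_km / moon_km)  # ~0.19, i.e. less than a fifth of the way to the Moon
print(miss_km / geo_km)   # ~2.01, i.e. almost exactly twice geosynchronous altitude
```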

Does a massive miscalculation mean the LHC really could destroy the world?

Remember all that beef about the possibility of the LHC producing uncontrollable black holes that could DESTROY TEH WORLD OMG? Well, it’s still highly unlikely, but it turns out that the way these things are calculated isn’t as reassuring as we might want it to be:

The problem is compounded when the chances of a planet-destroying event are deemed to be tiny. In that case, these chances are dwarfed by the chances of an error in the argument. “If the probability estimate given by an argument is dwarfed by the chance that the argument itself is flawed, then the estimate is suspect,” say Ord and co.

Nobody at CERN has put a figure on the chances of the LHC destroying the planet. One study simply said: “there is no risk of any significance whatsoever from such black holes”.

Which means we are left with the possibility that their argument is wrong, which Ord conservatively reckons to be about 10^-4 – meaning that out of a sample of 10,000 independent arguments of similar apparent merit, one would have a serious error.

In layman’s terms, the above doesn’t mean that the LHC is dangerous; it just means that the assurances of its safety are predicated on flaky calculations. The difference between the two is left as an exercise for the reader. 😉 [via SlashDot; image by muriel_vd]
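For the curious, Ord’s point is just the law of total probability at work. Here’s a minimal sketch in Python, with the disaster probability conditional on a flawed argument picked purely as an assumption for illustration:

```python
# Ord's point: the credible risk estimate is floored by the chance
# that the safety argument itself contains a serious error.
p_flawed = 1e-4              # Ord's conservative estimate that the argument is flawed
p_disaster_if_sound = 0.0    # the safety argument, taken at face value
p_disaster_if_flawed = 1e-3  # assumed purely for illustration

# Law of total probability:
# P(disaster) = P(disaster|sound) * P(sound) + P(disaster|flawed) * P(flawed)
p_disaster = (p_disaster_if_sound * (1 - p_flawed)
              + p_disaster_if_flawed * p_flawed)

print(p_disaster)  # 1e-07 – the result comes entirely from the flawed-argument term
```

However small you make that conditional term, the final figure can never fall below the residual risk implied by a flawed argument, which is exactly why “no risk of any significance whatsoever” is less reassuring than it sounds.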