Tag Archives: existential risk

Singularity linkage

A few more Singularitarian nuggets have drifted into my intertubes dragnet over the weekend. Having not had much chance to read and absorb them, I’ll just throw ’em up for those of you who haven’t been distracted by the Shiny Of The Day (whatever that might be – I’m suffering a case of Momentary Zeitgeist Disconnection at the moment).

First up, Charlie Stross is back with a mention of “Fedorov’s Rapture”, a sort of proto-extropian philosophy with its roots in Russian Orthodox Xtianity:

A devout Christian (of the Russian Orthodox variety), “Fedorov found the widespread lack of love among people appalling. He divided these non-loving relations into two kinds. One is alienation among people: ‘non-kindred relations of people among themselves.’ The other is isolation of the living from the dead: ‘nature’s non-kindred relation to men.'” … “A citizen, a comrade, or a team-member can be replaced by another. However a person loved, one’s kin, is irreplaceable. Moreover, memory of one’s dead kin is not the same as the real person. Pride in one’s forefathers is a vice, a form of egotism. On the other hand, love of one’s forefathers means sadness in their death, requiring the literal raising of the dead.”

Federov believed in a teleological explanation for evolution, that mankind was on the path to perfectibility: and that human mortality was the biggest sign of our imperfection. He argued that the struggle against death would give all humanity a common enemy — and a victory condition that could be established, in the shape of (a) achieving immortality for all, and (b) resurrecting the dead to share in that immortality. Quite obviously immortality and resurrection for all would lead to an overcrowded world, so Federov also advocated colonisation of the oceans and space: indeed, part of the holy mission would inevitably be to bring life (and immortal human life at that) to the entire cosmos.

I doubt that comparisons to religious eschatologies are going to be any better received than accusations of magical thinking, but hey. (As a brief sidebar, I was probably primed for my own interest in Singularitarianism by the redeployment of Teilhard de Chardin’s Omega Point idea in Julian May’s Galactic Milieu series.)

And here are another two from the admirably prolific Michael Anissimov. First up, The Illusion of Control in an Intelligence Amplification Singularity, which is a complex enough piece to make a simple summing-up a futile exercise, so go read the whole thing – there’s some valuable thinking in there. Though the opening paragraph pretty much sums up my concerns about Singularitarianism:

From what I understand, we’re currently at a point in history where the importance of getting the Singularity right pretty much outweighs all other concerns, particularly because a negative Singularity is one of the existential threats which could wipe out all of humanity rather than “just” billions.

I can understand the risks; it’s the likelihood I remain to be convinced of. And given all the other serious global problems we’re facing right now, having the Singularity “outweigh all other concerns” strikes me as narrowly hyperopic at best. How’s about post-corporatist economics? Energy generation, distribution and storage? Sanitation? Resource logistics? Global suffrage and a truly democratic system of governance? Climate change? These all strike me as far more immediate and pressing threats to human survival. A hard-takeoff Singularity as posited here is an existential risk akin to a rogue asteroid strike: certainly not to be ignored, but the response needs to be proportional to the probability of it actually happening… and at the moment I think the asteroids are the more pressing concern, even for us folks lucky enough to have the economic and cognitive surplus to spend our time arguing about stuff on the intertubes.

Secondly, another riposte to Alex Knapp:

To be pithy, I would argue that humans suck at all kinds of thinking, and any systems that help us approach Bayesian optimality are extremely valuable because humans are so often wrong and overconfident in many problem domains. Our overconfidence in our own reasoning even when it explicitly violates the axioms of probability theory routinely reaches comic levels. In human thinking, 1 + 1 really can equal 3. Probabilities don’t add up to 100%. Events with base rates of ~0.00001%, like fatal airplane crashes, are treated as if their probabilities were thousands of times the actual value. Even the stupidest AIs have a tremendous amount to teach us.

The problem with humans is that we are programmed to violate Bayesian optimality routinely with half-assed heuristics that we inherited because they are “good enough” to keep us alive long enough to reproduce and avoid getting murdered by conspecifics. With AI, you can build a brain that is naturally Bayesian — it wouldn’t have to furrow its brow and try real hard to obey simple probability theory axioms.
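The base-rate point in the quote above is worth making concrete: by Bayes’ theorem, even a very accurate detector for a very rare event produces mostly false alarms, which is exactly the kind of arithmetic human intuition flubs. A minimal sketch (the numbers are illustrative, not taken from Anissimov’s post):

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(event | positive signal), via Bayes' theorem."""
    p_signal = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_signal

# A 99%-sensitive detector with a 1% false-positive rate, watching for
# an event with a 0.001% base rate:
p = posterior(prior=0.00001, sensitivity=0.99, false_positive_rate=0.01)
print(f"{p:.4%}")  # the posterior stays well under 1% despite the "accurate" detector
```

That gap between the detector’s accuracy and the posterior probability is the base-rate neglect the quote is gesturing at: intuition anchors on the 99%, not on how rare the event is to begin with.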

Knapp himself crops up in the comments with a counter-response:

What I question is the scientific basis from which artificial general intelligence can be developed. More specifically, my primary criticism of AGI is that we don’t actually know how the mechanism of intelligence works within the human brain. Since we don’t know the underlying physical principles of generalized intelligence, the likelihood that we’ll be able to design an artificial one is pretty small. [This reminds me of the Norvig/Chomsky debate, with Knapp siding with Chomsky’s anti-black-box attitude. – PGR]

Now, if you want to argue that computers will get smart at things humans are bad at, and therefore be a complement to human intelligence, not only will I not disagree with you, I will politely point out that that’s what I’ve been arguing THE WHOLE TIME.

More to come, I expect. I really need to convince someone to let me write a big ol’ paid piece about this debate, so I can justify taking a week to read up on it all in detail…

Insight, foresight, moresight…

… the clock on the wall reads a quarter past midnight.

The world won’t wait for us to sort our civilisational shit out; even if you don’t believe that we’ve made the planet a less safe place for ourselves through our own actions, today’s events are a reminder that we have always lived on the sufferance of circumstance, and that bad things aren’t reserved for bad people, or even simply people we don’t care about.

The Earth is a sphere, folks. There’s only so far you can run, or so far you can push everyone else away. One tiny lifeboat in an infinite ocean. Meanwhile, there’s a million and one ways we could be wiped out of existence with little or no warning, by nothing more than the blind unknowing caprice of a random universe. In the face of that risk, what are we doing? We’re working out ways of making ever greater profits out of those less fortunate than ourselves, arguing over who spilled the petrol rather than mopping it up, fiddling while the kids run around in the haylofts of Rome playing with matches.

Some days I really feel like we deserve to go extinct. Evolution should select pretty strongly against civilisational myopia, if I understand it correctly.

But look again and see all the amazing things we’ve achieved, in a span of time so tiny by comparison to the lifespan of our own solar system (let alone the universe) that it’s almost unmeasurable. Look at all the risks we’ve already invented our way past, all the demons we’ve already conquered. There are very few threats facing us that we couldn’t defeat with a bit of collective will and determination, and the few that aren’t amenable to that sort of fixing can be significantly reduced by getting our act together sufficiently that we’re no longer dependent on the fragile life-support cradle that nurtured us this far.

Make no mistake: the greatest solvable extant threat to a human future is humanity itself. United we stand, divided we fall.

It doesn’t have to be like this, it really doesn’t. Perhaps that makes me a foolish optimist, an idealistic dreamer, a naive child scared of the “grown up” world. Well, so be it. It’s either that or give up entirely… and as tempting as that is on an almost daily basis, I’m not ready to quit just yet.

SpaceFence: The Movie

Via FlowingData, here’s a sort of promo-documentary-advertorial-edutainment spot for Lockheed Martin’s Space Fence system, designed to protect us from rogue bits of crap colliding in orbit above us.

As remarked at FD, I think it’s likely that a lot of the visualisations here are speculative, but the result is something that looks momentarily convincing in that ultimately-unconvincing-once-you’ve-thought-about-it Hollywood way – designed to sell the concept rather than the actuality, in other words. (Which is to say, I expect the real Space Fence control room will look a lot less like the bridge of a space opera dreadnought… though there’s a part of me that wishes that weren’t the case.)

Makes sense, really; if you want to convince people that putting in expensive systems to mitigate (or at least monitor) potential existential risk problems is worthwhile, making them look a bit sexy is a good tactic. I suppose this is a kind of design fiction, too…

[ Note: my assumption that the footage in the video takes some artistic license is just that, an assumption; I would very much like to see the real thing, or evidence that the footage represents the reality. If anyone at Lockheed is reading, I’d love to drop in and take a closer look… though you’d probably have to stump up my airfare. 🙂 ]

Existential risk simulator: throw asteroids at the Earth

Had a bad week? Or simply looking for a way to kill time at work as the year winds down? How about simulating asteroid collisions with the Earth? [via Space.com]

Just to get the disappointment up front: you don’t get a Hollywood CGI rendering of your imaginary impact (though there is a sort of intro video of a rock falling into the gravity well that runs while the calculations are being done, a bit like a cut-scene from Bruce Willis Saves The Planet While Wearing a Grubby Wifebeater Vest: The Computer Game or something). But what you do get is a list of statistical stuff: energy released in impact, crater size, thermal radiation, that sort of thing.
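The headline number in that list of statistics – energy released on impact – falls straight out of the kinetic energy formula, KE = ½mv², given an impactor’s size, density and speed. A rough back-of-envelope sketch (the input values are illustrative, not taken from the simulator itself):

```python
import math

MEGATON_TNT_J = 4.184e15  # joules per megaton of TNT equivalent

def impact_energy_megatons(diameter_m, density_kg_m3, velocity_m_s):
    """Kinetic energy of a spherical impactor, expressed in megatons of TNT."""
    radius = diameter_m / 2
    mass = density_kg_m3 * (4 / 3) * math.pi * radius ** 3  # sphere volume * density
    return 0.5 * mass * velocity_m_s ** 2 / MEGATON_TNT_J

# A 100 m stony asteroid (~3000 kg/m³) arriving at a typical ~17 km/s:
print(f"{impact_energy_megatons(100, 3000, 17_000):.0f} Mt TNT")  # ~54 Mt
```

Fifty-odd megatons from a rock only 100 metres across – roughly Tsar Bomba territory – which is why even the dry statistical output of the simulator makes for sobering reading.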

So, a pretty decent tool for doing the worldbuilding due diligence on your apocalypse novel… or simply exercising your inner misanthrope (fluffy white lap-cat and hammy accent optional).

What could be worse than human extinction?

From a philosophical perspective, human extinction is just about the worst thing we can imagine… and it’s a fairly recent fear, too, with our conception of existential risk kick-started by the threat of mutually assured destruction. But what about a slow slide back into an animal state from our current civilisational peak? An evolutionary regression triggered by the impoverishment of the environment we mastered momentarily? [via BigThink]

Civilization obscures our similarity to other animals. We tend to hold ourselves to different standards because we see ourselves as above nature.  Many people find the slaughter of food animals objectionable. Yet no one is advocating intervention to save the gazelles from the lions or the rabbits from the foxes. Is the suffering of animals in the wild less important? Should we venture out in search of prey animals to rescue from their predators, and sick or injured animals in need of medical care? No, it would seem. It’s okay when nature imposes suffering on animals, but not when we do it. Similarly, it’s not okay when we are the subjects of nature’s cruelty.

Civilization has bestowed our species with a distorted self-image. Many people seem to have the impression that we operate independently of nature. We are fortunate that we’ve been able to act as though we are independent for as long as we have. If we don’t adjust our way of living so that it becomes sustainable, however, nature will eventually do this for us.

The worst case scenario is not that humans will become extinct, but that we will come to experience the cruel will of nature as other animals do. We can’t rule out the possibility that we will become more similar to our primate cousins in intelligence, behavior, and quality of life. We may be enjoying the peak of human intelligence, morality, and technological advancement.

On the face of it, this is just another finger-waggy “if we don’t sort things out soon… ” warning, but I think you can detach the results from the cause – there are any number of reasons we might find civilisation as we know it receding into the patchwork memories of the past. Indeed, given our tendency to prattle on about “the good old days”, you could probably convince a lot of people it was already happening…

But in recent years that nostalgic view of the-past-as-idyll has become more and more of an irritant to me. Despite the very real problems facing human beings as individuals and as a species, I think conditions and opportunities for the average person have been improving steadily for a long time (even though those improvements, like William Gibson’s future, are – sadly – not evenly distributed). This is perhaps the same myopia that makes us see the decline of the Western economies as a global recession: because things aren’t quite as easy for us in particular as they were a few decades back, then we’re obviously bound for hell in a handbasket, AMIRITES?

Well, I’m not so sure; I think we have it in us as a species to survive, prosper and spread beyond the gravity well. But to achieve that, I suspect we’ll need to start thinking of ourselves as a species rather than as individual nations… which may turn out to be the greatest challenge we’ve ever come up against, rooted as it is in the very evolutionary processes that made us what we are.

Still – it’s worth a shot, wouldn’t you say? 🙂