A few more Singularitarian nuggets have drifted into my intertubes dragnet over the weekend. Having not had much chance to read and absorb, I’ll just throw ’em up for those of you who’ve not got distracted by the Shiny Of The Day (whatever that might be – I’m suffering from a case of Momentary Zeitgeist Disconnection here at the moment).
First up, Charlie Stross is back with a mention of “Federov’s Rapture”, a sort of proto-extropian philosophy with its roots in Russian Orthodox Xtianity:
A devout Christian (of the Russian Orthodox variety), “Fedorov found the widespread lack of love among people appalling. He divided these non-loving relations into two kinds. One is alienation among people: ‘non-kindred relations of people among themselves.’ The other is isolation of the living from the dead: ‘nature’s non-kindred relation to men.'” … “A citizen, a comrade, or a team-member can be replaced by another. However a person loved, one’s kin, is irreplaceable. Moreover, memory of one’s dead kin is not the same as the real person. Pride in one’s forefathers is a vice, a form of egotism. On the other hand, love of one’s forefathers means sadness in their death, requiring the literal raising of the dead.”
Fedorov believed in a teleological explanation for evolution, that mankind was on the path to perfectibility: and that human mortality was the biggest sign of our imperfection. He argued that the struggle against death would give all humanity a common enemy — and a victory condition that could be established, in the shape of (a) achieving immortality for all, and (b) resurrecting the dead to share in that immortality. Quite obviously immortality and resurrection for all would lead to an overcrowded world, so Fedorov also advocated colonisation of the oceans and space: indeed, part of the holy mission would inevitably be to bring life (and immortal human life at that) to the entire cosmos.
I doubt that comparisons to religious eschatologies are going to be any better received than accusations of magical thinking, but hey. (As a brief sidebar, I was probably primed for my own interest in Singularitarianism by the redeployment of Teilhard de Chardin‘s Omega Point idea in Julian May’s Galactic Milieu series.)
And here’s another two from the admirably prolific Michael Anissimov. First up, The Illusion of Control in an Intelligence Amplification Singularity, which is a complex enough piece to make a simple summing-up a futile exercise, so go read the whole thing – there’s some valuable thinking in there. Though the opening paragraph pretty much sums up my concerns about Singularitarianism:
From what I understand, we’re currently at a point in history where the importance of getting the Singularity right pretty much outweighs all other concerns, particularly because a negative Singularity is one of the existential threats which could wipe out all of humanity rather than “just” billions.
I can understand the risks; it’s the likelihood I remain to be convinced of. And given all the other serious global problems we’re facing right now, having the Singularity “outweigh all other concerns” strikes me as narrowly hyperopic at best. How’s about post-corporatist economics? Energy generation, distribution and storage? Sanitation? Resource logistics? Global suffrage and a truly democratic system of governance? Climate change? These all strike me as far more immediate and pressing threats to human survival. A hard-takeoff Singularity as posited here is an existential risk akin to a rogue asteroid strike: certainly not to be ignored, but the response needs to be proportional to the probability of it actually happening… and at the moment I think the asteroids are the more pressing concern, even for us folks lucky enough to have the economic and cognitive surplus to spend our time arguing about stuff on the intertubes.
Secondly, another riposte to Alex Knapp:
To be pithy, I would argue that humans suck at all kinds of thinking, and any systems that help us approach Bayesian optimality are extremely valuable because humans are so often wrong and overconfident in many problem domains. Our overconfidence in our own reasoning even when it explicitly violates the axioms of probability theory routinely reaches comic levels. In human thinking, 1 + 1 really can equal 3. Probabilities don’t add up to 100%. Events with base rates of ~0.00001%, like fatal airplane crashes, are treated as if their probabilities were thousands of times the actual value. Even the stupidest AIs have a tremendous amount to teach us.
The problem with humans is that we are programmed to violate Bayesian optimality routinely with half-assed heuristics that we inherited because they are “good enough” to keep us alive long enough to reproduce and avoid getting murdered by conspecifics. With AI, you can build a brain that is naturally Bayesian — it wouldn’t have to furrow its brow and try real hard to obey simple probability theory axioms.
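Anissimov’s base-rate point is easy to make concrete with a quick Bayes’-theorem sketch. The alarm’s accuracy figures below are my own illustrative assumptions, not numbers from his post; only the ~0.00001% base rate echoes the fatal-crash figure he quotes:

```python
# Toy illustration of base-rate neglect: even a very reliable
# alarm for a rare event mostly produces false positives, which
# is exactly where human intuition tends to go wrong.

def posterior(prior, sensitivity, false_positive_rate):
    """P(event | alarm) via Bayes' theorem."""
    p_alarm = sensitivity * prior + false_positive_rate * (1 - prior)
    return sensitivity * prior / p_alarm

# Rare event with a base rate of ~0.00001% (one in ten million),
# checked by a hypothetical alarm that is 99% sensitive with a
# 1% false-positive rate.
p = posterior(1e-7, 0.99, 0.01)
print(f"P(event | alarm) = {p:.6%}")  # still under 0.001%
```

Intuition says a 99%-accurate alarm going off means the event probably happened; the arithmetic says the posterior is still vanishingly small, because the prior dominates. That gap between gut feel and the probability axioms is the “comic level” of overconfidence the quote is gesturing at.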
Knapp himself crops up in the comments with a counter-response:
What I question is the scientific basis from which artificial general intelligence can be developed. More specifically, my primary criticism of AGI is that we don’t actually know how the mechanism of intelligence works within the human brain. Since we don’t know the underlying physical principles of generalized intelligence, the likelihood that we’ll be able to design an artificial one is pretty small. [This reminds me of the Norvig/Chomsky debate, with Knapp siding with Chomsky’s anti-black-box attitude. – PGR]
Now, if you want to argue that computers will get smart at things humans are bad at, and therefore be a complement to human intelligence, not only will I not disagree with you, I will politely point out that that’s what I’ve been arguing THE WHOLE TIME.
More to come, I expect. I really need to convince someone to let me write a big ol’ paid piece about this debate, so I can justify taking a week to read up on it all in detail…
A disturbing thought occurs to me: perhaps “getting religion” over Singularitarianism is necessary in order to avoid messing it up through neglect. On the other hand, it might also be a prime ingredient in messing it up through misguided interference.
A rational analysis suggests that most “Singularities” we can reasonably anticipate will be bad for some percentage of humanity. And we do not know (at all) what we can do now to increase the percentage of humanity for whom a “Singular Transition Event” would be good.
Worse, there may be STEs that are extremely bad for a few humans and very good for a large number of humans. Is it true that some LibTrans argue “we may have to accept some loss of human life to attain an acceptable Singularity”? Such an attitude would be fascism, and a de facto crime against humanity by proxy. It would be worse than an American stating “global warming wouldn’t be bad for the US since we’d have longer agricultural seasons – too bad for people in the third world – they don’t have any nuclear weapons to force us to act differently”.
Could there be a morphology of Singularities?
Note that a world without radical technological advancement is unacceptable. We will have 10 billion humans by 2050, by which point most oil and fossil fuel reserves, most biospheres, most ocean habitats and most mineral reserves will be in major decline or depleted. In other words – a world with all of that, 10 billion humans, and no new technologies to save our collective human butts is very close to Dante’s Inferno.
Which gives me this order of preference:
1 – A world with a totally benevolent STE (or santa-larity or anissimovularity)
2 – A fast advancing world with widespread abundance and peace (star trek/jetsons world)
3 – A world where collapse is mitigated by hard but fair sustainability measures (a zeitgeist world)
4 – A world with an STE that is fairly good for most people
5 – An STE that mercifully, expediently and quickly euthanizes all of humanity
6 – Escalated Global Dystopian Cyberpunk/Corporatism/Fascism/Americanism (SLA Industries)
7 – Any future that results in a lengthy dark age after a painful collapse (Globafghanistan)
8 – The most horrific possible singularities (Satanlarity or Cthulhularity)
Whatever. 2045? I’ll be dead before anything like this happens; I don’t care unless I get to live it.