The announcement of Ray Kurzweil’s Singularity University project (and the inevitable backlash against it) has people talking about the S-word again… much to the chagrin of transhumanist thinkers like Michael Anissimov, who points out that there are three competing ‘schools’ of thought about the Singularity, each hinging on a different interpretation of a word that has, as a result, lost any useful meaning.
The “Accelerating Change” school is probably the closest to Kurzweil’s own philosophy, but Kurzweil’s quasi-religious presentation style (not to mention his judicious hand-waving and fact-fudging) also makes it the easiest to attack. [image by null0]
Anissimov finds himself closer to the “Event Horizon” and “Intelligence Explosion” schools:
These other schools point to the unique transformative power of superintelligence as a discrete technological milestone. Is technology speeding up, slowing down, staying still, or moving sideways? Doesn’t matter — the creation of superintelligence would have a huge impact no matter what the rest of technology is doing. To me, the relevance of a given technology to humanity’s future is largely determined by whether it contributes to the creation of superintelligence or not, and if so, whether it contributes to the creation of friendly or unfriendly superintelligence. The rest is just decoration.
To many people, that may not actually sound any more reassuring than Kurzweil’s exponential curve of change; if anything, it may sound less so. And with good reason:
That’s the thing about superintelligence that so offends human sensibilities. Its creation would mean that we’re no longer the primary force of influence on our world or light cone. It’s funny how people then make the non sequitur that our lack of primacy would immediately mean our subjugation or general unhappiness. This comes from thousands of years of cultural experience of tribes constantly killing each other. Fortunately, superintelligence need not have the crude Darwinian psychology of every organism crafted by biological evolution, so such assumptions do not hold in all cases. Of course, superintelligence might be created with just that selfish psychology, in which case we would likely be destroyed before we even knew what happened. Prolonged wars between beings of qualitatively different processing speeds and intelligence levels are science fiction, not reality.
Superintelligence sounds like a bit of a gamble, then… which is exactly why its proponents suggest we need to study it more vigorously so that – when the inevitable happens – we’re not annihilated by our own creations.
But what’s of relevance here are the sudden attempts by a number of transhumanist and Singularitarian thinkers to distance themselves from Kurzweil’s P. T. Barnum schtick in search of greater respectability for their less sensationalist ideas. Philosophical schisms have a historical tendency to become messy; while I don’t expect this one to result in bloodshed (although one can’t completely rule out some Strossian techno-jihad played out in near-Earth orbit a hundred years hence), I think we can expect some heated debate in the months to come.
I don’t entirely agree with Anissimov’s point (from the post): do technologies like anti-aging treatments or a cure for AIDS really have no relevance to humanity’s future?
And also: since when was pure intellectual ability the primary driver of technological progress? Many developments come about through tinkering and experiment rather than a single all-encompassing insight (Einstein, Newton et al. notwithstanding).
But it’ll certainly be fun to watch how Singularitarian ideology develops in the future.
Most Singularitarians of all stripes seem to think that if we achieve superhuman-scale hard AI or significant human intelligence enhancement on Monday, then by Tuesday morning the competition for ownership of Earth will be over and ordinary humans will have lost. This assumes that pure intelligence doesn’t need physical instrumentality to get things done, which is rather a silly assumption. There are some worst-case scenarios where things get dicey: MILNET wakes up and uses the nuclear codes to atomize anything that could compete with it (the “Terminator” scenario); or the superintelligence happens both to be knowledgeable about microbiology and to have access to the right lab equipment and materials, so that it can create a plague that wipes out most of the human race. All of these scenarios depend on access to special resources, and it shouldn’t be all that hard to make sure that any experiments in superintelligence are done inside firewalls that prevent that access.
The real problem is to make sure that a new superintelligence has similar values to us, and thinks of us as its relatives (we do this all the time with our kids, so it can’t be that hard).
Yes, but raising life expectancy contributes to the creation of friendly or unfriendly superintelligence, so by Anissimov’s own criterion it’s still relevant.