Kyle Munkittrick is making waves over at the Discover Magazine Science Not Fiction blog; he decided to air the transhumanist movement’s ideas in a post entitled “The Most Dangerous Idea In The World”.
Given that Discover is a fairly mainstream (if geeky) publication, there was a fair bit of fervent push-back in the comments thread, so Munkittrick collected the five most common riffs for rebuttal, creating one of the most lucid and reasonable “don’t panic” posts about transhumanism in a mainstream publication that I think I’ve ever seen. His bounce-back against accusations of [transhumanism=eugenics=evil] is particularly good, and broadly applicable:
Eugenics, like any technology, is neutral. “Eu” is actually the Greek root for “good.” The problem is that over history a lot of nasty people felt that they should be able to force their definition of “good” on others. Though Hitler is a common example, there was a eugenics program in the US for quite some time that coercively sterilized those deemed unworthy to reproduce, due to race, economic status, and mental condition. Both programs are considered “negative eugenics” in that they prevent unwanted individuals from reproducing. Positive eugenics is different in two key ways. The first is that it is entirely voluntary. Whether parents want to merely screen for potential diseases, fine-tune every detail of their child’s traits, or leave the whole thing to chance is their prerogative. The second difference is that there is no “ideal” – the process is open-ended. Instead of eugenics having a state-decreed goal like blond hair and blue eyes, every parent would decide what is best for their child. As most people want healthy, intelligent, happy children, those traits are what would define the “good” of positive eugenics.
It’s interesting to watch transhumanism entering mainstream consciousness; there was that widely linked “Open Letter to Christian Leaders on Biotechnology and The Future Of Man” doing the rounds a week or so ago, and it’s a topic that keeps cropping up in non-geek media channels with increasing regularity, probably because it pushes every future-shock techno-fear button on the switchboard.
It’s also going to be interesting to watch how transhumanism reacts to increased scrutiny, because it’s a long way from being a monoculture. The last few years have seen the more serious and level-headed advocates (I’m thinking of folk like George Dvorsky and Mike Anissimov, who are the two I’ve been reading for the longest) working hard to present a coherent, rational and non-incendiary platform for debate… but just as with any subculture, there are some real oddballs in the architecture, and it’s the cranks who tend to shout loudest and attract attention, often negative. Interesting times ahead…
Bonus: Michael Anissimov points to Eliezer Yudkowsky’s “5 minute introduction” to the concept of the Technological Singularity, which is also pretty plainly-put. Of course, the Technological Singularity shouldn’t be conflated with transhumanism, but it’s a closely related idea, and is sometimes treated as an ideology rather than a theory by those more vocal and marginal elements to whom I referred earlier… so it behoves the wise to understand both as best they can. 🙂
“Of course, the Technological Singularity shouldn’t be conflated with transhumanism, but it’s a closely related idea, and is sometimes treated as an ideology rather than a theory”
Woah woah woah… The “Technological Singularity” is a theory? I hope you don’t mean a scientific theory (AKA one that matters beyond the tenure aspirations of the over-educated and under-useful). Let’s do some good old consensus-reality checking with our favorite digital manifestation of consensus reality, Wikipedia:
“A scientific theory is constructed to conform to available empirical data about such observations, and is put forth as a principle or body of principles for explaining a class of phenomena.”
Now let’s look at some of the bedrock assumptions of the Tech Singularity Theory (Theory Of Omniware?) as described in Yudkowsky’s Singularity-n00b training manual.
“Sometime in the next few decades, we’ll start developing technologies that improve on human intelligence. We’ll hack the brain, or interface the brain to computers, or finally crack the problem of Artificial Intelligence.”
What empirical data do we have that would *necessitate* this event happening in the given time frame?

1.) Moore’s Law? I think that silicon snake-oil has been thoroughly debunked and dustbinned in AI circles. A faster machine does not a more intelligent machine make. Necessary, but not sufficient.

2.) Lots of money and people are working on Singularity-related technologies? Firstly, that’s hardly a quantifiable metric that can be used to accurately predict specific inventions. We’ve already spent 40+ years, millions if not billions of PhD+ man-hours, and untold billions of dollars on “cracking Artificial Intelligence” and getting Teh Trodes so you can jack straight across into nerd rapture-land. Experts have been triumphalistically declaring that we humans would “crack Artificial Intelligence” within the next ten years for the last half-century. I don’t think any really serious AI or wetware-hardware interface researchers have any real idea when we’ll start uploading ourselves into e-brains or making vast leaps in our intelligence, at least not without some serious cortex acrobatics and intellectual dishonesty. The digi-rapture dates are pulled out of asses, not derived from scientifically testable, falsifiable theories.
So perhaps it’s valid to call it a hope, or a speculation, or a hunch, maybe an educated hunch. But if we start calling the Technological Singularity a scientific theory, then I think we’re really giving the creationists bullets, opening the door for the anti-science “community” to cry hypocrisy.
And in the hands of passionate leaders iPad-thumping in front of PowerPoint slide after PowerPoint slide of exponential curves about the wonders of imminent eternal digital life and the miraculous computer-salvation of all the world’s problems in an instant? Yeah, maybe ideology is a fair term.