Gerontologist Aubrey de Grey gives his thoughts on the technological singularity (specifically, the intelligence explosion/"event horizon" and accelerating change subtypes) in this interview in h+ Magazine:
I can’t see how the “event horizon” definition of the Singularity can occur other than by the creation of fully autonomous recursively self-improving digital computer systems. Without such systems, human intelligence seems to me to be an intrinsic component of the recursive self-improvement of technology in general, and limits (drastically!) how fast that improvement can be.
…
I’m actually not at all convinced they are even possible, in the very strong sense that would be required. Sure, it’s easy to write self-modifying code, but only as a teeny tiny component of a program, the rest of which is non-modified. I think it may simply turn out to be mathematically impossible to create digital systems that are sufficiently globally self-modifying to do the “event horizon” job.
My view, influenced by observing the success of natural selection[1], is that "intelligence" is overrated as a driver of technological progress. I would say that most technological advances come about through empirical tinkering and social processes (like free markets and the scientific method), rather than pure "thinkism" and individual brilliance.
I can't speak to whether globally self-modifying AI systems are mathematically possible.
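To make de Grey's local/global distinction concrete, here's a minimal sketch (my illustration, not his; the function names and the mutation rule are invented) of what "locally" self-modifying code looks like in Python. One small function gets rewritten at runtime, while the rewriting machinery and the interpreter stay fixed:

```python
import random

def score(x):
    """The one 'modifiable' component: a trivial scoring function."""
    return x * 2

def rewrite_score(factor):
    """Swap in a new version of `score`. Note the asymmetry: this
    rewriter, the interpreter, and the rest of the program stay fixed."""
    global score
    def new_score(x):
        return x * factor
    score = new_score

print(score(10))                      # 20: original behaviour
rewrite_score(random.randint(3, 5))   # mutate the one modifiable part
print(score(10))                      # 30-50: behaviour changed at runtime
```

De Grey's doubt is about scaling this up: making the scaffold itself (the rewriter, and the rewriter's rewriter) open to modification, globally rather than in one teeny tiny corner.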
de Grey goes on to discuss Kurzweil’s accelerating change singularity subtype:
I think the general concept of accelerating change is pretty much unassailable, but there are two features of it that in my view limit its predictive power.
…
Ray acknowledges that individual technologies exhibit a sigmoidal trajectory, eventually departing from accelerating change, but he rightly points out that when we want more progress we find a new way to do it and the long-term curve remains exponential. What he doesn’t mention is that the exponent over the long term is different from the short-term exponents. How much different is a key question, and it depends on how often new approaches are needed.
Again, interesting. The tendency to assume that "something will show up" if (say) Moore's law peters out is all very well, but IRL, companies, individuals, and countries can't base their future welfare on the assumption that some cool new tech will show up to save us all.
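De Grey's point about short-term versus long-term exponents is easy to see in a toy model (mine, not his or Kurzweil's; all parameters are made up). Each "technology" follows a logistic S-curve, and when one saturates a successor with a 10x higher ceiling takes over. The envelope of the successive curves still grows roughly exponentially, but with a smaller exponent than any single curve's early growth rate:

```python
import math

def logistic(t, ceiling, rate, midpoint):
    """One technology's capability over time: an S-shaped curve."""
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

# Three successive "technologies", each with a 10x higher ceiling,
# arriving 20 time units apart (hypothetical numbers).
techs = [(100, 0.5, 10), (1000, 0.5, 30), (10000, 0.5, 50)]

for t in range(0, 70, 5):
    best = max(logistic(t, c, r, m) for c, r, m in techs)
    print(f"t={t:3d}  best available capability = {best:10.1f}")
```

In this sketch each individual S-curve initially grows with exponent 0.5, but the long-run envelope grows by a factor of 10 every 20 time units, an exponent of ln(10)/20, or roughly 0.12. The gap between the two is set by how often a new approach with a higher ceiling arrives, which is exactly the "key question" de Grey identifies.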
Anyway, there’s more from de Grey in the interview.
[1]: Eric Beinhocker's The Origin of Wealth is a brilliant overview of the importance of evolutionary methods in business, technology, and the economy.
[image from sky#walker on flickr]