Tag Archives: Ray-Kurzweil

Aubrey de Grey on the Singularity

Gerontologist Aubrey de Grey gives his thoughts on the technological singularity (subtypes: intelligence explosion and accelerating change) in this interview in h+ Magazine:

I can’t see how the “event horizon” definition of the Singularity can occur other than by the creation of fully autonomous recursively self-improving digital computer systems. Without such systems, human intelligence seems to me to be an intrinsic component of the recursive self-improvement of technology in general, and limits (drastically!) how fast that improvement can be.

I’m actually not at all convinced they are even possible, in the very strong sense that would be required. Sure, it’s easy to write self-modifying code, but only as a teeny tiny component of a program, the rest of which is non-modified. I think it may simply turn out to be mathematically impossible to create digital systems that are sufficiently globally self-modifying to do the “event horizon” job.

My view, influenced by observation of the success of natural selection[1], is that “intelligence” is overrated as a driver of strictly technical progress. I would say that most technological advances come about as a result of empirical tinkering and application of social processes (like free markets and the scientific method), rather than pure thinkism and individual brilliance.

I can’t speak to whether such globally self-modifying AI is mathematically possible.

de Grey goes on to discuss Kurzweil’s accelerating change singularity subtype:

I think the general concept of accelerating change is pretty much unassailable, but there are two features of it that in my view limit its predictive power.

Ray acknowledges that individual technologies exhibit a sigmoidal trajectory, eventually departing from accelerating change, but he rightly points out that when we want more progress we find a new way to do it and the long-term curve remains exponential. What he doesn’t mention is that the exponent over the long term is different from the short-term exponents. How much different is a key question, and it depends on how often new approaches are needed.

Again, this is interesting: the tendency to assume that “something will show up” if (say) Moore’s law peters out is all very well, but in real life companies, individuals, and countries can’t base their future welfare on the assumption that some cool new tech will show up to save us all.
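De Grey’s distinction between short-term and long-term exponents is easy to illustrate numerically. The sketch below models capability as a chain of sigmoid-like paradigms: each grows exponentially at a fast internal rate until it saturates, and a new paradigm takes some “retooling” delay to appear. All of the numbers (growth rate, gain per paradigm, delay) are hypothetical, chosen only to show how the long-run exponent ends up lower than the within-paradigm one:

```python
import math

# Illustrative model of stacked technology paradigms. Every figure here is
# hypothetical; only the short- vs long-term exponent gap is the point.
per_paradigm_rate = 0.60   # ~60%/year growth within a single paradigm
gain_per_paradigm = 100.0  # each paradigm multiplies capability 100x, then saturates
retooling_delay = 3.0      # years of stagnation before the next paradigm arrives

# Time a paradigm spends growing before it saturates
years_growing = math.log(gain_per_paradigm) / per_paradigm_rate
cycle = years_growing + retooling_delay

# Effective long-run exponential rate: the same total gain, spread over the
# full cycle including the delay between paradigms
long_run_rate = math.log(gain_per_paradigm) / cycle

print(f"growth phase lasts {years_growing:.1f} years per paradigm")
print(f"short-term exponent: {per_paradigm_rate:.2f}/yr")
print(f"long-term exponent:  {long_run_rate:.2f}/yr")
```

The smaller the gain per paradigm relative to the retooling delay, the further the long-run exponent falls below the short-term one, which is de Grey’s “key question” about how often new approaches are needed.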

Anyway, there’s more from de Grey in the interview.

[1]: The Origin of Wealth is a brilliant overview of the importance of evolutionary methods in business, technology, and the economy.

[image from sky#walker on flickr]

Singularity lacking in motivation

MIT neuroengineer Edward Boyden has been speculating as to whether the singularity requires the machine-equivalent of what humans call “motivation”:

I think that focusing solely on intelligence augmentation as the driver of the future is leaving out a critical part of the analysis–namely, the changes in motivation that might arise as intelligence amplifies. Call it the need for “machine leadership skills” or “machine philosophy”–without it, such a feedback loop might quickly sputter out.

We all know that intelligence, as commonly defined, isn’t enough to impact the world all by itself. The ability to pursue a goal doggedly against obstacles, ignoring the grimness of reality (sometimes even to the point of delusion–i.e., against intelligence), is also important.

This brings us back to another Larry Niven trope. In the Known Space series the Pak Protector species (sans spoilers) is superintelligent, but utterly dedicated to the goal of protecting its young. Protectors are incapable of long-term co-operation because each individual Protector will always seek advantage only for its own gene-line; as a result, the Pak homeworld is in a state of permanent warfare.

This ties in with artificial intelligence: what good is being superintelligent if you aren’t motivated to do anything, or if you are motivated solely towards one specific task? This highlights one of the basic problems with rationality itself: Humean instrumental rationality implies that our intellect is always the slave of the passions, meaning that we use our intelligence to achieve our desires, which are predetermined and beyond our control.

But as economist Chris Dillow points out in this review of the book Animal Spirits, irrational behaviour can be valuable. Artists, inventors, entrepreneurs, and writers may create things with little rational hope of reward but – thankfully for the rest of society – they do it anyway.

And what if it turns out that any prospective superintelligent AIs wake up and work out that it isn’t worth ever trying to do anything, ever?

[via Slashdot, from Technology Review][image from spaceshipbeebe on flickr]

The slowing of technological progress

Alfred Nordmann writes in IEEE Spectrum of how technological progress is, contrary to the promises of singularitarians like Ray Kurzweil, actually slowing down:

Technological optimists maintain that the impact of innovation on our lives is increasing, but the evidence goes the other way. The author’s grandmother lived from the 1880s through the 1960s and witnessed the adoption of electricity, phonographs, telephones, radio, television, airplanes, antibiotics, vacuum tubes, transistors, and the automobile. In 1924 she became one of the first in her neighborhood to own a car. The author contends that the inventions unveiled in his own lifetime have made a far smaller difference.

Even if we were to accept, for the sake of argument, that technological innovation has truly accelerated, the line ­leading to the singularity would still be nothing but the simple-minded ­extrapolation of an existing pattern. Moore’s Law has been remarkably successful at describing and predicting the development of semiconductors, in part because it has molded that development, ever since the semiconductor manufacturing industry adopted it as its road map and began spending vast sums on R&D to meet its requirements.
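Nordmann’s “simple-minded extrapolation” is easy to state concretely: take Moore’s observed doubling period and project it forward. A minimal sketch (the base transistor count and 30-year horizon are purely illustrative; only the commonly cited ~2-year doubling period comes from the usual statement of the law):

```python
# Naive Moore's-law extrapolation: capability doubles every fixed period.
# The point is how sensitive long-range forecasts are to the assumed period,
# since small changes compound enormously over decades.
def extrapolate(count_now: float, years: float, doubling_period: float) -> float:
    """Project a transistor count forward assuming fixed exponential doubling."""
    return count_now * 2 ** (years / doubling_period)

base = 1e9  # a hypothetical billion-transistor chip today
print(extrapolate(base, 30, 2.0))  # 30-year projection at a 2-year doubling
print(extrapolate(base, 30, 2.5))  # same horizon, slightly slower doubling
```

Stretching the doubling period from 2 to 2.5 years cuts the 30-year forecast by a factor of eight, which is one reason extrapolating such curves decades ahead, singularity-style, is risky.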

…there is nothing wrong with the singular simplicity of the singularitarian myth—unless you have something against sloppy reasoning, wishful thinking, and an invitation to irresponsibility.

This is the same point made by Paul Krugman recently. Nordmann points out that most of the major life-changing technological changes of the past 100 years had all already happened by about the 1960s, with the IT revolution of the last fifty years being pretty much the only major source of technological change[1] to impact him over his lifetime.

This argument suggests that the lifestyle of citizens of industrialised countries will remain fairly stable for a lengthy period of time. It raises the serious point that the best we can hope for vis-à-vis technological change over the next few decades may be incremental improvements to existing technologies, and greater adoption of those technologies by people in poorer countries.

This would be no bad thing of course, but the suggestion that Ray Kurzweil’s predicted revolutions in nanotechnology, genetics, biotechnology, and artificial intelligence may not arrive as early as he forecasts is pretty disappointing.

It could be that, to paraphrase William Gibson, the future is in fact here, it’s just not evenly distributed.

[1]: By “major source of technological change” I mean things like antibiotics, mass personal transport, and heavier-than-air flight. There certainly have been improvements in all these areas in the last 50 years, and much wider adoption, but these have not had as great an initial impact.

[from IEEE Spectrum, via Slashdot][image from Matthew Clark Photography & Design on flickr]

Longevity personality traits

To those of us with an interest in living long enough to live forever, any predictor of exceptional longevity is worth noting. Here researchers have identified particular personality traits associated with longevity:

Because personality traits have been shown to have substantial heritable components, the researchers hypothesized that certain personality features may be important to the healthy aging observed in the offspring of centenarians.

Both the male and female offspring of centenarians scored in the low range of published norms for neuroticism and in the high range for extraversion. The women also scored comparatively high in agreeableness. Otherwise, both sexes scored within normal range for conscientiousness and openness, and the men scored within normal range for agreeableness.

Obviously you can’t do much to change your personality, but the conclusions are interesting.

[from Physorg][image from kol on flickr]

Ray Kurzweil: the Movie

Via George Dvorsky, here’s the trailer for Transcendent Man, the forthcoming film about the life and work of Ray Kurzweil:

I’m pretty convinced that Kurzweil actually believes what he says, though only time will tell whether he’s right or not. However, this trailer doesn’t do much to disrupt Kurzweil’s image as a kind of pseudo-religious techno-prophet; disengaging from the subject matter and looking purely at the language and framing, it seems to set him up as a misunderstood Messiah, and that tends to fire up my instinctive BS detector much more than speculations on the developmental curve of technology.

What’s your take on Kurzweil – deluded crank or visionary genius? Or something in between?