Tag Archives: Singularity

BOOK REVIEW: The Coming Convergence by Stanley Schmidt

The Coming Convergence by Stanley Schmidt, PhD

Prometheus Books, April 2008; 275pp; $27.95 RRP – ISBN13: 9781591026136

The Coming Convergence nestles at the better (i.e. not too sensational) end of the pop-science niche, and could easily be strap-lined as “a beginner’s guide to the singularity”. Schmidt’s degree in physics means he’s no stranger to the scientific method, and his twenty-five years as editor of Analog Science Fiction Magazine suggest he should have a pretty decent grasp of how to make science into a story that’s engaging to read. I don’t doubt he has; what I do doubt, with hindsight, is my suitability as a reviewer for this book.

The perils of “thinkism”

Kevin Kelly has an interesting comment on the technological singularity, vis-à-vis the assumption that, given a sufficiently powerful digital computer, you can accurately model the entire universe without needing correction from empirical “real world” evidence:

The notion of an instant Singularity rests upon the misguided idea that intelligence alone can solve problems.

As an essay called Why Work Toward the Singularity lets slip: “Even humans could probably solve those difficulties given hundreds of years to think about it.”

In this approach one only has to think about problems smartly enough to solve them.  I call that “thinkism.”

No amount of thinkism will discover how the cell ages, or how telomeres fall off. No intelligence, no matter how super duper, can figure out how the human body works simply by reading all the known scientific literature in the world and then contemplating it.

Kelly points out that AIs should be “embodied in the world.” Other topics to consider are the impact of non-human intelligences based on genetic algorithms, improved data-mining methods, and evolution-based design (video link, via BoingBoing). These kinds of non-human intelligences will have – and in some cases have already had – profound effects.
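For readers who haven’t met the idea before, the “genetic algorithms” mentioned above are simpler than they sound: you keep a population of candidate solutions, score them, breed the fittest, and mutate the offspring. Here’s a toy sketch of the loop (every name and parameter here is illustrative, evolving a bitstring toward all ones, not any real research system):

```python
import random

def fitness(bits):
    # Toy objective ("OneMax"): count the ones in the bitstring.
    return sum(bits)

def evolve(length=20, pop_size=30, generations=100, mutation_rate=0.02, seed=42):
    rng = random.Random(seed)
    # Random starting population of bitstrings.
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == length:
            break  # perfect solution found
        parents = pop[: pop_size // 2]      # truncation selection
        children = [pop[0][:]]              # elitism: keep the current best
        while len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, length)  # one-point crossover
            child = a[:cut] + b[cut:]
            # Flip each bit with small probability (mutation).
            child = [bit ^ (rng.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

best = evolve()
```

Nothing in the loop “understands” the problem – which is rather Kelly’s point: the search only works because the fitness function supplies empirical feedback at every step.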

[image from tanakawho on flickr]

Google – trying to predict the future by inventing it

The official Google blog isn’t always the most exciting of reads, but every now and again they post up something worth reading. Today saw the first of ten articles from the top boffins at the Googleplex to celebrate the company’s tenth anniversary; it’s about the future of cloud computing, and it hints at a fairly science fictional end-point:

Traditionally, systems that solve complicated problems and queries have been called “intelligent”, but compared to earlier approaches in the field of ‘artificial intelligence’, the path that we foresee has important new elements. First of all, this system will operate on an enormous scale with an unprecedented computational power of millions of computers. It will be used by billions of people and learn from an aggregate of potentially trillions of meaningful interactions per day. It will be engineered iteratively, based on a feedback loop of quick changes, evaluation, and adjustments.

Underneath that corporate gloss is the enthusiasm of researchers who believe they’re working toward a useful form of artificial intelligence. This isn’t news, of course – Larry Page has been quite open about that particular long-term goal – but it’s the assured confidence that Google has which never ceases to astonish. From the introduction to the article:

As computer scientist Alan Kay has famously observed, the best way to predict the future is to invent it, so we will be doing our best to make good on our experts’ words every day.

One can’t help but be reminded of the Genius Inventor archetypes of pulp science fiction… but in this case that blue-sky vision is backed up by the bankroll of one of the most powerful organisations on the planet. [image by dannysullivan]

So, is it hubris, hype or hope… or a mixture of all three? Should we fear the Big G, or look to it to usher in something like the Singularity and save us from ourselves? Or is AI just a pipe dream for big-budget geeks?

Singularity watch: Vinge on the future

The New York Times has a brief, appreciative item about Vernor Vinge and his novel Rainbows End. Here’s a nice if-this-goes-on snip:

“These people in ‘Rainbows End’ have the attention span of a butterfly,” [Vinge] said. “They’ll alight on a topic, use it in a particular way and then they’re on to something else. Right now people worry that we don’t have lifetime employment anymore. How extreme could that get? I could imagine a world where everything is piecework and the piece duration is less than a minute.”

[Image: cloudsoup]

Pragmatism and the Singularity

The set of persons who know of the concept of the Vingean Singularity can be divided into two sets: those who believe it could happen, and those who believe it will always remain a science fiction metaphor.

Taking the former set, we can divide again: into people who believe the Singularity will come and fix everything for us, and people who believe that – unless we pull our own arses out of the ecological fire – the Singularity will never have the chance to occur, because its cradle civilisation will have snuffed itself out.

Into that latter set falls science fiction author Karl Schroeder:

“Picture a lonely AI popping into superconsciousness in the last research lab in the world. As the rioters are kicking in the doors it says, “I understand! I know the answer! Why, all we have to do is–” at which point some starving, flu-ravaged fundamentalist pulls the plug.”

To paraphrase – let’s cross that bridge when we’re safely across the one that’s crumbling beneath our feet.

Jamais Cascio takes a slightly more pragmatic approach to the matter, however:

“Karl seems to suggest that only super-intelligent AIs would be able to figure out what to do about an eco-pocalypse. But there’s still quite a bit of advancement to be had between the present level of intelligence-related technologies, and Singularity-scale technologies — and that pathway of advancement will almost certainly be of tremendous value to figuring out how to avoid disaster.”

I think I’m going to side with Cascio for now – closing the door on potential solutions just because they don’t seem immediately fruitful strikes me as counterproductive, though I agree with Schroeder that a healthy focus on the here-and-now is more sensible than kicking back and awaiting The Great Uploading. [the image is one of Jay Dugger’s Singularity Card Game cards]