Tag Archives: Singularity

Davis asks Lethem about Dick

H+ Magazine puts out some interesting content, even if you don’t consider yourself a transhumanist: here’s Erik “Techgnosis” Davis interviewing Jonathan Lethem about science fiction legend Philip K. Dick:

For people familiar with Dick’s personal experiences, his biography and his temperament, the ironies in that are deep and bitter and complicated. You inevitably think: if he’d been alive, he would’ve screwed this up. He would’ve found some way to make it impossible that he could be treated with such simple reverence, because he was so distrustful of any form of institutional authority. He had a particularly deep, bitter and twisted suspiciousness about traditional literary authority and about academia. And frankly, to some extent, it’s academia that’s driven his acceptance in a canon.

When I was a kid and I discovered Philip K. Dick, I felt that I’d made this kind of soul mate contact with his work. It’s a defining experience, and it feels like it’s innate. For me, that experience was absolutely bound up in finding these books that were out of print. The books almost seemed like fictional artifacts. I couldn’t believe there was such a writer. I still remember thinking his name seemed weird or that his titles seemed preposterous to me. It was like a secret reality unfolding in my life.

Of course, H+ is as H+ does, and the Singularity gets a little look-in. However, Lethem isn’t convinced that our technologies are changing us as much as we think they are:

My best guess about such matters is that each technological transformation, up to and perhaps including the Singularity, is going to work itself out vis-à-vis “the human” according to the deep principles of all media. Defined in its largest sense, as including things like cinema, theory, drugs, computing, moving type, music, etcetera, media is utterly consciousness-transforming in ways we can no longer competently examine, given how deeply they’ve pervaded and altered the collective and individual consciousness that would be the only possible method for making that judgment. And yet, we still feel so utterly human to ourselves, and the proof is in the anthropomorphic homeliness that pervades the ostensibly exalted “media” in return. We humanize them, shame them, colonize and debunk them with our persistent modes of sex and neurosis and community and commerce. We turn them into advertisements for ourselves, rather than opportunities for shedding ourselves. At least so far.

Well worth a read.

Attention, futurist gamblers: long odds on Artificial General Intelligence

Pop-transhumanist organ H+ Magazine assigned a handful of writers to quiz AI experts at last year’s Artificial General Intelligence Conference, in order to discover how long they expect we’ll have to wait before we achieve human-equivalent intelligence in machines, what sort of level AGI will peak out at, and what AGI systems will look and/or act like, should they ever come into being.

It’s not a huge sample, to be honest – 21 respondents, of whom all but four are actively engaged in AI-related research. But then AGI isn’t a vastly populous field of endeavour, and who better to ask about its future than the people in the trenches?

The diagram below shows a plot of their estimated arrival dates for a selection of AGI milestones:

[image: AGI milestone estimates]

The gap in the middle is interesting; it implies that the basic split is between those who see AGI happening in the fairly near future, and those who see it never happening at all. Pop on over to the article for more analysis.

The supplementary questions are more interesting, at least to me, because they involve sf-style speculation. For instance:

… we focused on the “Turing test” milestone specifically, and we asked the experts to think about three possible scenarios for the development of human-level AGI: if the first AGI that can pass the Turing test is created by an open source project, the United States military, or a private company focused on commercial profit. For each of these three scenarios, we asked them to estimate the probability of a negative-to-humanity outcome if an AGI passes the Turing test. Here the opinions diverged wildly. Four experts estimated a greater than 60% chance of a negative outcome, regardless of the development scenario. Only four experts gave the same estimate for all three development scenarios; the rest of the experts reported different estimates of which development scenarios were more likely to bring a negative outcome. Several experts were more concerned about the risk from AGI itself, whereas others were more concerned that humans who controlled it could misuse AGI.

If you follow the transhumanist/AGI blogosphere at all, you’ll know that the friendly/unfriendly debate is one of the more persistent bones of contention; see Michael Anissimov’s recent post for some of the common arguments against the likelihood of friendly behaviour from superhuman AGIs, for instance. But even if we write off that omega point and consider less drastic achievements, AGI could be quite the grenade in the punchbowl:

Several experts noted potential impacts of AGI other than the catastrophic. One predicted “in thirty years, it is likely that virtually all the intellectual work that is done by trained human beings such as doctors, lawyers, scientists, or programmers, can be done by computers for pennies an hour. It is also likely that with AGI the cost of capable robots will drop, drastically decreasing the value of physical labor. Thus, AGI is likely to eliminate almost all of today’s decently paying jobs.” This would be disruptive, but not necessarily bad. Another expert thought that, “societies could accept and promote the idea that AGI is mankind’s greatest invention, providing great wealth, great health, and early access to a long and pleasant retirement for everyone.” Indeed, the experts’ comments suggested that the potential for this sort of positive outcome is a core motivator for much AGI research.

No surprise to see a positive (almost utopian) gloss on such predictions, given their sources; scientists need that optimism to propel them through the tedium of research… which means it’s down to the rest of us to think of the more mundane hazards and cultural impacts of AGI, should it ever arrive.

So here’s a starter for you: one thing that doesn’t crop up at all in that article is any discussion of AGIs as cult figureheads or full-blown religious leaders (by their own intent or otherwise). Given the fannish/cultish behaviour that software and hardware can provoke (Apple/Linux/AR evangelists, I’m looking at you), I’d say the social impact of even a relatively dim AGI is going to be a force to be reckoned with… and it comes with a built-in funding model, too.

Terminator-esque dystopias aside, how do you think Artificial General Intelligence will change the world, if at all?

The sentient Love Machine: Second Life creator planning metaverse Singularity?

Regular readers may remember me mentioning LoveMachine Inc., the new project of Second Life creator Philip Rosedale, back in November of last year. At that point, all the signs pointed toward LoveMachine being a start-up that intended to develop a reputational currency system for virtual worlds… and for all we know, it probably still is.

But thanks to SL uber-journalist Wagner James Au, we hear that Rosedale and company have added another project to the company roster. Its title? “The Brain: Can 10,000 computers become a person?”

Rosedale has long been interested in artificial intelligence, and the metaverse would seem like the ideal platform for that sort of research. Rosedale is playing his cards close to his chest at this point (and the cynic in me suspects that there’s an element of publicity-seeking involved, which I’ve gone and indulged by posting about it), but given LoveMachine’s open-frame “pick a task and join the team” approach to recruitment and the number of floating tech geniuses in San Francisco, I’d guess he’s no less likely to make progress than anyone else in the same field… provided that’s where the company’s focus stays put, of course.

And there’s no guarantee of that, either. LoveMachine’s remit is somewhat peripatetic, as is its culture, with Rosedale and chums setting up shop for the day anywhere they can find comfy seats and free wireless internet. Even if the dreams of metaverse AI come to nothing, LoveMachine may end up as a blueprint for a new sort of company that, as Au points out, sounds like something out of William Gibson’s early novels: a loose, ad-hoc collective of tech geeks and console cowboys, working wherever they can find a flat surface and some bandwidth, building new things in imaginary spaces.

Aubrey de Grey on the Singularity

Gerontologist Aubrey de Grey gives his thoughts on the technological singularity (subtypes: intelligence explosion and accelerating change) in this interview in h+ Magazine:

I can’t see how the “event horizon” definition of the Singularity can occur other than by the creation of fully autonomous recursively self-improving digital computer systems. Without such systems, human intelligence seems to me to be an intrinsic component of the recursive self-improvement of technology in general, and limits (drastically!) how fast that improvement can be.

I’m actually not at all convinced they are even possible, in the very strong sense that would be required. Sure, it’s easy to write self-modifying code, but only as a teeny tiny component of a program, the rest of which is non-modified. I think it may simply turn out to be mathematically impossible to create digital systems that are sufficiently globally self-modifying to do the “event horizon” job.
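To see what he means, here’s a toy Python sketch of my own (not anything de Grey describes): it’s trivial to have a program rebind one small function in itself at runtime, but the modifying machinery, the rest of the program and the interpreter all stay fixed.

```python
# Toy illustration: a program that "self-modifies", but only in one
# tiny component, while everything else stays fixed.

def score(x):
    return x * 2  # the single component we allow to change

def improve():
    # Rebind `score` to a new implementation at runtime. This function,
    # the rest of the module and the interpreter are all untouched;
    # the "self-modification" is strictly local.
    global score
    old = score
    score = lambda x: old(x) + 1

print(score(10))  # 20
improve()
print(score(10))  # 21: one small piece changed, nothing global
```

The strongly global version de Grey doubts would need something more like improve() rewriting itself, the surrounding program and ultimately the substrate it runs on, recursively, with no fixed outer layer.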

My view, influenced by observation of the success of natural selection[1], is that “intelligence” is overrated as a driver of strictly technical progress. I would say that most technological advances come about as a result of empirical tinkering and application of social processes (like free markets and the scientific method), rather than pure thinkism and individual brilliance.

I can’t speak to the question of whether globally self-modifying AI is mathematically possible.

de Grey goes on to discuss Kurzweil’s accelerating change singularity subtype:

I think the general concept of accelerating change is pretty much unassailable, but there are two features of it that in my view limit its predictive power.

Ray acknowledges that individual technologies exhibit a sigmoidal trajectory, eventually departing from accelerating change, but he rightly points out that when we want more progress we find a new way to do it and the long-term curve remains exponential. What he doesn’t mention is that the exponent over the long term is different from the short-term exponents. How much different is a key question, and it depends on how often new approaches are needed.
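To put some entirely hypothetical numbers on that (mine, not de Grey’s or Kurzweil’s): if each paradigm’s S-curve delivers a fixed total gain, the long-run exponent is set by how often new paradigms arrive, not by how steep any single curve gets mid-climb.

```python
import math

# Entirely hypothetical figures, purely to illustrate the shape of the
# argument: each paradigm's S-curve delivers a 100x total improvement,
# doubling every year in its steep mid-section, but a replacement
# paradigm is only found every 10 years.
gain_per_paradigm = 100.0
midcurve_doubling_years = 1.0
years_between_paradigms = 10.0

short_term_rate = math.log(2) / midcurve_doubling_years
long_term_rate = math.log(gain_per_paradigm) / years_between_paradigms

print(f"short-term exponent: {short_term_rate:.2f} per year")  # ~0.69
print(f"long-term exponent:  {long_term_rate:.2f} per year")   # ~0.46

# Both curves are exponential, but the long-run exponent depends on how
# often new approaches are needed and found, which is exactly the
# question de Grey says the accelerating-change picture leaves open.
```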

Again, interesting: the tendency to assume that “something will show up” if (say) Moore’s Law peters out is all very well, but IRL companies and individuals and countries can’t base their future welfare on the assumption that some cool new tech will arrive to save us all.

Anyway, there’s more from de Grey in the interview.


[1]: The Origin of Wealth is a brilliant overview of the importance of evolutionary methods in business, technology, and the economy.

[image from sky#walker on flickr]

Singularity lacking in motivation

MIT neuroengineer Edward Boyden has been speculating as to whether the singularity requires the machine-equivalent of what humans call “motivation”:

I think that focusing solely on intelligence augmentation as the driver of the future is leaving out a critical part of the analysis–namely, the changes in motivation that might arise as intelligence amplifies. Call it the need for “machine leadership skills” or “machine philosophy”–without it, such a feedback loop might quickly sputter out.

We all know that intelligence, as commonly defined, isn’t enough to impact the world all by itself. The ability to pursue a goal doggedly against obstacles, ignoring the grimness of reality (sometimes even to the point of delusion–i.e., against intelligence), is also important.

This brings us back to another Larry Niven trope. In the Known Space series, the Pak Protector species is (sans spoilers) superintelligent but utterly dedicated to the goal of protecting its young. Protectors are incapable of long-term co-operation, because each individual will always seek advantage for its own gene-line alone, and so the Pak homeworld exists in a state of permanent warfare.

This ties in with artificial intelligence: what good is being superintelligent if you aren’t motivated to do anything, or if you’re motivated solely toward one specific task? This highlights one of the basic problems with rationality itself: Humean instrumental rationality implies that our intellect is always the slave of the passions, meaning that we use our intelligence to achieve our desires, which are predetermined and beyond our control.
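As a minimal sketch of that Humean picture (a toy of my own devising, with a deliberately silly goal): the utility function, the “passion”, is fixed at construction, and all the intelligence lives in the search for means.

```python
from typing import Callable, Iterable

class HumeanAgent:
    """Intellect as the slave of the passions: the desire is fixed,
    only the means get optimised."""

    def __init__(self, utility: Callable[[str], float]):
        self.utility = utility  # the predetermined "passion", never modified

    def choose(self, actions: Iterable[str]) -> str:
        # All the "intelligence" lives here: pick whichever action
        # best serves the fixed desire.
        return max(actions, key=self.utility)

# A deliberately silly fixed desire, just to make the point:
agent = HumeanAgent(utility=lambda a: a.count("paperclip"))
print(agent.choose([
    "write a poem",
    "make a paperclip",
    "make paperclip after paperclip",
]))
# -> "make paperclip after paperclip": greater intelligence just means
#    better pursuit of the same unalterable goal.
```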

But as economist Chris Dillow points out in this review of the book Animal Spirits, irrational behaviour can be valuable. Artists, inventors, entrepreneurs, and writers may create things with little rational hope of reward but – thankfully for the rest of society – they do it anyway.

And what if it turns out that any prospective superintelligent AIs wake up and work out that it isn’t worth ever trying to do anything, ever?

[via Slashdot, from Technology Review]

[image from spaceshipbeebe on flickr]