How I learned to stop worrying and love the Singularity

Fetch your posthumanist popcorn, folks; this one could roll for a while. The question: should we fear the possibility of the Singularity? In the red corner, Michael Anissimov brings the case in favour:

Why must we recoil against the notion of a risky superintelligence? Why can’t we see the risk, and confront it by trying to craft goal systems that carry common sense human morality over to AGIs? This is a difficult task, but the likely alternative is extinction. Powerful AGIs will have no automatic reason to be friendly to us! They will be much more likely to be friendly if we program them to care about us, and build them from the start with human-friendliness in mind.

Humans overestimate our robustness. Conditions have to be just right for us to keep living. If AGIs decided to remove the atmosphere or otherwise alter it to pursue their goals, we would be toast. If temperatures on the surface changed by more than a few dozen degrees up or down, we would be toast. If natural life had to compete with AI-crafted cybernetic organisms, it could destroy the biosphere on which we depend. There are millions of ways in which powerful AGIs with superior technology could accidentally make our lives miserable, simply by not taking our preferences into account. Our preferences are not a magical mist that can persuade any type of mind to give us basic respect. They are just our preferences, and we happen to be programmed to take each other’s preferences deeply into account, in ways we are just beginning to understand. If we assume that AGI will inherently contain all this moral complexity without anyone doing the hard work of programming it in, we will be unpleasantly surprised when these AGIs become more intelligent and powerful than ourselves.

We probably make thousands of species extinct per year through our pursuit of instrumental goals; why is it so hard to imagine that AGI could do the same to us?

In the blue corner, Kyle Munkittrick argues that Anissimov is ascribing impossible levels of agency to artificial intelligences:

My point is this: if Skynet had debuted on a closed computer network, it would have been trapped within that network. Even if it escaped and “infected” every other system (which is dubious, for reasons of necessary computing power on a first iteration super AGI), the A.I. would still not have any access to physical reality. Singularity arguments rely upon the presumption that technology can work without humans. It can’t. If A.I. decided to obliterate humanity by launching all the nukes, it’d also annihilate the infrastructure that powers it. Methinks self-preservation should be a basic feature of any real AGI.

In short: any super AGI that comes along is going to need some helping hands out in the world to do its dirty work.

B-b-but, the Singulitarians argue, “an AI could fool a person into releasing it because the AI is very smart and therefore tricksy.” This argument is preposterous. Philosophers constantly argue as if every hypothetical person is either a dullard or a hyper-self-aware. The argument that AI will trick people is an example of the former. Seriously, the argument is that very smart scientists will be conned by an AGI they helped to program. And so what if they do? Is the argument that a few people are going to be hypnotized into opening up a giant factory run only by the A.I., where every process in the vertical and the horizontal (as in economic infrastructure, not The Outer Limits) can be run without human assistance? Is that how this is going to work? I highly doubt it. Even the most brilliant AGI is not going to be able to restructure our economy overnight.

As is traditional, I’m taking an agnostic stance on this one (yeah, yeah, I know – I’ve got bruises on my arse from sitting on the fence). The arguments against the risk are pretty sound, but I’m reminded of the original meaning behind the term “singularity”, namely an event horizon (physical or conceptual) that we’re unable to see beyond. As Anissimov points out, we won’t know what AGI is capable of until it exists, at which point it may be too late. However, positing an AGI with godlike powers from the get-go is very much a worst-case scenario. The compromise position would appear to be something along the lines of “proceed with caution”… but compromise positions aren’t exactly fashionable these days, are they? 🙂

So, let’s open the floor to debate: do you think AGI is possible? And if it is possible, how likely is it to be a threat to its creators?

Hackers rake off big bucks from EU carbon exchange

Another case of life imitating (very) contemporary science fiction, the book in question being Ian McDonald’s excellent The Dervish House: some sneaky shenanigans via the compromised accounts of Czech traders have allowed some black-hat hacker types to rake off millions of dollars from the EU carbon trading exchange [via SlashDot]. Worse still, this is far from the first embarrassment of this type that the exchange has suffered…

Brixton reimagined as favela for robot workers

Urban futurism, offered without comment: via the incomparable BLDGBLOG, this image by the wonderfully-monicker’d Kibwe X-Kalibre Tavares is called “Southwyck House”, and is part of a set of similar images “of what Brixton could be like if it were to develop as a disregarded area inhabited by London’s new robot workforce […] the population has rocketed and unplanned cheap quick additions have been made to the skyline.”

[Click the image to see the original in bigger sizes on Flickr; all rights are reserved by Tavares, and the image is reproduced here under Fair Use terms. Please contact for immediate take-down if required.]

Southwyck House by Kibwe X-Kalibre Tavares

My first thought on seeing that? Kowloon Walled City. Dense urban populations lead inevitably to an increased density of marginal and/or interstitial regions…

Careless whispers

This just in: Chinese whispers happen on real-time social communications platforms just as they do in real life, only faster!

Here in the UK yesterday there was a brief Twitter panic about a non-existent shooting in London’s Oxford Circus, highlighting the problems inherent to the 24-hour global peer-to-peer news cycle: namely that when an erroneous signal gets out onto the network, it’ll probably propagate more quickly than the less sensational truth of the matter. Cue lots of “bad Twitter!” punditry, which largely misses the point: this phenomenon isn’t new, it’s just a faster version of the good ol’ scuttlebutt. Some sensible thinking from GigaOM:

Traditional media have struggled with the issue as well, with newspapers often running corrections days or weeks after a mistake was made, with no real indication of what the actual error was. In a sense, Twitter is like a real-time, distributed version of a news-wire service such as Reuters or Associated Press; when those services post something that is wrong, they simply send out an update to their customers, and hope that no one has published it in the paper or online yet.

Twitter’s great strength is that it allows anyone to publish, and re-publish, information instantly, and distribute that information to thousands of people within minutes. But when a mistake gets distributed, there’s no single source that can send out a correction. That’s the double-edged sword such a network represents. Perhaps — since we all make up this real-time news network — it’s incumbent on all of us to do the correcting, even if it’s just by re-tweeting corrections and updates as eagerly as we re-tweeted the original.

Taking responsibility for our own contributions to the global conversation? What a controversial suggestion! Of course, the problem is that “nothing much happening in Oxford Circus after all” just isn’t as interesting a conversational nugget, and therefore doesn’t get passed on as quickly or frequently. (Compare and contrast with the old aphorism that good news doesn’t sell newspapers.)
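
(A toy illustration for the simulation-minded: the asymmetry above falls out of even the crudest cascade model. The sketch below is entirely my own back-of-the-envelope assumption; the network shape, the pass-along probabilities and every name in it are invented for the purpose. It seeds a juicy rumour and a dull correction into the same random follower graph and lets each spread with a different retweet probability.)

```python
# Toy model: a sensational rumour vs. a dull correction spreading through
# the same random "follower" network. Purely illustrative; the graph,
# the probabilities and all parameter names are my own assumptions.
import random

random.seed(42)

N_USERS = 10_000       # people on the network
AVG_FOLLOWERS = 20     # how many people see each user's retweets

# Build a random follower graph: who sees whose messages.
followers = {u: random.sample(range(N_USERS), AVG_FOLLOWERS)
             for u in range(N_USERS)}

def spread(p_pass_along, rounds=10):
    """Independent-cascade spread: each newly informed user passes the
    message to each of their followers with probability p_pass_along."""
    informed = {0}      # patient zero
    frontier = {0}
    reach = []
    for _ in range(rounds):
        new = set()
        for user in frontier:
            for f in followers[user]:
                if f not in informed and random.random() < p_pass_along:
                    new.add(f)
        informed |= new
        frontier = new
        reach.append(len(informed))
    return reach

# "Shooting at Oxford Circus!" is juicier than "nothing happened after
# all", so it gets a higher pass-along probability (numbers invented).
rumour = spread(p_pass_along=0.10)
correction = spread(p_pass_along=0.03)

for t, (r, c) in enumerate(zip(rumour, correction), 1):
    print(f"round {t:2d}: rumour reached {r:6d}, correction reached {c:6d}")
```

With those made-up numbers, the rumour’s effective branching factor is 20 × 0.10 = 2, so it snowballs across the network; the correction’s is 20 × 0.03 = 0.6, so it fizzles out before it gets anywhere. Which is, of course, exactly the “nothing much happening after all” problem.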

Related to this is the rush-to-explain (and rush-to-blame) that follows a story, real or otherwise: see, for example, the instant dogpile of people pinning the blame for the Tucson tragedy on Sarah Palin*. Again, it’s an age-old process that’s been scaled up to global size and accelerated to the speed of electrons through wires, and I suspect that we’ll adjust to it eventually: like a teenager adjusting to his or her lengthening limbs, we’re bound to knock a few things over as we grow.

[ * In the name of pre-emptively deflecting my own dogpile, I think that the political rhetoric from all sides in the US has demonstrably contributed to escalating tensions, and I find Sarah Palin an utterly repugnant exploiter of ignorance, be it her own or other people’s. However, the rush to find her prints on the metaphorical pistol-grip was not only counterproductive (that sort of political fire thrives on the oxygen of martyrdom), but was also precisely the same sort of demonisation of ideological figureheads that the left accuses the right of relying on. The further apart ideologically the two polar positions appear to be, the more alike in character they seem to become… and while it might be possible to pin that problem on The New Media™, I don’t think it’ll stick. More depressing still were the countless articles decrying Palin’s “it’s all about me!” attitude to the tragedy, coming as they did in the wake of half the damned internet telling Palin it was all about her. C’mon, folks, work it out. ]

SpaceFence: The Movie

Via FlowingData, here’s a sort of promo-documentary-advertorial-edutainment spot for Lockheed Martin’s Space Fence system, designed to protect us from rogue bits of crap colliding in orbit above us.

As remarked at FD, I think it’s likely that a lot of the visualisations here are speculative, but the result is something that looks momentarily convincing in that ultimately-unconvincing-once-you’ve-thought-about-it Hollywood way – designed to sell the concept rather than the actuality, in other words. (I expect the real Space Fence control room will look a lot less like the bridge of a space opera dreadnought… though there’s a part of me that wishes that weren’t the case.)

Makes sense, really; if you want to convince people that putting in expensive systems to mitigate (or at least monitor) potential existential risk problems is worthwhile, making them look a bit sexy is a good tactic. I suppose this is a kind of design fiction, too…

[ Note: my assumption that the footage in the video partakes in artistic license is just that, an assumption; I would very much like to see the real thing, or evidence that the footage represents the reality. If anyone at Lockheed is reading, I’d love to drop in and take a closer look… though you’d probably have to stump up my airfare. 🙂 ]
