Tag Archives: Singularity

How I learned to stop worrying and love the Singularity

Fetch your posthumanist popcorn, folks; this one could roll for a while. The question: should we fear the possibility of the Singularity? In the red corner, Michael Anissimov brings the case in favour:

Why must we recoil against the notion of a risky superintelligence? Why can’t we see the risk, and confront it by trying to craft goal systems that carry common sense human morality over to AGIs? This is a difficult task, but the likely alternative is extinction. Powerful AGIs will have no automatic reason to be friendly to us! They will be much more likely to be friendly if we program them to care about us, and build them from the start with human-friendliness in mind.

Humans overestimate our robustness. Conditions have to be just right for us to keep living. If AGIs decided to remove the atmosphere or otherwise alter it to pursue their goals, we would be toast. If temperatures on the surface changed by more than a few dozen degrees up or down, we would be toast. If natural life had to compete with AI-crafted cybernetic organisms, it could destroy the biosphere on which we depend. There are millions of ways in which powerful AGIs with superior technology could accidentally make our lives miserable, simply by not taking our preferences into account. Our preferences are not a magical mist that can persuade any type of mind to give us basic respect. They are just our preferences, and we happen to be programmed to take each other’s preferences deeply into account, in ways we are just beginning to understand. If we assume that AGI will inherently contain all this moral complexity without anyone doing the hard work of programming it in, we will be unpleasantly surprised when these AGIs become more intelligent and powerful than ourselves.

We probably make thousands of species extinct per year through our pursuit of instrumental goals; why is it so hard to imagine that AGI could do the same to us?

In the blue corner, Kyle Munkittrick argues that Anissimov is ascribing impossible levels of agency to artificial intelligences:

My point is this: if Skynet had debuted on a closed computer network, it would have been trapped within that network. Even if it escaped and “infected” every other system (which is dubious, for reasons of the computing power necessary for a first-iteration super AGI), the A.I. would still not have any access to physical reality. Singularity arguments rely upon the presumption that technology can work without humans. It can’t. If A.I. decided to obliterate humanity by launching all the nukes, it’d also annihilate the infrastructure that powers it. Methinks self-preservation should be a basic feature of any real AGI.

In short: any super AGI that comes along is going to need some helping hands out in the world to do its dirty work.

B-b-but, the Singularitarians argue, “an AI could fool a person into releasing it because the AI is very smart and therefore tricksy.” This argument is preposterous. Philosophers constantly argue as if every hypothetical person is either a dullard or hyper-self-aware. The argument that AI will trick people is an example of the former. Seriously, the argument is that very smart scientists will be conned by an AGI they helped to program. And so what if they do? Is the argument that a few people are going to be hypnotized into opening up a giant factory run only by the A.I., where every process in the vertical and the horizontal (as in economic infrastructure, not The Outer Limits) can be run without human assistance? Is that how this is going to work? I highly doubt it. Even the most brilliant AGI is not going to be able to restructure our economy overnight.

As is traditional, I’m taking an agnostic stance on this one (yeah, yeah, I know – I’ve got bruises on my arse from sitting on the fence); the arguments against the risk are pretty sound, but I’m reminded of the original meaning behind the term “singularity”, namely an event horizon (physical or conceptual) that we’re unable to see beyond. As Anissimov points out, we won’t know what AGI is capable of until it exists, at which point it may be too late. However, positing an AGI with godlike powers from the get-go is very much a worst-case scenario. The compromise position would appear to be something along the lines of “proceed with caution”… but compromise positions aren’t exactly fashionable these days, are they? 🙂

So, let’s open the floor to debate: do you think AGI is possible? And if it is possible, how likely is it to be a threat to its creators?

When the going gets weird, the weird turn pro*

Things are getting real weird real fast. Did you hear about the Germans who insisted on the right to “opt out” of Google Street View and have their houses pixelated? Well, now they’re being targeted by pro-Google activism that consists of drive-by egg-raids and labels stuck to letterboxes proclaiming that “Google’s cool” [via TechDirt].

Double-U. Tee. Eff?

For the record, I think the folk opting out of Street View are misguided, and the egg-raiders are idiots; no advocacy in this post, I assure you. But think a moment on the high weirdness of this situation, about the mad wild flux of global culture that has made it possible. Just a decade ago, this would have been a gonzo near-future sf plot that any sane editor would have bounced for being charmingly implausible…

I’m sure this is the part where I’m supposed to wonder “how did we get here from there?”, but that’s the weirdest thing of all – I know exactly how we got here from there, because I’ve made a point of watching it unfold like a card-sharp’s prestidigitation, but I still can’t quite tell how the trick was done: it’s hopeful and baffling and wonderful and insane and terrifying all at once.

And things are likely to get weirder as the times get tougher. I’m starting to think Brenda may have a point; the Singularity’s already started, it just doesn’t look anything like the shiny transcendent technotopia we thought it would be. Which shouldn’t be surprising, really… but it still is.

[ * And a posthumous hat-tip to the late Doctor Gonzo for the headline; I resolutely believe he would be taking a similar horrified joy – or perhaps a joyous horror, if there’s a difference – in the headlines of the moment. We’ve bought the ticket; now we’re taking the ride. ]

The Future is Now: the Recession and the Steep Upward Slope

It’s a recession. The housing market is tough, the job market is worse, and the country is so sharply divided we’ll be lucky if anything useful happens in Washington, D.C. in the next two years. Whole economies are backpedaling into austerity programs. This does not feel like a ride up the steep right-hand curve of the emerging technological singularity. But I think that’s where we are – in that place of so much change we can barely keep up, and in a time when many people are falling so far behind that they will never catch up.

Nerd rapture, redux: Annalee Newitz on why the Singularity ain’t gonna save us

Well, this should infuriate the usual suspects (and provoke more measured and considered responses from a few others). io9 ed-in-chief Annalee Newitz steps up to the plate to lay the smackdown on the Singularity as glorious transcendent happily-ever-after eschaton:

Though it’s easy to parody the poor guy who talked about potato chips after the Singularity, his faith is emblematic of Singularitarian beliefs. Many scientifically minded people believe the Singularity is a time in the future when human civilization will be completely transformed by technologies, specifically A.I. and machines that can control matter at an atomic level (for a full definition of what I mean by the Singularity, read my backgrounder on it). The problem with this idea is that it’s a completely unrealistic view of how technology changes everyday life.

Case in point: Penicillin. Discovered because of advances in biology, and refined through advances in biotechnology, this drug cured many diseases that had been killing people for centuries. It was in every sense of the term a Singularity-level technology. And yet in the long term, it wound up leaving us just as vulnerable to disease. Bacteria mutated, creating nastier infections than we’ve ever seen before. Now we’re turning to probiotics rather than antibiotics; we’re investigating gene therapies to surmount the troubles we’ve created by massively deploying penicillin and its derivatives.

That is how Singularity-level technologies work in real life. They solve dire problems, sure. They save lives. But they also create problems we’d never imagined – problems that might have been inconceivable before that Singularity tech was invented.

What I’m saying is that the potato chip won’t taste better after the Singularity because the future isn’t the present on steroids. The future is a mutated bacteria that you never saw coming.

Newitz’s point here, as I understand it, isn’t that technological leaps won’t occur; it’s that those leaps will come with the same sorts of baggage and side-effects that every other technological leap in history has carried with it. The more serious transhumanist commentators will doubtless make the point that they’ve been trying to curb this blue-sky tendency (and kudos to them for doing so), but they’re struggling against a very old human habit – namely the projection of utopian longing onto a future that’s assumed to be transformed by some more-than-human agency.

The more traditional agency of choice has been the local version of the godhead, but technology has usurped its place in the post-theistic classes of the developed world by glomming on to the same psychological yearnings… which is why the Ken MacLeod-coined “Rapture of the Nerds” dig is well-earned in many cases. The more blindly optimistic someone is about “the Singularity” solving all human problems in a blinding flash of transcendence, the less critical thought they tend to have given to what they’re talking about*; faith isn’t necessarily blind, but it has a definite tendency toward myopia, and theists hold no monopoly on that.

Newitz closes out with the following:

All I’m saying is that if you’re looking for a narrative that explains the future, consider this: Does the narrative promise you things that sound like religion? A world where today’s problems are fixed, but no new problems have arisen? A world where human history is irrelevant? If yes, then you’re in the fog of Singularity thinking.

But if that narrative deals with consequences, complications, and many possible outcomes, then you’re getting closer to something like a potential truth. It may not be as tasty as potato chips, but it’s what we’ve got. Might as well get ready for the mutation to begin.

Amen, sister. 🙂

[ * I fully include myself in this castigation; when I started writing for Futurismic, I was a naive and uncritical regurgitator of received wisdoms, though I like to think I’ve moved on somewhat since then. ]

NEW FICTION: IN PACMANDU by Lavie Tidhar

I’m very pleased to welcome globetrotting flyer-in-the-face-of-convention Lavie Tidhar back to the digital pages of Futurismic, and once again it’s with a story that stretches – or at least seems to stretch – our guidelines to breaking point, upsetting a few apple-carts full of sacred cows along the way. “In Pacmandu” is something a little out of the ordinary, even for us… and perhaps even (dare I say it?) for Lavie himself.

Are you ready? Then begin!

In Pacmandu

by Lavie Tidhar

  • GoA universe, Sigma Quadrant, Berezhinsky Planetoid, sys-ops command module

It has been two weeks since the disappearance of the Wu expedition.

We are gathered at the sys-ops command module of the Berezhinsky Planetoid, Sigma Quadrant of the Guilds of Ashkelon universe. The light is soft. Music plays unobtrusively in the background. Outside the windows it is snowing lines of code.

Present in the command module: myself, CodeDolphin, Sergei and Hong.

Our task –

‘Find out the fuck happened.’