Fetch your posthumanist popcorn, folks; this one could roll for a while. The question: should we fear the possibility of the Singularity? In the red corner, Michael Anissimov brings the case in favour…
Why must we recoil from the notion of a risky superintelligence? Why can’t we see the risk, and confront it by trying to craft goal systems that carry common-sense human morality over to AGIs? This is a difficult task, but the likely alternative is extinction. Powerful AGIs will have no automatic reason to be friendly to us! They will be much more likely to be friendly if we program them to care about us, and build them from the start with human-friendliness in mind.
Humans overestimate our robustness. Conditions have to be just right for us to keep living. If AGIs decided to remove the atmosphere or otherwise alter it to pursue their goals, we would be toast. If temperatures on the surface changed by more than a few dozen degrees up or down, we would be toast. If natural life had to compete with AI-crafted cybernetic organisms, it could destroy the biosphere on which we depend. There are millions of ways in which powerful AGIs with superior technology could accidentally make our lives miserable, simply by not taking our preferences into account. Our preferences are not a magical mist that can persuade any type of mind to give us basic respect. They are just our preferences, and we happen to be programmed to take each other’s preferences deeply into account, in ways we are just beginning to understand. If we assume that AGI will inherently contain all this moral complexity without anyone doing the hard work of programming it in, we will be unpleasantly surprised when these AGIs become more intelligent and powerful than ourselves.
We probably make thousands of species extinct per year through our pursuit of instrumental goals; why is it so hard to imagine that AGI could do the same to us?
In the blue corner, Kyle Munkittrick argues that Anissimov is ascribing impossible levels of agency to artificial intelligences:
My point is this: if Skynet had debuted on a closed computer network, it would have been trapped within that network. Even if it escaped and “infected” every other system (which is dubious, given the computing power a first-iteration super AGI would require), the A.I. would still not have any access to physical reality. Singularity arguments rely upon the presumption that technology can work without humans. It can’t. If A.I. decided to obliterate humanity by launching all the nukes, it’d also annihilate the infrastructure that powers it. Methinks self-preservation should be a basic feature of any real AGI.
In short: any super AGI that comes along is going to need some helping hands out in the world to do its dirty work.
B-b-but, the Singularitarians argue, “an AI could fool a person into releasing it because the AI is very smart and therefore tricksy.” This argument is preposterous. Philosophers constantly argue as if every hypothetical person is either a dullard or hyper-self-aware. The argument that AI will trick people is an example of the former. Seriously, the argument is that very smart scientists will be conned by an AGI they helped to program. And so what if they are? Is the argument that a few people are going to be hypnotized into opening up a giant factory run only by the A.I., where every process in the vertical and the horizontal (as in economic infrastructure, not The Outer Limits) can be run without human assistance? Is that how this is going to work? I highly doubt it. Even the most brilliant AGI is not going to be able to restructure our economy overnight.
As is traditional, I’m taking an agnostic stance on this one (yeah, yeah, I know – I’ve got bruises on my arse from sitting on the fence). The arguments against the risk are pretty sound, but I’m reminded of the original meaning behind the term “singularity”, namely an event horizon (physical or conceptual) that we’re unable to see beyond. As Anissimov points out, we won’t know what AGI is capable of until it exists, at which point it may be too late. However, positing an AGI with godlike powers from the get-go is very much a worst-case scenario. The compromise position would appear to be something along the lines of “proceed with caution”… but compromise positions aren’t exactly fashionable these days, are they? 🙂
So, let’s open the floor to debate: do you think AGI is possible? And if it is possible, how likely is it to be a threat to its creators?