
How I learned to stop worrying and love the Singularity

Fetch your posthumanist popcorn, folks; this one could roll for a while. The question: should we fear the possibility of the Singularity? In the red corner, Michael Anissimov brings the case in favour:

Why must we recoil against the notion of a risky superintelligence? Why can’t we see the risk, and confront it by trying to craft goal systems that carry common sense human morality over to AGIs? This is a difficult task, but the likely alternative is extinction. Powerful AGIs will have no automatic reason to be friendly to us! They will be much more likely to be friendly if we program them to care about us, and build them from the start with human-friendliness in mind.

Humans overestimate our robustness. Conditions have to be just right for us to keep living. If AGIs decided to remove the atmosphere or otherwise alter it to pursue their goals, we would be toast. If temperatures on the surface changed by more than a few dozen degrees up or down, we would be toast. If natural life had to compete with AI-crafted cybernetic organisms, it could destroy the biosphere on which we depend. There are millions of ways in which powerful AGIs with superior technology could accidentally make our lives miserable, simply by not taking our preferences into account. Our preferences are not a magical mist that can persuade any type of mind to give us basic respect. They are just our preferences, and we happen to be programmed to take each other’s preferences deeply into account, in ways we are just beginning to understand. If we assume that AGI will inherently contain all this moral complexity without anyone doing the hard work of programming it in, we will be unpleasantly surprised when these AGIs become more intelligent and powerful than ourselves.

We probably make thousands of species extinct per year through our pursuit of instrumental goals; why is it so hard to imagine that AGI could do the same to us?

In the blue corner, Kyle Munkittrick argues that Anissimov is ascribing impossible levels of agency to artificial intelligences:

My point is this: if Skynet had been debuted on a closed computer network, it would have been trapped within that network. Even if it escaped and “infected” every other system (which is dubious, for reasons of necessary computing power on a first iteration super AGI), the A.I. would still not have any access to physical reality. Singularity arguments rely upon the presumption that technology can work without humans. It can’t. If A.I. decided to obliterate humanity by launching all the nukes, it’d also annihilate the infrastructure that powers it. Methinks self-preservation should be a basic feature of any real AGI.

In short: any super AGI that comes along is going to need some helping hands out in the world to do its dirty work.

B-b-but, the Singularitarians argue, “an AI could fool a person into releasing it because the AI is very smart and therefore tricksy.” This argument is preposterous. Philosophers constantly argue as if every hypothetical person is either a dullard or hyper-self-aware. The argument that AI will trick people is an example of the former. Seriously, the argument is that very smart scientists will be conned by an AGI they helped to program. And so what if they do? Is the argument that a few people are going to be hypnotized into opening up a giant factory run only by the A.I., where every process in the vertical and the horizontal (as in economic infrastructure, not The Outer Limits) can be run without human assistance? Is that how this is going to work? I highly doubt it. Even the most brilliant AGI is not going to be able to restructure our economy overnight.

As is traditional, I’m taking an agnostic stance on this one (yeah, yeah, I know – I’ve got bruises on my arse from sitting on the fence); the arguments against the risk are pretty sound, but I’m reminded of the original meaning behind the term “singularity”, namely an event horizon (physical or conceptual) that we’re unable to see beyond. As Anissimov points out, we won’t know what AGI is capable of until it exists, at which point it may be too late. However, positing an AGI with godlike powers from the get-go is very much a worst-case scenario. The compromise position would appear to be something along the lines of “proceed with caution”… but compromise positions aren’t exactly fashionable these days, are they? 🙂

So, let’s open the floor to debate: do you think AGI is possible? And if it is possible, how likely is it to be a threat to its creators?

The end of geography

Dovetailing neatly with discussions of Wikileaks and Anonymous, here’s a piece at Prospect Magazine that reads the last rites for geography as the dominant shaper of human history [via BigThink]. The West won’t be the best forever, y’know:

The west dominates the world not because its people are biologically superior, its culture better, or its leaders wiser, but simply because of geography. When the world warmed up at the end of the last ice age, making farming possible, it was towards the western end of Eurasia that plants and animals were first domesticated. Proto-westerners were no smarter or harder working than anyone else; they just lived in the region where geography had put the densest concentrations of potentially domesticable plants and animals. Another 2,000 years would pass before domestication began in other parts of the world, where resources were less abundant. Holding onto their early lead, westerners went on to be the first to build cities, create states, and conquer empires. Non-westerners followed suit everywhere from Persia to Peru, but only after further time lags.

Yet the west’s head start in agriculture some 12,000 years ago does not tell us everything we need to know. While geography does explain history’s shape, it does not do so in a straightforward way. Geography determines how societies develop; but, simultaneously, how societies develop determines what geography means.

[…]

As we can see from the past, while geography shapes the development of societies, development also shapes what geography means—and all the signs are that, in the 21st century, the meanings of geography are changing faster than ever. Geography is, we might even say, losing meaning. The world is shrinking, and the greatest challenges we face—nuclear weapons, climate change, mass migration, epidemics, food and water shortages—are all global problems. Perhaps the real lesson of history, then, is that by the time the west is no longer the best, the question may have ceased to matter very much.

Amen. It’d be nice if we could get past our current stage of global socialisation, which might be best compared to a group of people sat in a leaking boat arguing over who should do the most bailing.