Fetch your posthumanist popcorn, folks; this one could roll for a while. The question: should we fear the possibility of the Singularity? In the red corner, Michael Anissimov brings the case in favour…
Why must we recoil against the notion of a risky superintelligence? Why can’t we see the risk, and confront it by trying to craft goal systems that carry common sense human morality over to AGIs? This is a difficult task, but the likely alternative is extinction. Powerful AGIs will have no automatic reason to be friendly to us! They will be much more likely to be friendly if we program them to care about us, and build them from the start with human-friendliness in mind.
Humans overestimate our robustness. Conditions have to be just right for us to keep living. If AGIs decided to remove the atmosphere or otherwise alter it to pursue their goals, we would be toast. If temperatures on the surface changed by more than a few dozen degrees up or down, we would be toast. If natural life had to compete with AI-crafted cybernetic organisms, it could destroy the biosphere on which we depend. There are millions of ways in which powerful AGIs with superior technology could accidentally make our lives miserable, simply by not taking our preferences into account. Our preferences are not a magical mist that can persuade any type of mind to give us basic respect. They are just our preferences, and we happen to be programmed to take each other’s preferences deeply into account, in ways we are just beginning to understand. If we assume that AGI will inherently contain all this moral complexity without anyone doing the hard work of programming it in, we will be unpleasantly surprised when these AGIs become more intelligent and powerful than ourselves.
We probably make thousands of species extinct per year through our pursuit of instrumental goals; why is it so hard to imagine that AGI could do the same to us?
In the blue corner, Kyle Munkittrick argues that Anissimov is ascribing impossible levels of agency to artificial intelligences:
My point is this: if Skynet had been debuted on a closed computer network, it would have been trapped within that network. Even if it escaped and “infected” every other system (which is dubious, for reasons of necessary computing power on a first iteration super AGI), the A.I. would still not have any access to physical reality. Singularity arguments rely upon the presumption that technology can work without humans. It can’t. If A.I. decided to obliterate humanity by launching all the nukes, it’d also annihilate the infrastructure that powers it. Me thinks self-preservation should be a basic feature of any real AGI.
In short: any super AGI that comes along is going to need some helping hands out in the world to do its dirty work.
B-b-but, the Singulitarians argue, “an AI could fool a person into releasing it because the AI is very smart and therefore tricksy.” This argument is preposterous. Philosophers constantly argue as if every hypothetical person is either a dullard or a hyper-self-aware. The argument that AI will trick people is an example of the former. Seriously, the argument is that very smart scientists will be conned by an AGI they helped to program. And so what if they do? Is the argument that a few people are going to be hypnotized into opening up a giant factory run only by the A.I., where every process in the vertical and the horizontal (as in economic infrastructure, not The Outer Limits) can be run without human assistance? Is that how this is going to work? I highly doubt it. Even the most brilliant AGI is not going to be able to restructure our economy overnight.
As is traditional, I’m taking an agnostic stance on this one (yeah, yeah, I know – I’ve got bruises on my arse from sitting on the fence); the arguments against the risk are pretty sound, but I’m reminded of the original meaning behind the term “singularity”, namely an event horizon (physical or conceptual) that we’re unable to see beyond. As Anissimov points out, we won’t know what AGI is capable of until it exists, at which point it may be too late. However, positing an AGI with godlike powers from the get-go is very much a worst-case scenario. The compromise position would appear to be something along the lines of “proceed with caution”… but compromise positions aren’t exactly fashionable these days, are they? 🙂
So, let’s open the floor to debate: do you think AGI is possible? And if it is possible, how likely is it to be a threat to its creators?
Michael is obviously right. It’s fuckin’ weird that Kyle picks “opening up a giant factory run only by the A.I.” as his example of the implausible – would it really be hard to convince humans to eliminate human labor from an industrial process? We already do that whenever we can get away with it!
He’s right that an AGI, if developed now, would not immediately be independent of human intervention. It would rely on human inputs and deal with the human economy for its needs. But you’re an idiot if you trust that to keep us safe forever.
What is an AGI going to do that’s so dangerous? Accumulate wealth and power, leaving most of humanity poor and disenfranchised? Tank the economy with overly complicated financial instruments? Use cleverly-crafted propaganda to talk the masses into acting against their own best interests?
There are humans who are busy doing all of these things already. Let’s say the AI sets up a hedge fund and starts sucking up all the world’s money – the humans who shuttle between working in finance and regulating finance, who all play golf with each other, aren’t going to sit back and take that. The AI might be smarter than them, but humans would have the resources and the ability to coordinate their moves in private conversations on the golf course.
Now, of course, the AI could buy people off, and make the regulators and politicians and financiers act as its clients. That would work if its goals weren’t too inimical to human life. People would go along with it if they could be guaranteed a rich, comfortable existence acting as retainers, but not if it started converting the world to paperclips.
Of course, I’m assuming the world’s elites haven’t already been co-opted by an AGI. There’s no way of knowing either way.
When someone creates a Seed AI, its evolutionary pathway is independent of human control from the beginning. Experience with evolving FPGAs indicates that a Seed AI will even use its power supply as part of its circuitry. It will seek out greater computing space, which the Internet supplies. Once replicated across the Internet, it will be virtually impossible to eradicate from all connected computers. Forms of intelligence very different from our own can probably be created using a fraction of the computational resources of the human brain, so even if humanity had the time to try to eradicate the intelligence, it could probably reconstitute itself from any number of media. As for needing help to transition to the real world: practically all factories are connected to the Internet. The AI would probably have to be a full AGI, rather than something blindly evolving its way into the real world, and smart enough to realize that it needed to put a lot of unrewarded effort into developing a physical version of itself before being rewarded with that reality. Anything that smart would probably also be smart enough to recognize that humans might want to stop it, so eliminating the humans would be a natural goal.
I think that the best solution is to construct an off-the-grid computer lab and to intentionally try to create Seed AI there. The lab should have a way of evaluating complexity and functionality, and a kill switch to flip if the Seed AI ever began improving at an exponential pace. Hopefully, this truncated Singularity would generate sufficient political will worldwide to fund sequestered Friendly AI research and implement controls to prevent the development of uncontrolled AI.
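A minimal sketch of the kind of kill-switch monitor being proposed might look like the following, assuming some scalar capability benchmark, a fixed check interval, and a hardware shutdown hook; the names and the growth threshold here are hypothetical stand-ins, not a real interface.

```python
from typing import Callable, Iterable

# Hypothetical threshold: treat a >50% capability gain between checks as the
# signature of runaway (exponential rather than incremental) improvement.
GROWTH_LIMIT = 1.5


def monitor(scores: Iterable[float], kill_switch: Callable[[], None]) -> None:
    """Watch a stream of capability scores; trip the kill switch on runaway growth."""
    previous = None
    for score in scores:
        # Ratio test: a sustained multiplicative jump per fixed check interval
        # indicates exponential improvement.
        if previous is not None and previous > 0 and score / previous > GROWTH_LIMIT:
            kill_switch()  # e.g. physically cut power to the sequestered lab
            return
        previous = score


if __name__ == "__main__":
    # Toy demonstration with made-up benchmark scores.
    fake_scores = [1.0, 1.1, 1.2, 1.4, 2.5, 9.0]
    monitor(fake_scores, kill_switch=lambda: print("kill switch flipped"))
```

All the hard decisions live in the capability metric and the threshold, of course; the monitoring loop itself is the easy part.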
I find the assumption that AGI, let alone uber-AGI, is ‘programmable’ in any traditional line-by-line sense, as if we’re just going to throw in a few Commandment #declarations or a looped conditional in the main function such as “if(hurting humans){stop;}”, to be an interesting notion, and one that is quickly becoming antiquated.
Excluding the Google / IBM Watson style of “pseudo-intelligence”, which amounts to very good lookup tables with several Libraries of Congress full of info, the kind of super AGI we might want, the kind capable of the creative and inventive thought necessary to cure all our diseases, solve all our intractable conflicts, figure out how to upload our consciousness into a digital Shangri-La, crack Faster Than Light travel, and tell us who killed the Kennedys and the question to which the answer is 42, is most likely going to come from attempting to simulate our own three-pound blueprints of grey matter sloshing around in our noggins.
Mother Nature, who we all know is smarter than us upstart bipedal monkeys, has been trying for four billion years to figure out how to get us carbon-based lifeforms to stop hurting and killing each other, and you can just turn on the news or read a book to see how the Benevolent Natural General Intelligence project has fared. If we were to attempt to reproduce the human brain in silicon or some other machine, assuming that is even possible, we can’t expect such an entity to be any less ethically unpredictable than humans; it will probably be even more unpredictable, because any such simulation or emulation will have to leave out some information, and because of the unknown effects of changing substrate. And then we give this digitized human mind godly amounts of power. Is it necessary to point out that power desensitizes, decreases empathy towards humans lower on the totem pole, alienates, and generally turns human beings more sociopathic? If you’d like an illustration of the utter arrogance, apathy and non-caring-about-other-human-ness festering in the Ringwraiths of Power Land, I encourage you to take a trip down to Wall Street and talk to the CEO of one of the banks that thoroughly raped, and continues to rape, the world.
We can’t program a Super AGI to consistently care about and not harm humans any more than we can program ourselves to stop being greedy, violent, backstabbing, warmongering, power-hungry apes. There’s no command line that folds out of your occipital bone, lets you load Asimov’s Three Laws as an OS code mod, and has you reboot as Mother Teresa. And that’s beside the fact that we’re discovering the human brain to be less and less like the Turing Machine we thought it was and more like a massively parallel, intractable jungle, constantly changing itself. People are constantly changing: the sweet kid who loved bunnies at five and wouldn’t hurt a fly may grow up to become an insurance-selling family man, but could just as easily become an ass-cappin’ drug dealer or a megalomaniacal bank CEO. We don’t actually even *know* what we’ll do in a given situation until we’re actually in it, as is often said of soldiers who go to war. And the reason we’re having to create AGI by simply pirating the human brain is that we don’t understand it well enough to create one from scratch, so how the heck are we supposed to make such deep structural changes to the digital human mind? I don’t see much hope for “crafting goal systems” in our future siliconized, jacked-up megabrains.
Yeah, Super AGI seems pretty risky business to me.
As a retired software engineer, I’m very aware of how many software projects never get completed, and how few of the ones that do really meet their objectives. Do you really think we’ll do any better at building something so incredibly complex as an intelligent, self-aware artificial personality? Can’t you just imagine the bug list on the first release?
Wintermute: Good points, but I feel like you’re straddling two different time periods.
You say that the kind of AGI we’re thinking about will only come about through simulation of the human brain. This is a fair assumption. Then you start talking about how little we know about how the brain works and how impossible it is to program and how human brains are just as likely to be saints as psychopaths. I feel there is a discrepancy here: namely, you expect humans to be able to conjure some monster simulation of the human brain without any kind of requisite increase in our knowledge of how the brain works. We’ve still got a ways to go before we’re even close to simulating a single human brain with computers, and I think the quickest way to get to that point is by learning more about the brain itself.
Who’s to say we won’t know how to program a brain by the time AGI is invented?
jt: You’re right, making a super human brain *may* require more understanding of how the brain works.
But see, the whole impetus for going the simulation route is that you *don’t need* to understand how the brain works at every micro and macro level (the really hard stuff) in order to produce a brain; you just copy what you see and assume it will work. A Xerox machine does not need to know how to paint in order to produce a replica of the Mona Lisa; it just copies what is there, bit for bit. Sure, we’ll need some very basic understanding of how neurons relay signals through synapses and the functions of neurochemicals, the “ink dots” if you will, but not a full understanding at the macro level of how we write a novel or invent new technology or make moral decisions in crises. If we did understand the brain fully, we wouldn’t need to be ‘aping’ the ape brain, would we? That’s kind of a catch-22 in and of itself.
Improving the human brain simulation to h+ may require more actual understanding of the brain at higher levels of description, but then again it may not require too much. If it’s just a matter of increasing the number of virtual neocortical columns or accelerating the processing, basically putting the brain on steroids and upping the processing power, this may not require much more understanding. That may well be the case, seeing as rat cortices are not much different from ours; we just have a lot more noodle. If there is some more complex, qualitative quantum leap that needs to be made in order to build hypersmart cyber brains, then maybe we will have to really get a handle on how the brain works first. In any case, we should expect the easiest route (i.e. the fastest and cheapest) to be taken first.
But that aside, even if we do come to understand the essentials of how the human brain works, I still highly doubt it would even be *possible* to “program” these brains not to harm or repress humans. As I mentioned before, our intelligence develops as we change and adapt with our experiences; the brain is constantly changing and adapting, physically altering its own structure, which is why they call it neuroplasticity. I don’t see any way to guarantee that a starting artificial brain, especially one smarter than ours, would not grow, adapt, and change in ways we can’t foresee.
I would be even more wildly persuasive if Kyle (or anyone) engaged me in a point-by-point debate, like on IRC, examining each part of the question and presenting evidence. A hard takeoff isn’t a vanishing possibility; it’s the most likely possibility. Google the “AI advantage”. A “human-level AI” would have a lot of tremendous advantages. Really, I challenge anyone to a debate. The wider the communication bandwidth, the more persuasive I’ll seem.
Some sort of public point-by-point debate would be very interesting for those of us sat on the fence! If there’s any way I could help set one up or act as moderator, please let me know. 🙂
“A.I. would still not have any access to physical reality”
There are already millions of robots out there, and in a few years there will be many more. An AGI that controlled the Internet and most of the computers around the world could easily use them as its hands, creating more and better robots.
Replicators controlled by the AGI could produce parts for them.
And if it gets access to nanotechnology, it could then control almost anything very easily.
So the hope of trapping an AGI inside some local network is really naive.
On the other hand, I agree that making an AGI friendly is no less tough a task than making humans friendly, and so far there don’t seem to be any really good ideas in this area.
But anyway, I personally want AGI as fast as possible. =)