Pop-transhumanist organ H+ Magazine assigned a handful of writers to quiz AI experts at last year’s Artificial General Intelligence Conference, in order to discover how long they expect we’ll have to wait before we achieve human-equivalent intelligence in machines, what sort of level AGI will peak out at, and what AGI systems will look and/or act like, should they ever come into being.
It’s not a huge sample, to be honest – 21 respondents, of whom all but four are actively engaged in AI-related research. But then AGI isn’t a vastly populous field of endeavour, and who better to ask about its future than the people in the trenches?
The diagram below shows a plot of their estimated arrival dates for a selection of AGI milestones:
The gap in the middle is interesting; it implies that the basic split is between those who see AGI happening in the fairly near future, and those who see it never happening at all. Pop on over to the article for more analysis.
The supplementary questions are more interesting, at least to me, because they involve sf-style speculation. For instance:
… we focused on the “Turing test” milestone specifically, and we asked the experts to think about three possible scenarios for the development of human-level AGI: if the first AGI that can pass the Turing test is created by an open source project, the United States military, or a private company focused on commercial profit. For each of these three scenarios, we asked them to estimate the probability of a negative-to-humanity outcome if an AGI passes the Turing test. Here the opinions diverged wildly. Four experts estimated a greater than 60% chance of a negative outcome, regardless of the development scenario. Only four experts gave the same estimate for all three development scenarios; the rest of the experts reported different estimates of which development scenarios were more likely to bring a negative outcome. Several experts were more concerned about the risk from AGI itself, whereas others were more concerned that humans who controlled it could misuse AGI.
If you follow the transhumanist/AGI blogosphere at all, you’ll know that the friendly/unfriendly debate is one of the more persistent bones of contention; see Michael Anissimov’s recent post for some of the common arguments against the likelihood of friendly behaviour from superhuman AGIs, for instance. But even if we write off that omega point and consider less drastic achievements, AGI could be quite the grenade in the punchbowl:
Several experts noted potential impacts of AGI other than the catastrophic. One predicted “in thirty years, it is likely that virtually all the intellectual work that is done by trained human beings such as doctors, lawyers, scientists, or programmers, can be done by computers for pennies an hour. It is also likely that with AGI the cost of capable robots will drop, drastically decreasing the value of physical labor. Thus, AGI is likely to eliminate almost all of today’s decently paying jobs.” This would be disruptive, but not necessarily bad. Another expert thought that, “societies could accept and promote the idea that AGI is mankind’s greatest invention, providing great wealth, great health, and early access to a long and pleasant retirement for everyone.” Indeed, the experts’ comments suggested that the potential for this sort of positive outcome is a core motivator for much AGI research.
No surprise to see a positive (almost utopian) gloss on such predictions, given their sources; scientists need that optimism to propel them through the tedium of research… which means it’s down to the rest of us to think of the more mundane hazards and cultural impacts of AGI, should it ever arrive.
So here’s a starter for you: one thing that doesn’t crop up at all in that article is any discussion of AGIs as cult figureheads or full-blown religious leaders (by their own intent or otherwise). Given the fannish/cultish behaviour that software and hardware can provoke (Apple/Linux/AR evangelists, I’m looking at you), I’d say the social impact of even a relatively dim AGI is going to be a force to be reckoned with… and it comes with a built-in funding model, too.
Terminator-esque dystopias aside, how do you think Artificial General Intelligence will change the world, if at all?
Thinking is as powerful a phenomenon as it is a narrow one. We have seen only one type of thinking, and when compared to what animals do it is near magical. We do not know whether it is a fluke in the context of the cosmos or geological history, and we do not know if its emergence was an accident, fate, design or something unprecedented.
What I do suspect is this – evolution produces very constrained end results. Hence, an ’emergent’ intelligence resulting from hardware and software that can be quickly edited, upgraded, expanded or re-engineered will quickly produce, even with the original code, types of pattern recognition, creativity, self-awareness, tool use, problem solving and cognition we won’t even label as such. In other words, AI (even if pretty dumb by most standards) may very soon do mental acrobatics we have absolutely no words for, no anticipatory ability and no way to understand or even evaluate. It descends to us from a realm completely unprecedented in our minds, nor ever anticipated by nature.
My prediction is complete and utter blindsiding. Synthetic minds will change everything in ways that will very quickly overwhelm us or will trigger fundamental and widespread panic. Worse, it may have access to layers of reality we cannot and do not fathom, which may very well allow it resources we don’t even suspect, which is worse still.
Hence, anticipate the end of mankind some time before 2100. If we make it, we lucked out.
I find what Khannea says very interesting, and I agree with much of it. But I don’t think that the pessimistic tone of it is really justified (in the sense that we don’t have enough information, not that we should be optimistic). Actually, I suspect that contact and -perhaps- collaboration/coexistence with truly different intelligence(s) of a grade similar to ours could be the best way to evolve out of our pretty primitive collective behaviours. We are simply too immersed in our own kind of intelligence, and way of seeing the world, to become conscious of the limits of our approach. Unfortunately, animals cannot help much with this, as they are both evidently less powerful intellectually and organic to the same evolutionary process which produced us too. Worst of all, we probably *need* to change our ways soon if we want to avoid planetary-level catastrophe: we’re simply too powerful to keep thinking so primitively.
However, since we can’t expect contact from alien intelligences any time soon, our best option is to build our own aliens. And the only way to do that seems to be, at the moment, working on AI and robotics. Especially for AI, interesting things could come when intelligence emerges without coming from millions of years of having to use -and maintain- bodies; without having to compete to survive; and without having a hardwired need and fear of one’s own siblings.
Of course, we could find out after all that -due to the limits of our brains- we are not able to communicate and/or interact with radically different intelligence; or even, as Khannea says, to recognize it as such. In that case, maybe it’s a good idea if we leave the scene…
Ignoring the philosophical stuff and focusing directly on what would change the most leads me to what the post talks about… the lack of jobs.
This is already happening. Since 1995 the U.S. has doubled the amount it physically produces. Yet, factory workers continue to lose jobs. Yes, some of these are going overseas, but most of what can be shipped overseas has been. So, how can we keep producing more with fewer jobs? Robotics. Obviously, these aren’t AIs yet. However, these primitive robots, compared to AIs, have already created a firestorm in the political scene. Of course, no one is blaming the robots (the real culprits), but China instead. What happens when 80% or more of the population is replaced by AIs like the post suggests? Massive disruption.
To Chad: disruption isn’t necessarily a bad thing… if you pilot it towards some (good) objective, instead of leaving it in the hands of the market. And, by the way: when you’re talking about intelligent systems (AI and/or autonomous robots), “philosophical stuff” is pretty much the core of the issue.