The dangerous dream of artificial intelligence

Robots will inherit the Earth!

There are plenty of artificial intelligence skeptics out there, but few of them would go so far as to say that AI is a dangerous dream leading us down the road to dystopia. One such dissenting voice is former AI evangelist and robotics boffin Noel Sharkey, who pops up at New Scientist to explain his viewpoint:

It is my contention that AI, and particularly robotics, exploits natural human zoomorphism. We want robots to appear like humans or animals, and this is assisted by cultural myths about AI and a willing suspension of disbelief. The old automata makers, going back as far as Hero of Alexandria, who made the first programmable robot in AD 60, saw their work as part of natural magic – the use of trick and illusion to make us believe their machines were alive. Modern robotics preserves this tradition with machines that can recognise emotion and manipulate silicone faces to show empathy. There are AI language programs that search databases to find conversationally appropriate sentences. If AI workers would accept the trickster role and be honest about it, we might progress a lot quicker.

NS: And you believe that there are dangers if we fool ourselves into believing the AI myth…

It is likely to accelerate our progress towards a dystopian world in which wars, policing and care of the vulnerable are carried out by technological artefacts that have no possibility of empathy, compassion or understanding.
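Sharkey's jab about "AI language programs that search databases to find conversationally appropriate sentences" describes retrieval-based chat. Here's a minimal sketch in Python of how shallow that trick can be; the three-entry "database" and all names are invented for illustration:

```python
# A toy retrieval chatbot: pick the canned reply whose trigger words best
# overlap the user's input. The tiny "database" below is invented for
# illustration; real systems index far larger corpora, but the trick is the same.
RESPONSES = {
    ("hello", "hi", "hey"): "Hello! How are you today?",
    ("sad", "unhappy", "upset"): "I'm sorry to hear that. Tell me more.",
    ("robot", "machine", "ai"): "I'm just a program, but I do try my best.",
}

def reply(utterance: str) -> str:
    words = set(utterance.lower().split())
    # Score each entry by keyword overlap; nothing resembling
    # understanding is involved, which is exactly Sharkey's point.
    triggers, text = max(RESPONSES.items(),
                         key=lambda kv: len(words & set(kv[0])))
    return text if words & set(triggers) else "Interesting. Go on..."

print(reply("I feel sad and upset today"))  # -> "I'm sorry to hear that. ..."
```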

Now that’s some proper science-fictional thinking… although I’m more inclined to a middle ground wherein AI – should we ever achieve it, of course – comes with benefits as well as drawbacks. As always, it’s down to us to determine which way the double-edged blade of technology cuts. [image by frumbert]

2 thoughts on “The dangerous dream of artificial intelligence”

  1. True AI would not be a top-down trick, but rather a bottom-up development, almost certainly achieved through simulated evolution. The evolutionary criteria would determine the end result indirectly.

    Encourage selection towards co-operation, information exchange and pattern recognition, and there’s no reason at all why the AI you get at the end wouldn’t be as empathic and as caring as humans (with many human flaws too); a toy sketch of this idea appears after the comments.
    The bigger question in my mind is not the idea of machines caring less than us, but machines sharing our own lack of acceptance of things different to us. They might care for their own kind, but refuse to accept us as being as alive as them; just as we will do the same unto them.

    The other issue is the ethics of developing AI: if we become the gods of a virtual evolution… aren’t we going to be responsible for millions of deaths? Somewhere between the starting point of things clearly not alive and the Turing-test-passing end result lies an ethically questionable middle ground.

  2. That’s funny. I just wrote this post on the OCE list (abridged):

    The whole topic of Artificial Intelligence has become an exercise in belief-based reasoning, and for some reason that confuses me … even some allegedly scientifically trained ‘humanists’ will argue that AI simply isn’t possible, or that it might be possible at soonest centuries from now.

    I personally work from the assumption that, with current growth rates in computing, some sort of “robustly versatile abstract problem solving device” is thinkable around 2020, likely no later than 2030 and next to unavoidable by 2040. And I don’t mean a conveniently anthropomorphic device either – we’ll have no tame C3POs ambling about the lab being polite and servile. But we are not likely to have predatory great-white-shark-disposition terminators either. My motto when it comes to H+ is: both dystopias and utopias are human power-projection fantasies. Hence I expect xenotopias.

    What future we will inherit depends on who makes the device work. If it’s the Pentagon and people like that creating the AI, the result will be contaminated both by military design parameters and by (what I term) blind-spot psychological belief systems in the creators. If a genius computing researcher at DARPA with a strong neoconservative and/or Christian world view contributes significantly to AI, I am all but certain his assumptions will be reflected in very surreal ways in the end result, and unintentionally so.

    But the scary bit is the inescapable realization that, in a very big room of possible “cognitions”, all human and human-like systems of problem solving are but a very small adaptive subset somewhere in the left corner of the room. If someone develops something that coagulates (evolves?) through purposeful iterative engineering, competition, hashing together commercial parts (remember, the T800 runs on C), or using algorithms engineered by the Singaporeans (with possible bugs and additional non-western memetic artefacts), all evolved probably in a virtual-reality environment, you are likely to end up with solutions to abstract problems that are maybe remotely equivalent to “imagination” or “self-awareness” or “wisdom” or “sapience” or “consciousness” … or “intelligence” … but can just as easily be called by another name.

    These devices may come into being dealing with the world in models that produce results we can predict … somewhat … for a while … but fairly soon, if they start generating their own solutions to problems we cannot conceive of, our plastic modelling of what in fact they do, and of what qualia they assemble to mark aspects of reality, may in effect completely escape our understanding or appreciation. These things may see patterns we have no clue of. Maybe they will see types of causality that evolved beings have been disevolved from witnessing?

    SINCE I have a personal (and nontransferable) conviction that these things will at least start bubbling in the cauldrons of well-funded projects around the world in MY lifetime, I have a stake in anticipating them, and in shoving people “left and right” while emphasizing that, at least in my understanding, humans are riddled with rather serious cognitive blind spots and may very well be incapable of making good estimates of what these things will do as soon as they become “marketable” in some way.

    In other words – AI may end up doing magic. I am not talking Harry Potter wands-and-wizards magic – I am talking slackjawed observers muttering “that… that… that’s im… impossible!”

    Evolution left so many ‘mechanical’ features out of animals that then emerged in only two centuries of industrial revolution; we can be damn sure that a cognitive revolution will also produce features, or “states of problem analysis”, that completely elude us. No animals have wheels. Cars move in ways no animals can. Likewise, engineered minds will very likely have the same quality of transcending base biology. And since we have no realm of comparison, we should tread very, very cautiously.

    And yes, these creations don’t even need to be very “smart” in the traditional human sense. But that doesn’t matter – the ability of an attack dog to sniff out and chase a human for miles is magical in itself. The dog isn’t smart, and can be fooled, but a K9 trainer and his dog allow for the combination of human evolved smarts and algorithmic or holistic or whateveristic machine-based smarts.

    My interest within OCE is to emphasize the lateral and unexpected. I think I am good at that. The ideal end result will be to make sure society is aware of the blind spots it has (and that is a hard sell, telling people they may be stupid when in some form of competition with these creations). In fact, I am very, very much centered on taking competition out of the way entirely. We should never, ever end up polarized in an us-versus-them struggle, because SOMEONE will lose those struggles within years.

    Luddism is NO option in an AI-bootstrapped postindustrial revolution. You would be better off diving into a woodchipper, legs first.

    Never should we let ourselves be seduced (“by people who know best”) into Butlerian jihads – we can’t afford AI Luddism (or unaccountable AI black markets!!). My pet peeve is corporate AI. We will be in a world of hurt when select elites, of whatever sweet fragrance, have access to these things and “the unwashed masses” do not. The road to hell will be paved with empowering anyone (including me, or the nicest transhumanists in the world) with exclusive AI tools. My interest is in breaking these technologies open and stuffing them into accountability, publicity, page-wide ads in the NYT, on TV and, ideally, into the garage of every nutter hacker or gangster. AI *must* be a team effort by all of humanity. Those who renege are likely to unwittingly choose a slow, arduous death.

    SURE, open source AI will be bad and it will cause a LOT of trouble. But personally – AI in the hands of a select enlightened priesthood will end in genocide and gigadeath. I have a serious distrust of emergent AI itself, but the combination of genetically racist, territorial and feudal primate (human) brains with AI tools is what should ring alarm bells all over the gene pool.
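The first comment’s idea – evolve the AI bottom-up and let biased selection criteria determine the outcome indirectly – can be caricatured in a few lines of Python. Everything below is a toy with invented names and payoffs, not a claim about how real AI development would work:

```python
# A toy sketch (invented parameters throughout) of the bottom-up evolutionary
# approach from comment 1: each genome is just a cooperation propensity in
# [0, 1], and the experimenter's selection criterion is biased towards
# cooperation, indirectly determining the end result.
import random

POP_SIZE, GENERATIONS = 60, 300
BENEFIT, COST = 3.0, 1.0   # donation-game payoffs
COOP_BONUS = 1.5           # the designer's explicit bias towards cooperators
MUTATION_STD = 0.05

def fitness(p, mean_coop):
    # Donation game against the population average: donating costs COST,
    # receiving donations yields BENEFIT. On its own, defection would win;
    # the COOP_BONUS term is the "evolutionary criterion" steering the result.
    return BENEFIT * mean_coop - COST * p + COOP_BONUS * p

def evolve():
    pop = [random.random() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        mean_coop = sum(pop) / len(pop)
        ranked = sorted(pop, key=lambda p: fitness(p, mean_coop), reverse=True)
        parents = ranked[: POP_SIZE // 2]  # truncation selection: keep top half
        pop = [min(1.0, max(0.0, parent + random.gauss(0.0, MUTATION_STD)))
               for parent in parents for _ in range(2)]  # two mutant children each
    return sum(pop) / len(pop)

if __name__ == "__main__":
    print(f"mean cooperation propensity after evolution: {evolve():.2f}")
```

Set COOP_BONUS below COST and the same machinery evolves defectors instead, which is the commenter’s point: the selection criteria, not the designer’s intentions, determine what comes out the other end.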

Comments are closed.