Bruce Sterling sideswipes AI evangelism

Bruce Sterling’s keynote speech at the Webstock conference in New Zealand last month contained the usual high concentration of non-fic eyeball kicks, and is well worth a read if you’re at all interested in the culture of the web, modern economics and the near future.

As usual, there are loads of provocative little asides nestled in the narrative, and I was particularly taken by this backhander to the face of artificial intelligence advocates:

I really think it’s the original sin of geekdom, a kind of geek thought-crime, to think that just because you yourself can think algorithmically, and impose some of that on a machine, that this is “intelligence.” That is not intelligence. That is rules-based machine behavior. It’s code being executed. It’s a powerful thing, it’s a beautiful thing, but to call that “intelligence” is dehumanizing. You should stop that. It does not make you look high-tech, advanced, and cool. It makes you look delusionary.

There’s something sad and pathetic about it, like a lonely old woman whose only friends are her cats. “I had to leave my 14 million dollars to Fluffy because he loves me more than all those poor kids down at the hospital.”

This stuff we call “collective intelligence” has tremendous potential, but it’s not our friend — any more than the invisible hand of the narcotics market is our friend.

Zing! I think we can be certain that Sterling doesn’t subscribe to any of the three schools of Singularitarianism.

7 thoughts on “Bruce Sterling sideswipes AI evangelism”

  1. This is just generic and ill-informed. A dismissive “that’s not intelligence” without any indication of what he thinks intelligence is? Some commentators claim to think that human intelligence is not “algorithmic”, but those ideas are not winning by any means; in particular the “quantum consciousness” idea has suffered serious setbacks.

    To just pontificate that it is “dehumanizing” to think of intelligence in terms of algorithms is stupid. We might like to think that what we do is not “rules-based machine behavior,” but thinking doesn’t make it so!

  2. @Dave:

    The problem is that the behaviour of yer typical human being is qualitatively different from the behaviour of yer typical digital computer.

    Every age has thought of the brain in terms of the technology under development at the time, so Victorians thought of the brain in terms of mills and people in the 1940s thought in terms of telephone exchanges…

    And in the 1980s people thought in terms of computers.

    What I think Sterling means by “dehumanizing” is that working under the assumption that human beings are algorithmic and perfectly logical removes a crucial component of human nature (I don’t mean anything metaphysical; I mean very high-level “irrational” behaviour like faith or love).

    But what we do isn’t like rules-based machine behaviour. Our behaviour is qualitatively different to digital computers or algorithms.

  3. Bruce Sterling might be referring only to current desktop computers, with their serial architecture and lackluster programming. In this regard I would agree, although I would also point out that the trends in computing all point in the direction of more brain-like computers, so this state of affairs may not last that long.

    However, if he is (as I guess) making a general swipe against the conceptual possibility of creating human-level artificial intelligence (based on computation), then the burden of proof is on him.

    Why? Here is my argument:

    All the available evidence indicates that the universe is Turing computable (possibly with a little randomness added). If anyone could prove, or even find any evidence at all, that some part of the universe (such as the human mind) was not Turing computable, that would be a huge revolution in physics, more significant than any advance since Newton.

    And that’s the problem with any contention that AI is not possible. A scientific demonstration that AI is impossible would amount to exactly the kind of new physics I just mentioned. Without such a demonstration, you would be left arguing that something could pass every test you can devise for intelligence and yet still not be regarded by you as intelligent (likewise conscious, etc.). This runs into the standard solipsistic problems. So the idea that AI is impossible (rather than just very, very difficult) is mere wishful speculation, and will remain so until some actual evidence to the contrary is presented.

    In summary, although there is a common viewpoint that AI is impossible, or that the question of its possibility is in doubt, this is not the case. Indeed, if AI were impossible, this would be immensely surprising, and would certainly earn a Nobel Prize and a place in history beside Einstein, Darwin and Newton for the person who proved it.

    Of course, this doesn’t address how hard it would be to create an artificial intelligence (I think it would be extremely hard, but within the capacity of 21st-century technologists).

  4. @Barnaby Dawson:

    I agree that it’s probably possible to create a synthetic, human-equivalent general intelligence. I just disagree with the idea that human thought is the same as rules-based digital computation.

    I found this article written by Sterling hissel’ way back in 2004 on the subject of AI and the technological singularity:

    A singularity looks great in special f/x, but is there any substance in the idea? When Vinge first posed the problem, he was concerned that the imminent eruption in artificial intelligence would lead to Übermenschen of unfathomable mental agility. More than a decade later, we still can’t say with any precision what intelligence is, much less how to build it. If you fail to define your terms, it is easy to divide by zero and predict infinite exponential evolution. Sure, computers might someday awaken into something resembling human consciousness, but we have no metrics to describe that awakening and thus no objective way to recognize it if it happens. How would you test a claim like that?

  5. I agree with Dawson’s simulationist argument, though I think he’s overstating the case somewhat: it is far from clear that the universe is Turing-computable. However, the weight of the evidence does suggest that the important processes of the brain can be discretely simulated without loss (of course there are prominent dissenters, see my previous comment, but they’re doing poorly right now).

    Tom, if you also agree, then I think your objection disappears. If love can be accurately modeled by a system of rules, then it is emphatically not more than “rules-based machine behavior.” The machine in question just happens to be made of meat, and the rules just happen to be complicated and obscure.

    I find Sterling’s continued harangues on this subject embarrassing. Sure, a lot of our human modes of thought don’t look or feel logical and rule-based. We’ve evolved a system of understanding behavior (call it animism) that understands complex systems as flexible ‘minds’ rather than deterministic objects — that’s why old religions have gods embodying rocks, trees, weather, etc. But that’s a hack, a limitation of our perceptions that, given the evidence, we should strive to overcome. Sterling instead chooses to celebrate the limits of his perceptions, like a religionist talking about how wonderful it is that he’s compelled to believe things with no rational basis, it’s “faith.” How sad.

  6. @Dave

    If love can be accurately modeled by a system of rules, then it is emphatically not more than “rules-based machine behavior.” The machine in question just happens to be made of meat, and the rules just happen to be complicated and obscure.

    Fair enough. Love can be modelled, but I believe that the rules governing love are impossible to compute using a single digital computer program.

    I’d say that love is an emergent property of a complex adaptive system. It could be possible to model love using a large number of digital computers, each maybe simulating the behaviour of one cell in the brain (for example), but I dispute that it will ever be possible to create an algorithmic, logical formula for love.

    So love can be modelled, just not in a single digital computer program, but rather as an emergent property of the interaction of many millions of simpler agents.

    What Sterling means when he says:

    “Sure, computers might someday awaken into something resembling human consciousness, but we have no metrics to describe that awakening and thus no objective way to recognize it if it happens. How would you test a claim like that?”

    is that we cannot at this time see any obvious way of clearly defining intelligence, love, or any other properties of human behaviour. The best we can come up with is “love-like behaviour.”

    Although human thought may be deterministic (a profound philosophical question in its own right) and is ultimately composed of (relatively) simple structures like molecules and cells, it does not necessarily follow that there is an algorithm for intelligence.

  7. I agree with the following, though I am more of an optimist about it: “we cannot at this time see any obvious way of clearly defining intelligence, love, or [many] other properties of human behaviour.”

    However, I’m not sure this makes sense:

    So love can be modelled, just not in a single digital computer program, but rather as an emergent property of the interaction of many millions of simpler agents.

    Arbitrarily many simple agents can be (simultaneously, perfectly) modeled by an (idealized) computer, and (except for time / memory constraints) an idealized computer can be (perfectly) modeled by a single digital computer (program). I’m pretty sure this is perfectly uncontroversial among computer scientists! (Is my profession-envy showing?)
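    The point about many agents collapsing into one program can be made concrete with a toy sketch (my example, not from the thread; the agents and rule are invented for illustration). Each “agent” is a cell in a one-dimensional ring that follows a purely local majority rule, yet the whole population is stepped serially by a single ordinary program — and a global pattern (stable contiguous domains) emerges anyway:

    ```python
    # Hypothetical sketch: many simple agents, each obeying only a local rule,
    # simulated serially inside one digital computer program. No agent sees the
    # whole ring, but coherent global "domains" emerge from their interaction.

    def step(cells):
        """Each cell adopts the majority state of itself and its two neighbours."""
        n = len(cells)
        return [
            1 if cells[(i - 1) % n] + cells[i] + cells[(i + 1) % n] >= 2 else 0
            for i in range(n)
        ]

    def simulate(cells, steps):
        """Run the whole multi-agent system for a number of steps, one program."""
        for _ in range(steps):
            cells = step(cells)
        return cells

    # A scattered initial state settles into stable contiguous domains.
    initial = [1, 0, 1, 1, 0, 0, 0, 1, 1, 1, 0, 1]
    final = simulate(initial, 10)
    # → [1, 1, 1, 1, 0, 0, 0, 1, 1, 1, 1, 1]
    ```

    Nothing here depends on the agents being simple cells; a serial machine can time-slice arbitrarily many agents of any complexity, which is the uncontroversial computer-science point being made above.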
