Singularity lacking in motivation

MIT neuroengineer Edward Boyden has been speculating about whether the singularity requires the machine equivalent of what humans call “motivation”:

I think that focusing solely on intelligence augmentation as the driver of the future is leaving out a critical part of the analysis–namely, the changes in motivation that might arise as intelligence amplifies. Call it the need for “machine leadership skills” or “machine philosophy”–without it, such a feedback loop might quickly sputter out.

We all know that intelligence, as commonly defined, isn’t enough to impact the world all by itself. The ability to pursue a goal doggedly against obstacles, ignoring the grimness of reality (sometimes even to the point of delusion–i.e., against intelligence), is also important.

This brings us back to another Larry Niven trope. In the Known Space series, the Pak Protector species (sans spoilers) is superintelligent but utterly dedicated to the goal of protecting its young. Protectors are incapable of long-term co-operation because each seeks advantage only for its own gene-line, and so the Pak homeworld is in a state of permanent warfare.

This ties in with artificial intelligence: what good is being superintelligent if you aren’t motivated to do anything, or if you are motivated solely by one specific task? This highlights one of the basic problems with rationality itself: Humean instrumental rationality implies that our intellect is always the slave of the passions, meaning that we use our intelligence to achieve our desires, which are predetermined and beyond our control.

But as economist Chris Dillow points out in this review of the book Animal Spirits, irrational behaviour can be valuable. Artists, inventors, entrepreneurs, and writers may create things with little rational hope of reward but – thankfully for the rest of society – they do it anyway.

And what if any prospective superintelligent AI wakes up and works out that it isn’t worth trying to do anything, ever?

[via Slashdot, from Technology Review]

4 thoughts on “Singularity lacking in motivation”

  1. I love the singularitarians!
    Worrying about motivation when they don’t even have a goal. Worrying that their AGI might one day just sputter out, when they have no idea how to even start the engine… to say nothing of steering the vehicle in the right direction.
    The problem is less that the machine must pursue its goals than that it must have a goal in the first place. So what goal do you have to pursue in order to end up behaving intelligently along the way?
    The answer for humans is gathering food and drink and raising a family in a complex and hostile environment. After about four billion years, something resembling intelligent behavior (from a human perspective) emerged. That was the long way.
    Does anyone have an idea about the short way?
    I, at least, don’t.

  2. I’m willing to bet that exponentially increasing machine intelligence will be something like a far more complex version of our current computers; they are already capable of superhuman mental feats (rendering video or instantly searching a massive database, for instance). It’s not like our current computers sit and do one thing repetitively like a mechanical machine; they perform an almost infinite variety of functions. The motivation for those functions is outsourced to the human brain. I’m not sure that the singularity requires something like “machine motivation”; I think that humans will do just fine supplying the motivation, as we already do. (disclaimer: I am not a singularity expert, but this seems logical to me)

  3. So the argument is/was “we needed the cold war to get to the moon,” and that outside of that context going to the moon and exploring space doesn’t seem all that… useful? productive?
