I think that focusing solely on intelligence augmentation as the driver of the future leaves out a critical part of the analysis–namely, the changes in motivation that might arise as intelligence amplifies. Call it the need for “machine leadership skills” or “machine philosophy”–without it, such a feedback loop might quickly sputter out.
We all know that intelligence, as commonly defined, isn’t enough to impact the world all by itself. The ability to pursue a goal doggedly against obstacles, ignoring the grimness of reality (sometimes even to the point of delusion–i.e., in defiance of intelligence), is also important.
This brings us back to another Larry Niven trope. In the Known Space series the Pak Protector species (sans spoilers) is superintelligent, but utterly dedicated to the goal of protecting its young. Protectors are incapable of long-term co-operation because each individual Protector will always seek advantage only for its own gene-line. As a result, the Pak homeworld is in a state of permanent warfare.
This ties in with artificial intelligence: what good is being superintelligent if you aren’t motivated to do anything, or if you are motivated solely by one specific task? This highlights one of the basic problems with rationality itself: Humean instrumental rationality implies that our intellect is always the slave of the passions, meaning that we use our intelligence to achieve our desires, which are predetermined and beyond our control.
But as economist Chris Dillow points out in his review of the book Animal Spirits, irrational behaviour can be valuable. Artists, inventors, entrepreneurs, and writers may create things with little rational hope of reward, but–thankfully for the rest of society–they do it anyway.
And what if it turns out that any prospective superintelligent AIs wake up and work out that it isn’t worth ever trying to do anything, ever?