Here is a fine exploration of the differences and similarities in how Philip K. Dick and William Gibson use artificial intelligences in their writing:
The Turing Police, whose purpose is to prevent AIs from developing too far, mirror the bounty hunters in Androids: the sole purpose of each is to control and destroy rogue intelligences, although the two novels show their roles from very different perspectives. In Neuromancer, Turing are genuinely afraid of AIs: “You have no care for your species,” one Turing agent says to Case, “for thousands of years men dreamed of pacts with demons.”
Both Do Androids Dream of Electric Sheep? and Neuromancer portray artificial intelligences as lacking in empathy, but in different ways and for different reasons.
But would a human-equivalent AI necessarily lack empathy? Are humans as empathetic as we’d like to believe?
Humans are not necessarily empathetic, but they are relatively bounded in their capabilities. Gibson’s early AIs were much more godlike than Dick’s: left unchecked, they could run worlds. Or rather, they were godlike in a monotheistic sense; perhaps Dick’s androids are more like the Greek gods, wreaking petty violence with relative impunity but never controlling the cosmos.
The danger posed by amoral, powerful AI is of course Eliezer Yudkowsky’s main cause célèbre. The project of instituting “empathetic” (more importantly, moral) AI is his “Friendly AI” (FAI) project. He emphasizes that this is a very difficult problem: to make an AI that not only is empathetic but remains empathetic, prevents non-empathetic AI, and keeps its behavior within the incredibly narrow band acceptable to human values is a daunting (if seemingly necessary) task.
That Wintermute-Neuromancer has no real interest in ruling, and abandons his world like an absent god, leaving godlings to meddle in his place, is a wonderful piece of pop theology, but not much of a lesson for the future, I think.
Not necessarily. Heinlein, among others, has posited that an AI designed to serve humanity will necessarily be very human-like, the better to work with us: an intelligence we can’t communicate with wouldn’t be very useful, would it?