An old science-fictional argument: to what extent is it correct to characterise the human mind as a digital computer? According to this insightful article [via Charles Stross], many AI researchers have been making an error in their belief that the human mind can be thought of as a computer:
The fact that the mind is a machine just as much as anything else in the universe is a machine tells us nothing interesting about the mind.
If the strong AI project is to be redefined as the task of duplicating the mind at a very low level, it may indeed prove possible—but the result will be something far short of the original goal of AI.
In other news:
A detailed simulation of a small region of a brain built molecule by molecule has been constructed and has recreated experimental results from real brains.
The “Blue Brain” has been put in a virtual body, and observing it gives the first indications of the molecular and neural basis of thought and memory.
Is there a meaningful distinction between the traditional view of a strong AI and a molecular-level simulation of a human mind?
There is no useful distinction if you can simulate a human brain/mind in real time; you've effectively created an AI. This has been discussed in other places I've read.
The difficulty lies in actually building the hardware to do it!
Significant difference in my book: a strong AI should be able to interact, solve problems, and engage in conversation, whereas a simulation (no matter how fast the hardware it runs on) will only be a simulation until we hook it up to interface devices and allow it to build a neural network. Look into what happens to kids who are isolated from sensory stimulation while growing up – it's a trip how much input your brain needs to turn it into the fantastic computing device it really is.
In fact, if you want to construct a sociopathic strong AI, that deprivation might well be the way to do it. Otherwise, run the simulation in real time with inputs and outputs, give it a set of limbs to get around with (for spatial memory development), and then give the thing control over its own clock speed so as to get the much-vaunted speed boost from simulating a brain several factors of ten faster.
Although, unless you could figure out how to multi-thread consciousness, I don't know how good it would be at interacting with the human world while thinking at several times the speed of meat. Given that we seem to assume these days that strong AI will be modeled on the human brain, I don't see how you'd go about handling the difference in cognitive speed between a high-speed simulated brain and slow-speed meatbrains.
Look at the fiction in this field: in Iain M. Banks's Feersum Endjinn, humans go into a torpor when they speed their consciousness up to interact with "the dead" and other assorted simulated personalities. Likewise, simulations can choose to slow themselves down to interact with the outside world.
His book The Algebraist features many species shifting their speed of thought at will, or with chemical or mechanical assistance. A main human character specializes in slowing his metabolism down by a factor of forty or so to communicate with giant balloon-like beings inhabiting a gas giant, creatures far older than any of the Quick species that inhabit the rest of the galaxy.
In both cases the final constraint is the same: consciousness cannot operate at more than one speed simultaneously!
One dramatic counter-example does spring to mind: Banks's Drones and Ship Minds in the Culture books. They are infinitely capable, and yet infinitely alien. They think like humans neither in motivation nor in timescale, and they run everything that humans don't care to take care of themselves – which is about 99% of everything (you have to write yourself these sorts of ecological backdoors to "free lunch" territory if you're going to explore post-scarcity social stuff). As far as I've ever understood from the books, they're little quantum computers, or multidimensional constructs, or something equally bizarre.
My point is simple: we barely understand the mechanisms by which we go from unimprinted grey matter to massively parallel reality-parsing machines, and it is precisely the study of those developmental mechanisms that is driving both AI research and research into human thought. A working simulation of the brain is one thing; a functional AI is a far cry from it.
I imagine it will be highly beneficial to the individual created during AI development to run the simulated brain through the same gamut of inputs that a human child receives during its ontogenesis: heartbeats; the subtle, nigh-uninvestigated relationships between hormonal changes in the mother and their effects on the child's developing brain (a wide-open field for any aspiring human biologist); how the young eye's development affects the development of the visual cortex; the motor cortex's role in echoing things seen *and never experienced*; and so on and so forth.
But if there is one thing I wish you to take away from this brief screed, it is this: there is a huge difference between a cell-level simulation of a brain running at normal speed and something that would pass for intelligent by common standards.