There’s been a rash of coverage of Dr Markram and the IBM-supported Blue Brain project, one of the experiments designed to move us closer to a silicon simulation of the animal brain. Blue Brain is currently based on a silicon recreation of a slice of rat cortex, and Markram’s team have observed spontaneous, emergent interactions between their artificial neurons, which suggests to them that they’re on the right track… though not everyone is quite so sure.
“We’re building the brain from the bottom up, but in silicon,” says Dr. Markram, the leader of Blue Brain, which is powered by a supercomputer provided by International Business Machines Corp. “We want to understand how the brain learns, how it perceives things, how intelligence emerges.”
Blue Brain is controversial, and its success is far from assured. Christof Koch of the California Institute of Technology, a scientist who studies consciousness, says the Swiss project provides vital data about how part of the brain works. But he says that Dr. Markram’s approach is still missing algorithms, the biological programming that yields higher-level functions.
“You need to have a theory about how a particular circuit in the brain” can trigger complex, higher-order properties, Dr. Koch argues. “You can’t assemble ever larger data fields and shake it and say, ‘Ah, that’s how consciousness emerges.’”
The possibility of simulating consciousness by building a model of the brain is one of those frustrating quandaries that will seemingly only ever be answered by someone actually succeeding at it; the proof is in the pudding. Still, Markram is pretty convinced he’s on the right track, going so far as to announce in his TED talk that he’ll have built a model human brain within the next decade… which is something AI researchers have been saying since the sixties, I believe. I’d love to see it happen, but you’ll forgive me if I don’t hold my breath or place any bets just yet.
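For what it’s worth, here is roughly what “building the brain from the bottom up” looks like in code. To be clear, this is not what Blue Brain actually runs (their models are detailed, ion-channel-level compartmental neurons simulated on an IBM Blue Gene supercomputer, and every parameter and connection below is invented purely for illustration), but a toy leaky integrate-and-fire network gives the flavour of simulating individual neurons and watching network activity emerge:

```python
# Purely illustrative: a tiny leaky integrate-and-fire (LIF) network.
# All numbers and connections here are made up; the point is only to show
# the "bottom-up" idea of simulating individual neurons and letting
# network-level activity emerge.
import numpy as np

rng = np.random.default_rng(0)

N        = 100     # number of model neurons (arbitrary)
dt       = 1.0     # time step, ms
T        = 500     # simulated time, ms
tau      = 20.0    # membrane time constant, ms
v_rest   = -65.0   # resting potential, mV
v_thresh = -50.0   # spike threshold, mV
v_reset  = -65.0   # reset potential after a spike, mV

# Random sparse excitatory coupling (each weight is an instantaneous kick in mV).
W = (rng.random((N, N)) < 0.1) * rng.uniform(0.0, 2.0, (N, N))
np.fill_diagonal(W, 0.0)

v = np.full(N, v_rest)
spike_times = []

for step in range(int(T / dt)):
    # Noisy external drive, standing in for sensory/background input.
    I_ext = rng.normal(1.0, 0.5, N)

    # Leaky integration toward rest, plus the external drive.
    v += dt / tau * (-(v - v_rest)) + I_ext

    fired = v >= v_thresh
    v[fired] = v_reset

    # Spikes instantly nudge the membrane potential of connected neurons.
    v += W[:, fired].sum(axis=1)

    for idx in np.flatnonzero(fired):
        spike_times.append((step * dt, idx))

print(f"{len(spike_times)} spikes across {N} neurons in {T} ms")
```

Run it and you should see a few thousand spikes emerge from nothing but leaky membranes, noise and random wiring, which is the general flavour of the “spontaneous emergent interaction” being reported, scaled down by a great many orders of magnitude.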
Am I odd, or is it ok that the prospect of true “artificial intelligence” or artificial “brains” scares the bejeebers out of me?
A couple of things:
The human brain has about 10 billion neurons, and each neuron makes roughly 10,000 connections to other neurons. That’s 10 to the 14th power connections. Here’s a very good one-page overview of the human brain:
http://www.willamette.edu/~gorr/classes/cs449/brain.html
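The arithmetic is easy to sanity-check. Here’s a quick back-of-envelope sketch using the figures above (the four-bytes-per-synapse storage estimate is purely an illustrative assumption of mine, not a claim about any actual model):

```python
# Back-of-envelope check of the numbers quoted above. The neuron and
# connection counts are the commenter's figures; the storage estimate is
# an illustrative assumption, not a property of any real model.
neurons = 10**10              # ~10 billion neurons
synapses_per_neuron = 10**4   # ~10,000 connections per neuron

total_connections = neurons * synapses_per_neuron
print(f"total connections: {total_connections:.0e}")   # 1e+14

bytes_per_synapse = 4  # assume one 32-bit weight per connection (illustrative)
terabytes = total_connections * bytes_per_synapse / 1e12
print(f"~{terabytes:.0f} TB just to store one number per connection")
```

Even at a single number per connection, that’s hundreds of terabytes before any dynamics have been modelled at all.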
Fundamentally, all computers use algorithms to retrieve, process and analyze data.
How in heaven’s name does Markram believe he can emulate a human brain in 10 years? What does Markram think the human brain does? There is a large number of serious mathematicians, neurologists and neurophilosophers, Roger Penrose for example, who do not believe that the human mind functions algorithmically. (See “The Emperor’s New Mind”.) Douglas Hofstadter’s explorations of minds, brains and consciousness also seem to point away from the algorithm model. Neither believes that we can never understand the brain, but they, and most serious researchers in this field, feel that we are many decades away from having the knowledge and tools to build a thinking, conscious brain.
Markram thinks he can build a model brain as good as the human brain. In 2005, a group from UCSD and the Fermi Institute in Italy presented a paper on a model of the part of a lobster’s neural system that controls the pyloric valve. From what I can tell, this research started sometime in the mid-1970s. I understand that computing speed and our fundamental understanding of algorithms have grown enormously since then. But, really?
I believe strongly in the work Markram and many others are doing, but hasn’t the whole history of AI taught us anything?
– Devices that increasingly displace (demand for) human labor, far faster than (the majority of) slow humans can adapt or retrain … 5 years
– a brain the size of an office building, with a complexity appreciably close to the human brain, that does little else but talk in haphazard sentences, generate alarmist and zomfg science papers, and make invariably female Japanese-Caucasian-looking androids do ballet dances to new-age-themed music … 10 years
– a deep and total penetration into the mainstream consciousness, widespread societal Luddism and future shock, the first transhumanist (either Eliezer or Mike Anissimov) killed by a ‘relinquishist’ terrorist whose disheveled, homely face attains Che-level popularity for a few years … 15 years
– the first specialized androids that can mimic human functions in very narrow niche applications; mass production of the same, mostly using pseudo-nanotechnological production methods we don’t have a proper term for in 2009 yet; extreme siege mentality, paranoia and existential terror among the global elites (people with A LOT to lose), and attempts to turn the clock of freedom and democracy back a good century toward fierce authoritarian dictatorships; significant unemployability of essentially the majority of human beings; total social and economic upheaval … 20 years
– ??? – “most people can either afford to shrug or are dead” … 25 years
The problem with simulating consciousness is that the simulation would be conscious. It would be unethical to turn it off.