
[$mind]!=[$computer]: why uploading your brain probably won’t happen

Via Science Not Fiction, here’s one Timothy B Lee taking down that cornerstone of Singularitarianism, the uploading of minds to digital substrates. How can we hope to reverse-engineer something that wasn’t engineered in the first place?

You can’t emulate a natural system because natural systems don’t have designers, and therefore weren’t built to conform to any particular mathematical model. Modeling natural systems is much more difficult—indeed, so difficult that we use a different word, “simulation”, to describe the process. Creating a simulation of a natural system inherently means making judgment calls about which aspects of a physical system are the most important. And because there’s no underlying blueprint, these guesses are never perfect: it will always be necessary to leave out some details that affect the behavior of the overall system, which means that simulations are never more than approximately right. Weather simulations, for example, are never going to be able to predict precisely where each raindrop will fall; they only predict general large-scale trends, and only for a limited period of time. This is different than an emulator, which (if implemented well) can be expected to behave exactly like the system it is emulating, for as long as you care to run it.

Hanson’s fundamental mistake is to treat the brain like a human-designed system we could conceivably reverse-engineer rather than a natural system we can only simulate. We may have relatively good models for the operation of nerves, but these models are simplifications, and therefore they will differ in subtle ways from the operation of actual nerves. And these subtle micro-level inaccuracies will snowball into large-scale errors when we try to simulate an entire brain, in precisely the same way that small micro-level imperfections in weather models accumulate to make long-range forecasting inaccurate.
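A quick aside of my own: the snowballing Lee describes is easy to see in even a toy nonlinear system. The sketch below is purely an illustration (it has nothing to do with real weather or neural models, and isn’t from Lee’s post); it runs the logistic map twice with growth parameters that differ by one part in a million, and the two trajectories agree closely at first before diverging completely.

```python
# A toy demonstration (my own example) of how tiny modelling errors snowball
# in a nonlinear system: two logistic-map runs whose growth parameters differ
# by one part in a million end up bearing no resemblance to each other.

def logistic(r, x0, steps):
    """Iterate x -> r * x * (1 - x) and return the full trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

true_run = logistic(r=3.9, x0=0.5, steps=50)        # stand-in for the "real" system
model_run = logistic(r=3.900001, x0=0.5, steps=50)  # our slightly-wrong "model"

for step in (1, 10, 25, 50):
    print(f"step {step:2d}: true={true_run[step]:.6f}  "
          f"model={model_run[step]:.6f}  "
          f"error={abs(true_run[step] - model_run[step]):.6f}")
```

By the last few steps the “model” tracks the “true” run not at all, which is Lee’s point in miniature: micro-level imperfections don’t stay micro for long.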

As discussed before, I rather think that mind simulation – much like its related discipline, general artificial intelligence – is one of those things whose possibility will only be resolved by its achievement (or lack thereof). Which, come to think of it, might explain the somewhat theological flavour of the discourse around it…

Singularity slapfight: yet more Kurzweil vs. Myers

In the interests of following up on my earlier post about PZ Myers’ take-down of Ray Kurzweil’s claims about reverse engineering the human brain, and of displaying a lack of bias (I really don’t have a horse in this race, but I still enjoy watching them run, if that makes any sense), here’s some aftermath linkage.

Kurzweil himself responds [via SentientDevelopments]:

Myers, who apparently based his second-hand comments on erroneous press reports (he wasn’t at my talk), goes on to claim that my thesis is that we will reverse-engineer the brain from the genome. This is not at all what I said in my presentation to the Singularity Summit. I explicitly said that our quest to understand the principles of operation of the brain is based on many types of studies — from detailed molecular studies of individual neurons, to scans of neural connection patterns, to studies of the function of neural clusters, and many other approaches. I did not present studying the genome as even part of the strategy for reverse-engineering the brain.

Al Fin declares that neither Kurzweil nor Myers understands the brain [via AcceleratingFuture]:

But is that clear fact of mutual brain ignorance relevant to the underlying issue — Kurzweil’s claim that science will be able to “reverse-engineer” the human brain within 20 years? In other words, Ray Kurzweil expects humans to build a brain-functional machine in the next 2 decades based largely upon concepts learned from studying how brains/minds think.

Clearly Kurzweil is not claiming that he will be able to understand human brains down to the most intricate detail, nor is he claiming that his new machine brain will emulate the brain down to its cell signaling proteins, receptors, gene expression, and organelles. Myers seems to become a bit bogged down in the details of his own objections to his misconceptions of what Kurzweil is claiming, and loses the thread of his argument — which can be summed up by Myers’ claim that Kurzweil is a “kook.”

But Kurzweil’s amazing body of thought and invention testifies to the fact that Kurzweil is probably no more a kook than any other genius inventor/visionary. Calling someone a “kook” is apparently considered clever in the intellectual circles in which Mr. Myers and the commenters on his blog travel, but in the thinking world such accusations provide too little information to be of much use.

Zing! Now, back to Myers:

In short, here’s Kurzweil’s claim: the brain is simpler than we think, and thanks to the accelerating rate of technological change, we will understand its basic principles of operation completely within a few decades. My counterargument, which he hasn’t addressed at all, is that 1) his argument for that simplicity is deeply flawed and irrelevant, 2) he has made no quantifiable argument about how much we know about the brain right now, and I argue that we’ve only scratched the surface in the last several decades of research, 3) “exponential” is not a magic word that solves all problems (if I put a penny in the bank today, it does not mean I will have a million dollars in my retirement fund in 20 years), and 4) Kurzweil has provided no explanation for how we’ll be ‘reverse engineering’ the human brain. He’s now at least clearly stating that decoding the genome does not generate the necessary information — it’s just an argument that the brain isn’t as complex as we thought, which I’ve already said is bogus — but left dangling is the question of methodology. I suggest that we need to have a combined strategy of digging into the brain from the perspectives of physiology, molecular biology, genetics, and development, and in all of those fields I see a long hard slog ahead. I also don’t see that noisemakers like Kurzweil, who know nothing of those fields, will be making any contribution at all.
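Myers’ penny analogy is worth making concrete, since “exponential” does so much heavy lifting in these arguments. The back-of-the-envelope sketch below uses my own numbers (nothing supplied by Myers or Kurzweil): even if that penny somehow doubled in value every year for twenty years, you’d end up with roughly ten thousand dollars, still two orders of magnitude short of the million.

```python
# Myers' penny analogy, made concrete with my own illustrative numbers:
# even absurdly generous exponential growth of 100% per year for 20 years
# turns $0.01 into roughly $10,000 -- nowhere near $1,000,000.

def compound(principal, annual_rate, years):
    """Value of `principal` after `years` of compound growth at `annual_rate`."""
    return principal * (1 + annual_rate) ** years

penny = 0.01
print(f"5% interest for 20 years:  ${compound(penny, 0.05, 20):,.2f}")
print(f"100% growth for 20 years:  ${compound(penny, 1.00, 20):,.2f}")
print(f"Rate needed to hit $1M:    "
      f"{(1_000_000 / penny) ** (1 / 20) - 1:.1%} per year")
```

Exponential growth is real, in other words, but the base and the rate still matter; invoking the word doesn’t settle how far it gets you in twenty years.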

And, a little later still, after linking to some (fairly insubstantial) snark:

There are other, perhaps somewhat more serious, rebuttals at Rennie’s Last Nerve and A Fistful of Science.

Now run along, little obsessive Kurzweilians, there are many other blogs out there that regard your hero with derision, demanding your earnestly clueless rebuttals.

Smacks a little of “this is beneath me”, doesn’t it… or possibly even “can’t win, won’t fight”. Maybe I’m being unfair to Myers, but he’s certainly never backed off this easily when it comes to atheism and Darwin, and just a few days ago he was full of piss and vinegar. (Which isn’t to say I think he’s definitely wrong, of course; just that I expected a rather more determined attack… not to mention less ad hominem and othering from someone who – quite rightfully – deplores such tactics when used by his usual opponents.)

Finally, George Dvorsky has a sort of condensed and sensationalism-free roadmap for AI from reverse engineering of the brain:

While I believe that reverse engineering the human brain is the right approach, I admit that it’s not going to be easy. Nor is it going to be quick. This will be a multi-disciplinary endeavor that will require decades of data collection and the use of technologies that don’t exist yet. And importantly, success won’t come about all at once. This will be an incremental process in which individual developments will provide the foundation for overcoming the next conceptual hurdle.

[…]

Inevitably the question as to ‘when’ crops up. Personally, I could care less. I’m more interested in viability than timelines. But, if pressed for an answer, my feeling is that we are still quite a ways off. Kurzweil’s prediction of 2030 is uncomfortably short in my opinion; his analogies to the human genome project are unsatisfying. This is a project of much greater magnitude, not to mention that we’re still likely heading down some blind alleys.

My own feeling is that we’ll likely be able to emulate the human brain in about 50 to 75 years. I will admit that I’m pulling this figure out of my butt as I really have no idea. It’s more a feeling than a scientifically-backed estimate.

That’s pretty much why Dvorsky is one of my main go-to sources for transhumanist commentary; he’s one of the few self-identified members of the movement (of those that I’ve discovered, at least) who are honest enough to admit when they don’t know something for certain.

I suspect that with Myers’ withdrawal from the field, that’s probably the end of this round. But as I said before, the greater intellectual battle is yet to be fought out, and this is probably just one early ideological skirmish.

Be sure to stock up on popcorn. 😉