[$mind]!=[$computer]: why uploading your brain probably won’t happen

Paul Raven @ 18-01-2011

Via Science Not Fiction, here’s one Timothy B Lee taking down that cornerstone of Singularitarianism, the uploading of minds to digital substrates. How can we hope to reverse-engineer something that wasn’t engineered in the first place?

You can’t emulate a natural system because natural systems don’t have designers, and therefore weren’t built to conform to any particular mathematical model. Modeling natural systems is much more difficult—indeed, so difficult that we use a different word, “simulation”, to describe the process. Creating a simulation of a natural system inherently means making judgment calls about which aspects of a physical system are the most important. And because there’s no underlying blueprint, these guesses are never perfect: it will always be necessary to leave out some details that affect the behavior of the overall system, which means that simulations are never more than approximately right. Weather simulations, for example, are never going to be able to predict precisely where each raindrop will fall; they only predict general large-scale trends, and only for a limited period of time. This is different from an emulator, which (if implemented well) can be expected to behave exactly like the system it is emulating, for as long as you care to run it.

Hanson’s fundamental mistake is to treat the brain like a human-designed system we could conceivably reverse-engineer rather than a natural system we can only simulate. We may have relatively good models for the operation of nerves, but these models are simplifications, and therefore they will differ in subtle ways from the operation of actual nerves. And these subtle micro-level inaccuracies will snowball into large-scale errors when we try to simulate an entire brain, in precisely the same way that small micro-level imperfections in weather models accumulate to make long-range forecasting inaccurate.

As discussed before, I rather think that mind simulation – much like its related discipline, general artificial intelligence – is one of those things whose possibility will only be resolved by its achievement (or lack thereof). Which, come to think of it, might explain the somewhat theological flavour of the discourse around it…


Technology as brain peripherals

Paul Raven @ 15-12-2010

Via George Dvorsky, a philosophical push-back against that persistent “teh-intarwebz-be-makin-uz-stoopid” riff, as espoused by professional curmudgeon Nick Carr (among others)… and I’m awarding extra points to Professor Andy Clark at the New York Times not just for arguing that technological extension or enhancement of the mind is no different to repair or support of it, but for mentioning the lyrics to an old Pixies tune. Yes, I really am that easily swayed*.

There is no more reason, from the perspective of evolution or learning, to favor the use of a brain-only cognitive strategy than there is to favor the use of canny (but messy, complex, hard-to-understand) combinations of brain, body and world. Brains play a major role, of course. They are the locus of great plasticity and processing power, and will be the key to almost any form of cognitive success. But spare a thought for the many resources whose task-related bursts of activity take place elsewhere, not just in the physical motions of our hands and arms while reasoning, or in the muscles of the dancer or the sports star, but even outside the biological body — in the iPhones, BlackBerrys, laptops and organizers which transform and extend the reach of bare biological processing in so many ways. These blobs of less-celebrated activity may sometimes be best seen, myself and others have argued, as bio-external elements in an extended cognitive process: one that now criss-crosses the conventional boundaries of skin and skull.

One way to see this is to ask yourself how you would categorize the same work were it found to occur “in the head” as part of the neural processing of, say, an alien species. If you’d then have no hesitation in counting the activity as genuine (though non-conscious) cognitive activity, then perhaps it is only some kind of bio-envelope prejudice that stops you counting the same work, when reliably performed outside the head, as a genuine element in your own mental processing?

[…]

Many people I speak to are perfectly happy with the idea that an implanted piece of non-biological equipment, interfaced to the brain by some kind of directly wired connection, would count (assuming all went well) as providing material support for some of their own cognitive processing. Just as we embrace cochlear implants as genuine but non-biological elements in a sensory circuit, so we might embrace “silicon neurons” performing complex operations as elements in some future form of cognitive repair. But when the emphasis shifts from repair to extension, and from implants with wired interfacing to “explants” with wire-free communication, intuitions sometimes shift. That shift, I want to argue, is unjustified. If we can repair a cognitive function by the use of non-biological circuitry, then we can extend and alter cognitive functions that way too. And if a wired interface is acceptable, then, at least in principle, a wire-free interface (such as links your brain to your notepad, BlackBerry or iPhone) must be acceptable too. What counts is the flow and alteration of information, not the medium through which it moves.

Lots of useful ideas in there for anyone working on a new cyborg manifesto, I reckon… and some interesting implications for the standard suite of human rights, once you start counting outboard hardware as part of the mind. (E.g. depriving someone of their handheld device becomes similar to blindfolding or other forms of sensory deprivation.)

[ * Not really. Well, actually, I dunno; you can try and convince me. Y’know, if you like. Whatever. Ooooh, LOLcats! ]


Cortical coprocessors: an outboard OS for the brain

Paul Raven @ 27-09-2010

The last time I remember encountering the word “coprocessor” was when my father bought himself a 486DX system with all the bells and whistles, some time back in the nineties. Now it’s doing the rounds in this widely-linked Technology Review article about brain-function bolt-ons; it’s a fairly serious examination of the possibilities of augmenting our mind-meat with technology, and well worth a read. Here’s a snippet:

Given the ever-increasing number of brain readout and control technologies available, a generalized brain coprocessor architecture could be enabled by defining common interfaces governing how component technologies talk to one another, as well as an “operating system” that defines how the overall system works as a unified whole — analogous to the way personal computers govern the interaction of their component hard drives, memories, processors, and displays. Such a brain coprocessor platform could facilitate innovation by enabling neuroengineers to focus on neural prosthetics at an algorithmic level, much as a computer programmer can work on a computer at a conceptual level without having to plan the fate of every individual bit. In addition, if new technologies come along, e.g., a new kind of neural recording technology, they could be incorporated into a system, and in principle rapidly coupled to existing computation and perturbation methods, without requiring the heavy readaptation of those other components.
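For the curious, the “common interfaces” idea is familiar from ordinary software architecture; here’s a hypothetical sketch of how it might look, with all the class and function names invented for illustration rather than taken from the article:

```python
# Hypothetical sketch of the "common interfaces" idea: readout and
# perturbation devices implement shared interfaces, so the coordinating
# "OS" loop never depends on any one piece of hardware. All names here
# are invented for illustration.

from abc import ABC, abstractmethod

class NeuralReadout(ABC):
    @abstractmethod
    def read(self) -> list[float]:
        """Return the latest activity samples."""

class NeuralStimulator(ABC):
    @abstractmethod
    def stimulate(self, pattern: list[float]) -> None:
        """Deliver a stimulation pattern."""

class MockElectrodeArray(NeuralReadout):
    def read(self) -> list[float]:
        return [0.0, 0.5, 0.1]  # stand-in for real recordings

class MockOptogeneticDriver(NeuralStimulator):
    def __init__(self):
        self.last_pattern = None

    def stimulate(self, pattern: list[float]) -> None:
        self.last_pattern = pattern

def coprocessor_step(readout: NeuralReadout, stim: NeuralStimulator):
    """One control-loop tick: read activity, compute a response, write it back."""
    activity = readout.read()
    response = [-x for x in activity]  # trivial placeholder "algorithm"
    stim.stimulate(response)
    return response

stim = MockOptogeneticDriver()
coprocessor_step(MockElectrodeArray(), stim)
```

Swapping in a new recording technology then means writing one adapter class; the control loop, the “operating system” of the analogy, is untouched.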

Of course, the idea of a brain OS brings with it the inevitability of competing OSs in the marketplace… including a widely-used commercial product that needs patching once a week so that dodgy urban billboards can’t trojan your cerebellum and turn you into an unwitting evangelist for under-the-counter medicines and fake watches, an increasingly-popular slick-looking solution with a price-tag (and aspirational marketing) to match, and a plethora of forked open-source systems whose proponents can’t understand why their geeky obsession with being able to adjust the tiniest settings effectively excludes the wider audience they’d love to reach. Those “I’m a Mac / I’m a PC” ads will get a whole new lease of remixed and self-referential life…


Friday philosophy: mind/body dualism

Paul Raven @ 27-08-2010

Thinking caps on, folks. Using a science fictional premise as a framing device, philosopher Daniel Dennett ponders the question “if your mind and body were separated, which one would be ‘you’?” [via MetaFilter]

“Yorick,” I said aloud to my brain, “you are my brain. The rest of my body, seated in this chair, I dub ‘Hamlet.’” So here we all are: Yorick’s my brain, Hamlet’s my body, and I am Dennett. Now, where am I? And when I think “where am I?”, where’s that thought tokened? Is it tokened in my brain, lounging about in the vat, or right here between my ears where it seems to be tokened? Or nowhere? Its temporal coordinates give me no trouble; must it not have spatial coordinates as well? I began making a list of the alternatives.

It’s a seventies-vintage essay, so the frame plot is a bit hokey, but the philosophical conundrum still packs a mean punch. Don’t read it if you’ve got anything complicated you’re meant to think about for the rest of the day. 🙂


Singularity slapfight: yet more Kurzweil vs. Myers

Paul Raven @ 23-08-2010

In the interests of following up on my earlier post about PZ Myers’ take-down of Ray Kurzweil’s claims about reverse engineering the human brain, and of displaying a lack of bias (I really don’t have a horse in this race, but I still enjoy watching them run, if that makes any sense), here’s some aftermath linkage.

Kurzweil himself responds [via SentientDevelopments]:

Myers, who apparently based his second-hand comments on erroneous press reports (he wasn’t at my talk), goes on to claim that my thesis is that we will reverse-engineer the brain from the genome. This is not at all what I said in my presentation to the Singularity Summit. I explicitly said that our quest to understand the principles of operation of the brain is based on many types of studies — from detailed molecular studies of individual neurons, to scans of neural connection patterns, to studies of the function of neural clusters, and many other approaches. I did not present studying the genome as even part of the strategy for reverse-engineering the brain.

Al Fin declares that neither Kurzweil nor Myers understands the brain [via AcceleratingFuture]:

But is that clear fact of mutual brain ignorance relevant to the underlying issue — Kurzweil’s claim that science will be able to “reverse-engineer” the human brain within 20 years? In other words, Ray Kurzweil expects humans to build a brain-functional machine in the next 2 decades based largely upon concepts learned from studying how brains/minds think.

Clearly Kurzweil is not claiming that he will be able to understand human brains down to the most intricate detail, nor is he claiming that his new machine brain will emulate the brain down to its cell signaling proteins, receptors, gene expression, and organelles. Myers seems to become a bit bogged down in the details of his own objections to his misconceptions of what Kurzweil is claiming, and loses the thread of his argument — which can be summed up by Myers’ claim that Kurzweil is a “kook.”

But Kurzweil’s amazing body of thought and invention testifies to the fact that Kurzweil is probably no more a kook than any other genius inventor/visionary. Calling someone a “kook” is apparently considered clever in the intellectual circles in which Mr. Myers and the commenters on his blog travel, but in the thinking world such accusations provide too little information to be of much use.

Zing! Now, back to Myers:

In short, here’s Kurzweil’s claim: the brain is simpler than we think, and thanks to the accelerating rate of technological change, we will understand its basic principles of operation completely within a few decades. My counterargument, which he hasn’t addressed at all, is that 1) his argument for that simplicity is deeply flawed and irrelevant, 2) he has made no quantifiable argument about how much we know about the brain right now, and I argue that we’ve only scratched the surface in the last several decades of research, 3) “exponential” is not a magic word that solves all problems (if I put a penny in the bank today, it does not mean I will have a million dollars in my retirement fund in 20 years), and 4) Kurzweil has provided no explanation for how we’ll be ‘reverse engineering’ the human brain. He’s now at least clearly stating that decoding the genome does not generate the necessary information — it’s just an argument that the brain isn’t as complex as we thought, which I’ve already said is bogus — but left dangling is the question of methodology. I suggest that we need to have a combined strategy of digging into the brain from the perspectives of physiology, molecular biology, genetics, and development, and in all of those fields I see a long hard slog ahead. I also don’t see that noisemakers like Kurzweil, who know nothing of those fields, will be making any contribution at all.
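Myers’ penny analogy checks out, incidentally; here’s the arithmetic in Python (the 5% interest rate is my assumption, picked to be generous):

```python
# Myers' point, made concrete: exponential growth from a tiny starting
# value, at a realistic rate, gets nowhere near a million dollars in 20
# years. The 5% annual rate is an assumption for illustration.

penny = 0.01
rate = 0.05
balance = penny * (1 + rate) ** 20
print(f"${balance:.4f}")  # roughly $0.0265 -- not $1,000,000

# The growth rate actually required to turn a penny into a million
# dollars in 20 years:
required = (1_000_000 / penny) ** (1 / 20) - 1
print(f"{required:.0%}")  # about 151% per year, every year
```

Which is the nub of the dispute: “exponential” only does magic work if the base of the exponent is doing magic work too.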

And, a little later still, after linking to some (fairly insubstantial) snark:

There are other, perhaps somewhat more serious, rebuttals at Rennie’s Last Nerve and A Fistful of Science.

Now run along, little obsessive Kurzweilians, there are many other blogs out there that regard your hero with derision, demanding your earnestly clueless rebuttals.

Smacks a little of “this is beneath me”, doesn’t it… or possibly even “can’t win, won’t fight”. Maybe I’m being unfair to Myers, but he’s certainly never backed off this easily when it comes to atheism and Darwin, and just a few days ago he was full of piss and vinegar. (Which isn’t to say I think he’s definitely wrong, of course; just that I expected a rather more determined attack… not to mention less ad hominem and othering from someone who – quite rightfully – deplores such tactics when used by his usual opponents.)

Finally, George Dvorsky has a sort of condensed and sensationalism-free roadmap for AI from reverse engineering of the brain:

While I believe that reverse engineering the human brain is the right approach, I admit that it’s not going to be easy. Nor is it going to be quick. This will be a multi-disciplinary endeavor that will require decades of data collection and the use of technologies that don’t exist yet. And importantly, success won’t come about all at once. This will be an incremental process in which individual developments will provide the foundation for overcoming the next conceptual hurdle.

[…]

Inevitably the question as to ‘when’ crops up. Personally, I could care less. I’m more interested in viability than timelines. But, if pressed for an answer, my feeling is that we are still quite a ways off. Kurzweil’s prediction of 2030 is uncomfortably short in my opinion; his analogies to the human genome project are unsatisfying. This is a project of much greater magnitude, not to mention that we’re still likely heading down some blind alleys.

My own feeling is that we’ll likely be able to emulate the human brain in about 50 to 75 years. I will admit that I’m pulling this figure out of my butt as I really have no idea. It’s more a feeling than a scientifically-backed estimate.

That’s pretty much why Dvorsky is one of my main go-to sources for transhumanist commentary; he’s one of the few self-identified members of the movement (of those that I’ve discovered, at least) who’s honest enough to admit when he doesn’t know something for certain.

I suspect that with Myers’ withdrawal from the field, that’s probably the end of this round. But as I said before, the greater intellectual battle is yet to be fought out, and this is probably just one early ideological skirmish.

Be sure to stock up on popcorn. 😉

