
Cortical coprocessors: an outboard OS for the brain

The last time I remember encountering the word “coprocessor” was when my father bought himself a 486DX system with all the bells and whistles, some time back in the nineties. Now it’s doing the rounds in this widely-linked Technology Review article about brain-function bolt-ons; it’s a fairly serious examination of the possibilities of augmenting our mind-meat with technology, and well worth a read. Here’s a snippet:

Given the ever-increasing number of brain readout and control technologies available, a generalized brain coprocessor architecture could be enabled by defining common interfaces governing how component technologies talk to one another, as well as an “operating system” that defines how the overall system works as a unified whole–analogous to the way personal computers govern the interaction of their component hard drives, memories, processors, and displays. Such a brain coprocessor platform could facilitate innovation by enabling neuroengineers to focus on neural prosthetics at an algorithmic level, much as a computer programmer can work on a computer at a conceptual level without having to plan the fate of every individual bit. In addition, if new technologies come along, e.g., a new kind of neural recording technology, they could be incorporated into a system, and in principle rapidly coupled to existing computation and perturbation methods, without requiring the heavy readaptation of those other components.
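The architecture the article describes — common interfaces for readout and perturbation components, with an "operating system" coordinating them — maps neatly onto a plain software sketch. Here's a minimal, entirely hypothetical illustration (every class and method name below is my invention, not from the article):

```python
from abc import ABC, abstractmethod

class NeuralRecorder(ABC):
    """Common interface any brain-readout technology would implement."""
    @abstractmethod
    def read(self) -> list[float]:
        """Return the latest batch of neural signal samples."""

class NeuralStimulator(ABC):
    """Common interface any brain-perturbation technology would implement."""
    @abstractmethod
    def write(self, pattern: list[float]) -> None:
        """Deliver a stimulation pattern."""

class EEGRecorder(NeuralRecorder):
    """Stand-in recorder returning canned samples."""
    def read(self) -> list[float]:
        return [0.1, -0.2, 0.05]

class OpticalStimulator(NeuralStimulator):
    """Stand-in stimulator that just remembers what it was told to do."""
    def __init__(self):
        self.last_pattern = None
    def write(self, pattern: list[float]) -> None:
        self.last_pattern = pattern

class CoprocessorOS:
    """The 'operating system': routes any recorder's output through a
    swappable algorithm to any stimulator, so a new recording technology
    can be dropped in without readapting the other components."""
    def __init__(self, recorder, stimulator, algorithm):
        self.recorder = recorder
        self.stimulator = stimulator
        self.algorithm = algorithm
    def tick(self):
        samples = self.recorder.read()
        self.stimulator.write(self.algorithm(samples))

recorder, stimulator = EEGRecorder(), OpticalStimulator()
brain_os = CoprocessorOS(recorder, stimulator, lambda s: [x * 2 for x in s])
brain_os.tick()
```

The point of the analogy, captured here, is that the "neuroengineer" only writes the `algorithm` argument; the OS handles the plumbing between components, just as a PC's OS mediates between drives, memory and displays.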

Of course, the idea of a brain OS brings with it the inevitability of competing OSs in the marketplace… including a widely-used commercial product that needs patching once a week so that dodgy urban billboards can’t trojan your cerebellum and turn you into an unwitting evangelist for under-the-counter medicines and fake watches, an increasingly-popular slick-looking solution with a price-tag (and aspirational marketing) to match, and a plethora of forked open-source systems whose proponents can’t understand why their geeky obsession with being able to adjust the tiniest settings effectively excludes the wider audience they’d love to reach. Those “I’m a Mac / I’m a PC” ads will get a whole new lease of remixed and self-referential life…

It’s a shame about Ray: Kurzweil not the only star in the Singularitarian firmament

George Dvorsky continues to take advantage of the recent famous-on-the-internet profile of the Kurzweil/Myers beef to bring lesser-discussed aspects of Singularitarianism to the fore… and as someone with an active interest in the movement (not to mention as a science fiction reader), I think that’s a worthwhile thing to do. Like I’ve said before, as way-out as it may still seem to a lot of people, the Singularity is an important concept in our wired world, even if viewed only with the utmost cynicism as a form of eschatological philosophy or techno-cult (which I think is to sell it more than a little short).

So here’s Dvorsky’s non-comprehensive list of notable Singularitarian thinkers, which includes one well-known sf writer, Vernor Vinge, and one person (that I know of, at least) who has been tuckerized as a posthuman ‘species’ in science fiction literature: Hans Moravec, who gave his name to the moravecs of Dan Simmons’ Ilium, an excellent (if challenging and very hefty) novel.

Dvorsky invites suggestions of other thinkers worthy of attention in the fields of Singularity thinking and artificial intelligence, and I’ll extend the same invitation – feel free to include critics and naysayers, provided they tackle the issues with rigour.

And while we’re on the subject, you may or may not already know that PZ Myers has been called in for some serious heart surgery. Just in case it wasn’t already plain: despite not necessarily agreeing with him on matters recently discussed (and sniping at the tone taken), I bear the man no malice, and wish him a speedy recovery. Best of luck, Professor Myers.

Singularity slapfight: yet more Kurzweil vs. Myers

In the interests of following up on my earlier post about PZ Myers’ take-down of Ray Kurzweil’s claims about reverse engineering the human brain, and of displaying a lack of bias (I really don’t have a horse in this race, but I still enjoy watching them run, if that makes any sense), here’s some aftermath linkage.

Kurzweil himself responds [via SentientDevelopments]:

Myers, who apparently based his second-hand comments on erroneous press reports (he wasn’t at my talk), goes on to claim that my thesis is that we will reverse-engineer the brain from the genome. This is not at all what I said in my presentation to the Singularity Summit. I explicitly said that our quest to understand the principles of operation of the brain is based on many types of studies — from detailed molecular studies of individual neurons, to scans of neural connection patterns, to studies of the function of neural clusters, and many other approaches. I did not present studying the genome as even part of the strategy for reverse-engineering the brain.

Al Fin declares that neither Kurzweil nor Myers understands the brain [via AcceleratingFuture]:

But is that clear fact of mutual brain ignorance relevant to the underlying issue — Kurzweil’s claim that science will be able to “reverse-engineer” the human brain within 20 years? In other words, Ray Kurzweil expects humans to build a brain-functional machine in the next 2 decades based largely upon concepts learned from studying how brains/minds think.

Clearly Kurzweil is not claiming that he will be able to understand human brains down to the most intricate detail, nor is he claiming that his new machine brain will emulate the brain down to its cell signaling proteins, receptors, gene expression, and organelles. Myers seems to become a bit bogged down in the details of his own objections to his misconceptions of what Kurzweil is claiming, and loses the thread of his argument — which can be summed up by Myers’ claim that Kurzweil is a “kook.”

But Kurzweil’s amazing body of thought and invention testifies to the fact that Kurzweil is probably no more a kook than any other genius inventor/visionary. Calling someone a “kook” is apparently considered clever in the intellectual circles in which Mr. Myers and the commenters on his blog travel, but in the thinking world such accusations provide too little information to be of much use.

Zing! Now, back to Myers:

In short, here’s Kurzweil’s claim: the brain is simpler than we think, and thanks to the accelerating rate of technological change, we will understand its basic principles of operation completely within a few decades. My counterargument, which he hasn’t addressed at all, is that 1) his argument for that simplicity is deeply flawed and irrelevant, 2) he has made no quantifiable argument about how much we know about the brain right now, and I argue that we’ve only scratched the surface in the last several decades of research, 3) “exponential” is not a magic word that solves all problems (if I put a penny in the bank today, it does not mean I will have a million dollars in my retirement fund in 20 years), and 4) Kurzweil has provided no explanation for how we’ll be ‘reverse engineering’ the human brain. He’s now at least clearly stating that decoding the genome does not generate the necessary information — it’s just an argument that the brain isn’t as complex as we thought, which I’ve already said is bogus — but left dangling is the question of methodology. I suggest that we need to have a combined strategy of digging into the brain from the perspectives of physiology, molecular biology, genetics, and development, and in all of those fields I see a long hard slog ahead. I also don’t see that noisemakers like Kurzweil, who know nothing of those fields, will be making any contribution at all.
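Myers’ penny analogy is easy to check with a couple of lines: exponential growth is real, but the outcome depends entirely on the rate and the starting point. The rates below are illustrative, not anyone’s actual claim:

```python
def compound(principal, rate, years):
    """Value of `principal` after `years` years of growth at `rate` per year."""
    return principal * (1 + rate) ** years

# A penny at a (generous) 5% annual bank rate for 20 years:
penny_at_bank_rate = compound(0.01, 0.05, 20)   # roughly $0.027

# Even if the penny somehow doubled in value every single year:
penny_doubling = compound(0.01, 1.0, 20)        # 0.01 * 2**20 = $10,485.76
```

Either way, the retirement fund falls short of a million dollars by several orders of magnitude; "it grows exponentially" is not, on its own, an argument about how big it gets.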

And, a little later still, after linking to some (fairly insubstantial) snark:

There are other, perhaps somewhat more serious, rebuttals at Rennie’s Last Nerve and A Fistful of Science.

Now run along, little obsessive Kurzweilians, there are many other blogs out there that regard your hero with derision, demanding your earnestly clueless rebuttals.

Smacks a little of “this is beneath me”, doesn’t it… or possibly even “can’t win, won’t fight”. Maybe I’m being unfair to Myers, but he’s certainly never backed off this easily when it comes to atheism and Darwin, and just a few days ago he was full of piss and vinegar. (Which isn’t to say I think he’s definitely wrong, of course; just that I expected a rather more determined attack… not to mention less ad hominem and othering from someone who – quite rightfully – deplores such tactics when used by his usual opponents.)

Finally, George Dvorsky has a sort of condensed and sensationalism-free roadmap for AI from reverse engineering of the brain:

While I believe that reverse engineering the human brain is the right approach, I admit that it’s not going to be easy. Nor is it going to be quick. This will be a multi-disciplinary endeavor that will require decades of data collection and the use of technologies that don’t exist yet. And importantly, success won’t come about all at once. This will be an incremental process in which individual developments will provide the foundation for overcoming the next conceptual hurdle.

[…]

Inevitably the question as to ‘when’ crops up. Personally, I could care less. I’m more interested in viability than timelines. But, if pressed for an answer, my feeling is that we are still quite a ways off. Kurzweil’s prediction of 2030 is uncomfortably short in my opinion; his analogies to the human genome project are unsatisfying. This is a project of much greater magnitude, not to mention that we’re still likely heading down some blind alleys.

My own feeling is that we’ll likely be able to emulate the human brain in about 50 to 75 years. I will admit that I’m pulling this figure out of my butt as I really have no idea. It’s more a feeling than a scientifically-backed estimate.

That’s pretty much why Dvorsky is one of my main go-to sources for transhumanist commentary; he’s one of the few self-identified members of the movement (of those that I’ve discovered, at least) who’s honest enough to admit when he doesn’t know something for certain.

I suspect that with Myers’ withdrawal from the field, that’s probably the end of this round. But as I said before, the greater intellectual battle is yet to be fought out, and this is probably just one early ideological skirmish.

Be sure to stock up on popcorn. 😉

Transhumanist science clash! Kurzweil vs. Myers

Say what you will about transhumanism, but one thing’s for certain: it really polarises opinion, and nowhere more so than in the halls of academia and scientific research. Observe: Wired/Gizmodo had a chat with Singularitarian-in-chief Ray Kurzweil, who restated his theory (considered unrealistically optimistic by some transhumanists) that we’ll be able to reverse-engineer the human brain and simulate it with computers within a decade or so.

Here’s how that math works, Kurzweil explains: The design of the brain is in the genome. The human genome has three billion base pairs or six billion bits, which is about 800 million bytes before compression, he says. Eliminating redundancies and applying loss-less compression, that information can be compressed into about 50 million bytes, according to Kurzweil.

About half of that is the brain, which comes down to 25 million bytes, or a million lines of code.
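Kurzweil’s chain of figures can be reproduced step by step. The bytes-per-line ratio at the end is my assumption, picked to recover his round "million lines" number; Myers, of course, disputes the premise rather than the arithmetic:

```python
base_pairs = 3_000_000_000   # ~3 billion base pairs in the human genome
bits = base_pairs * 2        # 2 bits per base pair (4 possible bases: A, C, G, T)
total_bytes = bits // 8      # 750,000,000 bytes -- Kurzweil rounds to ~800 million

compressed = 50_000_000      # Kurzweil's estimate after lossless compression
brain_share = compressed // 2        # "about half of that is the brain": 25 MB

bytes_per_line = 25          # assumed average line length of source code
lines_of_code = brain_share // bytes_per_line   # 1,000,000 lines
```
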

Now enter PZ Myers, prominent atheism advocate (I like to think of him as “Dawkins’ Bulldog”, though I’m not sure Dawkins really needs a bulldog in the way that Darwin did) and vigorous debunker of fringe science. Broad claims in the Kurzweil vein are like a red rag to Myers, especially on his home turf of genetic biology, and he’s not afraid of mixing in a little ad hominem disparagement with his rejoinders, either:

Kurzweil knows nothing about how the brain works. Its design is not encoded in the genome: what’s in the genome is a collection of molecular tools wrapped up in bits of conditional logic, the regulatory part of the genome, that makes cells responsive to interactions with a complex environment. The brain unfolds during development, by means of essential cell:cell interactions, of which we understand only a tiny fraction. The end result is a brain that is much, much more than simply the sum of the nucleotides that encode a few thousand proteins. He has to simulate all of development from his codebase in order to generate a brain simulator, and he isn’t even aware of the magnitude of that problem.

[…]

To simplify it so a computer science guy can get it, Kurzweil has everything completely wrong. The genome is not the program; it’s the data. The program is the ontogeny of the organism, which is an emergent property of interactions between the regulatory components of the genome and the environment, which uses that data to build species-specific properties of the organism. He doesn’t even comprehend the nature of the problem, and here he is pontificating on magic solutions completely free of facts and reason.

Now, I’m not taking sides here*; I don’t know enough computer science or evolutionary biology to cut into either interpretation. But a high-minded slapfight like this is always of interest, because it highlights just how seriously some very intelligent people take the issue. Kurzweil has more than a tinge of the evangelist about him, which is (I suspect) a large part of what bothers Myers about him, but there’s obviously something powerful about the idea (the meme?) of transhumanism/singularitarianism that he feels makes it worth fighting.

Ideas that get people arguing are important ideas. I consider myself a fellow traveller of transhumanism for this very reason; the ways we imagine tomorrow say a lot about where we are today, and vice versa. There’s a lot to learn by listening to both sides, I think.

[ * Yeah, yeah, I know, I’ve got marks on my ass from sitting on the fence. That’s just how I roll, baby; you want clenched-fist advocacy of anything but the right to think for yourself, you’re gonna need to read a different blog. ]

Arguments against life extension

Via Michael Anissimov, here’s a spectacularly empty diatribe against “deathhackers” by TechCrunch‘s Paul Carr. Carr objects to the idea of radical life extension as advocated by transhumanists, which is fair enough, but as written here most of his objections seem to boil down to personal distaste toward those advocates. Ad hominem ahoy!

… go to any Silicon Valley party right now and you’ll find a scrawny huddle in the corner discussing the science of living forever…

[…]

Apart from rabid over-achieving, there’s another thing that unites all life-extension obsessives: they look like death. “Medievally thin and pale,” is how the Times (quoting Weiner’s book) describes [Aubrey] de Grey.

This just in: unattractive and/or geeky people interested in living longer. Film at eleven!

Amongst the ire and jealousy of “rabid over-achievers” (and a little bit of self-promotion, natch), Carr does have a point to make, namely that death is our greatest motivator:

What if the real reason these entrepreneurs have achieved so much is precisely because – more so than other mortals – they were born with a keen understanding they are working to a fixed (if unknown) deadline? It’s that fear of death that makes them succeed, not the other way around.

Regular readers will remember that this is an idea I have a great deal of personal sympathy with, though I’ve never suggested anyone else should be prevented from chasing immortality just because I’m not sure I’d want it for myself.

Anissimov also links to a rebuttal of Carr by Greg Fish, usually more of a gadfly against transhumanist tropes than a defender thereof:

Instead of telling entrepreneurs and angel investors who have a very real passion for science and technology to embrace their mortality, Carr should be encouraging them to pursue their lofty goals. Yes, ask them pointed questions, ask them to show you their thought process, and try to steer them from fantastic, pseudoscientific, or wishful thinking, but encourage their ideas because these people can take us to new places with the right support, motivation and a guiding hand from biologists, chemists, physicists, and hands-on researchers. No one has ever made a breakthrough by refusing to aim above mediocrity, and that’s why we shouldn’t be trying to promote the gospel of “eh, it’s good enough,” among those who love to think outside the box.

Let the dreamers dream, in other words; I’m down with that, pretty much.

But there’s a bit of serendipity here, as life extension is very much on my mind at the moment. I’ve been reading Getting To Know You, David Marusek’s first short story collection; if you’ve read Marusek in the short or long form, you’ll be aware of his imagined future where radical life extension is ubiquitous among the privileged, and where a servitor underclass of clones and artificial intelligences works for them to prop up the “boutique economies” that make such a world possible. The story “Cabbages and Kale, or: How We Downsized North America” neatly captures my own personal concern about life extension technology, namely that – like almost all technologies, at least at first – it will be the exclusive province of those who are already rich, politically powerful and long-lived.

By the by, this also dovetails with the Matt Ridley essay I linked to earlier today, in that Marusek’s answer to the economic problems of a functionally immortal power class is to have them restrict reproduction in order to keep the population at a level where the system still works: a voluntary stagnation, a rigged equilibrium. But the point I’m making here is this: technologies are never inherently bad, but the way the world works tends to gift their benefits to those who have the least need of them. We shouldn’t fear life extension, but fearing life extension held exclusively in the hands of the political classes is a very wise move indeed.

[ I very heartily recommend Marusek’s short stories and novels to Futurismic readers; not only is he a writer of great craft and skill, but he deals with the complex sociopolitical outcomes of technological ideas like life extension and nanotechnology which are, at present, little more than attractive possibilities lurking beyond the horizon. ]