Tag Archives: transhumanism

Six reasons why mind uploading (probably) ain’t gonna happen

Via R U Sirius at the recently-resuscitated H+ Magazine, SyFy*’s Dvice blog lists six reasons that the uploading of human minds in the classic sf-nal civilisational Singularity scenario is extremely unlikely to become a reality.

The sixth is the one most likely to make a good story in its own right, because it’s the one that deals with human nature more than biological or technological constraints:

6. Who Gets Uploaded?

And you thought the lines for iPhone 4 were bad… even if all the above problems were magically solved, there’s still human nature to contend with. War and conflict may not technically be hardwired into our species, but the past 10,000 years of human history are hard to argue with. Unless there’s a way to instantly “teleport” the entirety of humanity into the cloud simultaneously, you can bet your digitized ass that there’ll be fighting over who goes first (or doesn’t, or shouldn’t), how long it takes, what it costs, who pays, how long they get to stay there… you know, all the standard crap that humans have been busting each other’s chops about ever since we could stand upright. I’ll opt out, thanks.

Remember that store worker who was fatally squashed in a Black Friday sales scrum at WalMart back in 2008? Like that, only featuring the whole species. I consider myself something of a transhumanist fellow-traveller, but it’s this end of the problem spectrum (much more so than the technological hurdles) that nudges me ever closer to skepticism.

[ * Every time I read that “revamped” name, it looks more stupid than it did before. ]

The transhuman victory is assured!

Well, possibly not – but Michael Anissimov has a post provocatively titled “Transhumanism Has Already Won”, which argues that most of the central tenets of the movement (if such a fractious meme can fairly be called a movement at all) are already accepted – and in some cases actively desired – by a large portion of the world’s population:

Billions of people around the world would love to upgrade their bodies, extend their youth, and amplify their powers of perception, thought, and action with the assistance of safe and tested technologies. The urge to be something more, to go beyond, is the norm rather than the exception.

[…]

The mainstream has embraced transhumanism. A movie about using a brain-computer interface to become what is essentially a transhuman being, Avatar, is the highest-grossing box office hit of all time, pulling in $2.7 billion. This movie was made with hard-core science fiction enthusiasts in mind. About them, James Cameron said, “If I can just get ‘em in the damn theater, the film will act on them in the way it’s supposed to, in terms of taking them on an amazing journey and giving them this rich emotional experience.” A solid SL2 film, becoming the world’s #1 film of all time? It would be hard for the world to give transhumanism a firmer endorsement than that.

I’m not sure how solid an argument the success of an h+-themed movie is in this context, to be honest – though I’ll concede that entertainment media are powerful vectors for new ideas to enter mainstream discourse, even if their portrayal is essentially superficial.

But there’s more, which sees Anissimov explicitly repudiating the elitist devil-take-the-hindmost attitude that tends to be assumed (sometimes erroneously) as the transhumanist default:

When people write an article about a problem, it’s usually because they have a ready-made answer they want to sell you. But sometimes the universe just gives us a problem and it has no special obligation to give us an answer. Transhumanity is like that. Whatever answer we come up with may be a little messy, but we have to come up with something, because otherwise the future will play out according to considerations besides global security and harmony. Power asymmetry is not an optional part of the future — it is a mandatory one. There will be entities way more powerful than human. Where will they be born? How will they be made? These questions are not entirely hypothetical — the seeds of their creation are among us now. We have to decide how we want to give birth to the next influx of intelligent species on Earth. The birth of transhumanity will mean the death of humanity, if we are not careful.

Will it be possible for us to keep a sufficiently watchful eye on the privileged and powerful in order to stop them leaving us in the wake of their ascension? Difficult or not (and assuming transhumanism isn’t an unattainable omega point after all, which is another debate entirely), it’s got to be a better option than trying to ban or legislate around the problem.

Who wants to live forever?

OK, here’s a deceptively simple debate to start the week off with – if physical immortality were available to you, would you take it? Arguing the case against is Annalee “io9” Newitz, and here’s Jason Stoddard playing earnest devil’s advocate for the longevity lobby.

I have no ethical issues with human immortality, but I’m not sure it appeals to me personally; I’ve long believed that mortality is the only thing that has truly motivated humans to create things greater than themselves, and as such I kind of like the knowledge that I’ve only got so long to get stuff done. That said, every year that passes sees my faith in that idea becoming more shaky…

So, what’s your choice – to go gentle into that good night, or to burn the candle at both ends forever?

Artificial Flight – Dresden Codak spoofs AI skepticism

Dresden Codak is one of my favourite webcomics; its creator, Aaron Diaz, is a staunch transhumanist, but rather than soapboxing directly he embeds his philosophical interests into his creative work. This occasionally spills over into brief satirical ripostes against anti-transhumanist naysayers; long-term followers may remember 2007’s “Enough is Enough – A Thinking Ape’s Critique of Trans-Simianism”, which (justifiably) did the rounds of the transhumanist, science fictional and geek-affiliated blogo-wotsit at the time.

Well, here’s another one, “Artificial Flight and Other Myths – a reasoned examination of A.F. by top birds”, which again takes the rhetorical gambit of reframing the AI argument outside of the human context:

We can start with a loose definition of flight.  While no two bird scientists or philosophers can agree on the specifics, there is still a common, intuitive understanding of what true flight is: powered, feathered locomotion through the air through the use of flapping wings.  While other flight-like phenomena exist in nature (via bats and insects), no bird with even a reasonable education would consider these creatures true fliers, as they lack one or more key elements. And, while some birds are unfortunately born handicapped (penguins, ostriches, etc.), they still possess the (albeit undeveloped) gene for flight, and it is indeed flight that defines the modern bird.

This is flight in the natural world, the product of millions of years of evolution, and not a phenomenon easily replicated.  Current A.F. is limited to unpowered gliding; a technical marvel, but nowhere near the sophistication of a bird.  Gliding simplifies our lives, and no bird (including myself) would discourage advancing this field, but it is a far cry from synthesizing the millions of cells within the wing alone to achieve Strong A.F. Strong A.F., as it is defined by researchers, is any artificial flier that is capable of passing the Tern Test (developed by A.F. pioneer Alan Tern), which involves convincing an average bird that the artificial flier is in fact a flying bird.

Diaz highlights the problem with anthropomorphic thinking as applied to definitions of intelligence, which is a common refrain from artificial intelligence advocates. Serendipitously enough, yesterday also saw Michael Anissimov point to a Singularity Institute document titled “Beyond Anthropomorphism”, which may be of interest if you want the argument fleshed out for you:

Anthropomorphic (“human-shaped”) thinking is the curse of futurists.  One of the continuing themes running through Creating Friendly AI is the attempt to track down specific features of human thought that are solely the property of humans rather than minds in general, especially if these features have, historically, been mistakenly attributed to AIs.

Anthropomorphic thinking is not just the result of context-insensitive generalization.  Anthropomorphism is the result of certain automatic assumptions that humans are evolved to make when dealing with other minds.  These built-in instincts will only produce accurate results for human minds; but since humans were the only intelligent beings present in the ancestral environment, our instincts sadly have no built-in delimiters.

Many personal philosophies, having been constructed in the presence of uniquely human instincts and emotions, reinforce the built-in brainware with conscious reasoning.  This sometimes leads to difficulty in reasoning about AIs; someone who believes that romantic love is the meaning of life will immediately come up with all sorts of reasons why all AIs will necessarily exhibit romantic love as well.

It strikes me that the yes-or-no question of whether strong general artificial intelligence is possible is one of a very special type, namely a question which can only be definitively answered by achieving the “yes” result. (I’m pretty sure there’s a distinct rhetorical term for that sort of question, but my minimal bootstrapped philosophy education fails to provide it to me at the moment; feel free to help out in the comments.) In other words, the only way we’ll truly know whether we can build a GAI is by building it; until then, it’s all just dialogue.

IBM brain simulations reach cat equivalency

You can’t so much as turn sideways without stumbling over this story, especially in the transhumanist and Singularitarian neighbourhoods of the web, and with good reason. So let’s just cut straight to the meat of it:

Scientists, at IBM Research – Almaden, in collaboration with colleagues from Lawrence Berkeley National Lab, have performed the first near real-time cortical simulation of the brain that exceeds the scale of a cat cortex and contains 1 billion spiking neurons and 10 trillion individual learning synapses. [via KurzweilAI]

(And I’ll tell you, much as I love the web and the young crazy companies that throng through it, no one writes a press release like the guys from IBM.)

Additionally, in collaboration with researchers from Stanford University, IBM scientists have developed an algorithm that exploits the Blue Gene® supercomputing architecture in order to noninvasively measure and map the connections between all cortical and sub-cortical locations within the human brain using magnetic resonance diffusion weighted imaging. Mapping the wiring diagram of the brain is crucial to untangling its vast communication network and understanding how it represents and processes information.

These advancements will provide a unique workbench for exploring the computational dynamics of the brain, and stand to move the team closer to its goal of building a compact, low-power synaptronic chip using nanotechnology and advances in phase change memory and magnetic tunnel junctions. The team’s work stands to break the mold of conventional von Neumann computing, in order to meet the system requirements of the instrumented and interconnected world of tomorrow.

All the pomp and majesty of the best publicity material, but still somehow stately, dignified. You’ll have to forgive me, but I see a whole lot of press releases on a daily basis, and when I see one this well crafted, I just have to sit back and admire it (or envy it) for a moment.

But delivery systems aside, what’s the story here? Basically, IBM have built a computer that simulates the complexity and interconnection of a cat’s brain, which is significantly more complex than previous neuro-cortical simulations. Why does that matter? Well, because for those who theorise that the human mind is an entirely emergent property of the brain (that there’s no such thing as a soul or spirit, in other words) the ability to simulate the hardware that the mind runs on should provide us the ability to simulate the mind itself. And once we can simulate it, we can probably record, transfer, tweak and tamper with it. Think human-level artificial intelligence; think Technological Singularity predicated on a point where intelligent machines become intelligent enough to redesign themselves. Think brain uploading, Moravec cyborg bodies, a panoply of simulated virtual universes… think hard and crazy science fictional stuff, in other words.
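For the curious, here’s a toy sketch of what a single “spiking neuron” of the sort IBM’s press release mentions actually does – a leaky integrate-and-fire model, the textbook unit that such simulations scale up by the billion. To be clear, this is a generic illustration, not IBM’s actual model or parameters; all the numbers below are made up for demonstration.

```python
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_threshold=1.0, v_reset=0.0):
    """Toy leaky integrate-and-fire neuron.

    The membrane voltage decays toward its resting value ("leaky"),
    accumulates incoming current ("integrate"), and emits a spike
    when it crosses a threshold ("fire"), after which it resets.
    Returns the voltage trace and the list of spike times.
    """
    v = v_rest
    voltages, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Leak toward rest, plus drive from the input current.
        v += (dt / tau) * (v_rest - v) + i_in * dt
        if v >= v_threshold:   # threshold crossed: fire a spike
            spikes.append(t)
            v = v_reset        # reset the membrane after firing
        voltages.append(v)
    return voltages, spikes

# Constant drive for 200 time steps produces regular, periodic spiking.
voltages, spikes = simulate_lif([0.06] * 200)
print(spikes)
```

One neuron like this is trivial; the engineering feat in the IBM work is wiring a billion of them together through ten trillion learning synapses and keeping the whole thing running at near real-time speed.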

Of course, there’s no certainty that any of these things will result from IBM’s simulated cat brain, but it’s another proof-of-concept step along that road. Now all they have to do is keep the BlueGene computer from chasing dust motes in sunbeams and taking a nap every time they want to run some tests. [image by avatar-1]