Tag Archives: artificial-intelligence

IBM brain simulations reach cat equivalency

You can’t so much as turn sideways without stumbling over this story, especially in the transhumanist and Singularitarian neighbourhoods of the web, and with good reason. So let’s just cut straight to the meat of it:

Scientists, at IBM Research – Almaden, in collaboration with colleagues from Lawrence Berkeley National Lab, have performed the first near real-time cortical simulation of the brain that exceeds the scale of a cat cortex and contains 1 billion spiking neurons and 10 trillion individual learning synapses. [via KurzweilAI]

(And I’ll tell you, much as I love the web and the young crazy companies that throng through it, no one writes a press release like the guys from IBM.)

Additionally, in collaboration with researchers from Stanford University, IBM scientists have developed an algorithm that exploits the Blue Gene® supercomputing architecture in order to noninvasively measure and map the connections between all cortical and sub-cortical locations within the human brain using magnetic resonance diffusion weighted imaging. Mapping the wiring diagram of the brain is crucial to untangling its vast communication network and understanding how it represents and processes information.

These advancements will provide a unique workbench for exploring the computational dynamics of the brain, and stand to move the team closer to its goal of building a compact, low-power synaptronic chip using nanotechnology and advances in phase change memory and magnetic tunnel junctions. The team’s work stands to break the mold of conventional von Neumann computing, in order to meet the system requirements of the instrumented and interconnected world of tomorrow.

All the pomp and majesty of the best publicity material, but still somehow stately, dignified. You’ll have to forgive me, but I see a whole lot of press releases on a daily basis, and when I see one this well crafted, I just have to sit back and admire it (or envy it) for a moment.

But delivery systems aside, what’s the story here? Basically, IBM have built a computer that simulates the complexity and interconnection of a cat’s brain, which is significantly more complex than previous neuro-cortical simulations. Why does that matter? Well, because for those who theorise that the human mind is an entirely emergent property of the brain (that there’s no such thing as a soul or spirit, in other words), the ability to simulate the hardware that the mind runs on should provide us the ability to simulate the mind itself. And once we can simulate it, we can probably record, transfer, tweak and tamper with it. Think human-level artificial intelligence; think Technological Singularity predicated on a point where intelligent machines become intelligent enough to redesign themselves. Think brain uploading, Moravec cyborg bodies, a panoply of simulated virtual universes… think hard and crazy science fictional stuff, in other words.
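To give a flavour of what “spiking neurons” means in practice, here’s a toy leaky integrate-and-fire loop. This is a minimal sketch for illustration only; it has nothing like the scale, fidelity or learning synapses of IBM’s actual simulator, and all the parameters are invented.

```python
# Toy leaky integrate-and-fire (LIF) neuron: membrane voltage leaks toward
# rest, integrates input current, and fires a spike on crossing a threshold.
def simulate_lif(input_current, dt=1.0, tau=20.0, v_rest=0.0,
                 v_thresh=1.0, v_reset=0.0):
    """Simulate one LIF neuron; return the voltage trace and spike times."""
    v = v_rest
    trace, spikes = [], []
    for t, i_in in enumerate(input_current):
        # Leak toward resting potential while integrating the input drive.
        v += dt * (-(v - v_rest) + i_in) / tau
        if v >= v_thresh:       # threshold crossed: emit a spike...
            spikes.append(t)
            v = v_reset         # ...and reset the membrane voltage.
        trace.append(v)
    return trace, spikes

trace, spikes = simulate_lif([1.5] * 200)
print(len(spikes))  # constant supra-threshold drive yields a regular spike train
```

A real cortical simulation runs a billion of these (with far richer dynamics) wired together by trillions of plastic synapses, which is where the supercomputer comes in.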

Of course, there’s no certainty that any of these things will result from IBM’s simulated cat brain, but it’s another proof-of-concept step along that road. Now all they have to do is keep the BlueGene computer from chasing dust motes in sunbeams and taking a nap every time they want to run some tests. [image by avatar-1]

NEW FICTION: SPIDER’S MOON by Lavie Tidhar

Almost every short fiction venue worth its salt will have some sort of guidelines as to what sort of material they’re looking for… but I suspect almost every editor will confess that, when the story is good enough, the guidelines can flex a little to allow it through.

That’s exactly what happened with “Spider’s Moon” by globe-trotting star-ascendant Lavie Tidhar, which is set in a slightly deeper future than we usually deal with here at Futurismic. But its core concerns are closer to home, and it’s a strong tale well told – so we’re proud to be publishing it for you to read. Enjoy!

Spider’s Moon

By Lavie Tidhar

Night, a full spider’s moon in the sky; hundreds of lanterns hung along the river, and the smell of saffron and garlic and dried lemongrass filled the air; a warm night, candles burning on street corners with offerings of rum and cooked rice, the hum of electric motorbikes, the murmur of a sugarcane machine as it crushed stalks to make the juice.

Ice tinkling in glasses; on small plastic chairs people sat by the river, drinking, talking. A hushed reverie, yet festive. Hoi An under the spider’s moon, French backpackers singing, badly but with enthusiasm, while one of their number played a guitar.

Save me from the raven and the frog, and show me safely to the river’s mouth, O Naga, he thought. Frogs had never been his favourites. Green and slimy, and always too loud. Like rats, almost. Like green, belligerent rats.

Singularity lacking in motivation

MIT neuroengineer Edward Boyden has been speculating as to whether the singularity requires the machine-equivalent of what humans call “motivation”:

I think that focusing solely on intelligence augmentation as the driver of the future is leaving out a critical part of the analysis–namely, the changes in motivation that might arise as intelligence amplifies. Call it the need for “machine leadership skills” or “machine philosophy”–without it, such a feedback loop might quickly sputter out.

We all know that intelligence, as commonly defined, isn’t enough to impact the world all by itself. The ability to pursue a goal doggedly against obstacles, ignoring the grimness of reality (sometimes even to the point of delusion–i.e., against intelligence), is also important.

This brings us back to another Larry Niven trope. In the Known Space series, the Pak Protector species (sans spoilers) is superintelligent but utterly dedicated to the goal of protecting its young. Protectors are incapable of long-term co-operation because each will seek advantage only for its own gene-line, and so the Pak homeworld is in a state of permanent warfare.

This ties in with artificial intelligence: what good is being superintelligent if you aren’t motivated to do anything, or if you are motivated solely towards one specific task? This highlights one of the basic problems with rationality itself: Humean instrumental rationality implies that our intellect is always the slave of the passions, meaning that we use our intelligence to achieve our desires, which are predetermined and beyond our control.

But as economist Chris Dillow points out in this review of the book Animal Spirits, irrational behaviour can be valuable. Artists, inventors, entrepreneurs, and writers may create things with little rational hope of reward but – thankfully for the rest of society – they do it anyway.

And what if it turns out that any prospective superintelligent AIs wake up and work out that it isn’t worth ever trying to do anything, ever?

[via Slashdot, from Technology Review][image from spaceshipbeebe on flickr]

The dangerous dream of artificial intelligence

There are plenty of artificial intelligence skeptics out there, but few of them would go so far as to say that AI is a dangerous dream leading us down the road to dystopia. One such dissenting voice is former AI evangelist and robotics boffin Noel Sharkey, who pops up at New Scientist to explain his viewpoint:

It is my contention that AI, and particularly robotics, exploits natural human zoomorphism. We want robots to appear like humans or animals, and this is assisted by cultural myths about AI and a willing suspension of disbelief. The old automata makers, going back as far as Hero of Alexandria, who made the first programmable robot in AD 60, saw their work as part of natural magic – the use of trick and illusion to make us believe their machines were alive. Modern robotics preserves this tradition with machines that can recognise emotion and manipulate silicone faces to show empathy. There are AI language programs that search databases to find conversationally appropriate sentences. If AI workers would accept the trickster role and be honest about it, we might progress a lot quicker.

NS: And you believe that there are dangers if we fool ourselves into believing the AI myth…

It is likely to accelerate our progress towards a dystopian world in which wars, policing and care of the vulnerable are carried out by technological artefacts that have no possibility of empathy, compassion or understanding.

Now that’s some proper science fictional thinking… although I’m more inclined to a middle ground wherein AI – should we ever achieve it, of course – comes with benefits as well as drawbacks. As always, it’s down to us to determine which way the double-edged blade of technology cuts. [image by frumbert]
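Sharkey’s “AI language programs that search databases to find conversationally appropriate sentences” can be sketched in a few lines. This toy picks whichever canned reply shares the most words with the input; the corpus and scoring rule are invented here purely to show how little “understanding” such a trick requires.

```python
# Toy retrieval "chatbot" of the kind Sharkey describes: no comprehension,
# just returning the stored sentence with the greatest keyword overlap.
RESPONSES = [
    "Robots are fascinating machines.",
    "The weather has been lovely lately.",
    "I enjoy talking about artificial intelligence.",
    "Cats make wonderful companions.",
]

def reply(user_input):
    words = set(user_input.lower().split())
    # Score each canned response by how many words it shares with the input.
    def overlap(sentence):
        return len(words & set(sentence.lower().rstrip('.').split()))
    return max(RESPONSES, key=overlap)

print(reply("tell me about artificial intelligence"))
# → "I enjoy talking about artificial intelligence."
```

The output can look conversationally apt, which is exactly the illusion Sharkey argues the field should be honest about.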

The ethics of autonomous devices

The Royal Academy of Engineering in the UK says that the imminent rise of autonomous and semi-autonomous cars, robotic surgeons, planes, war machines, software agents, and public transport systems raises important ethical and legal questions:

Professor Stewart and report co-author Chris Elliott remain convinced that autonomous systems will prove, on average, to be better surgeons and better lorry drivers than humans are.

But when they are not, it could lead to a legal morass, they said.

“If a robot surgeon is actually better than a human one, most times you’re going to be better off with a robot surgeon,” Dr Elliott said. “But occasionally it might do something that a human being would never be so stupid as to do.”

Professor Stewart concluded: “It is fundamentally a big issue that we think the public ought to think through before we start trying to imprison a truck.”

And if and when true AIs, or human-level artificial general intelligences, show up, will they commit crimes? And if so, who will be responsible?

[from the BBC][image from Wonderlane on flickr]