Tag Archives: AI

Artificial Flight – Dresden Codak spoofs AI skepticism

Dresden Codak is one of my favourite webcomics; its creator, Aaron Diaz, is a staunch transhumanist, but rather than soapboxing directly he embeds his philosophical interests into his creative work. This occasionally spills over into brief satirical ripostes against anti-transhumanist naysayers; long-term followers may remember 2007’s “Enough is Enough – A Thinking Ape’s Critique of Trans-Simianism”, which (justifiably) did the rounds of the transhumanist, science fictional and geek-affiliated blogo-wotsit at the time.

Well, here’s another one, “Artificial Flight and Other Myths – a reasoned examination of A.F. by top birds”, which again takes the rhetorical gambit of reframing the AI argument outside of the human context:

We can start with a loose definition of flight.  While no two bird scientists or philosophers can agree on the specifics, there is still a common, intuitive understanding of what true flight is: powered, feathered locomotion through the air through the use of flapping wings.  While other flight-like phenomena exist in nature (via bats and insects), no bird with even a reasonable education would consider these creatures true fliers, as they lack one or more key elements. And, while some birds are unfortunately born handicapped (penguins, ostriches, etc.), they still possess the (albeit undeveloped) gene for flight, and it is indeed flight that defines the modern bird.

This is flight in the natural world, the product of millions of years of evolution, and not a phenomenon easily replicated.  Current A.F. is limited to unpowered gliding; a technical marvel, but nowhere near the sophistication of a bird.  Gliding simplifies our lives, and no bird (including myself) would discourage advancing this field, but it is a far cry from synthesizing the millions of cells within the wing alone to achieve Strong A.F. Strong A.F., as it is defined by researchers, is any artificial flier that is capable of passing the Tern Test (developed by A.F. pioneer Alan Tern), which involves convincing an average bird that the artificial flier is in fact a flying bird.

Diaz highlights the problem with anthropomorphic thinking as applied to definitions of intelligence, which is a common refrain from artificial intelligence advocates. Serendipitously enough, yesterday also saw Michael Anissimov point to a Singularity Institute document titled “Beyond Anthropomorphism”, which may be of interest if you want the argument fleshed out for you:

Anthropomorphic (“human-shaped”) thinking is the curse of futurists.  One of the continuing themes running through Creating Friendly AI is the attempt to track down specific features of human thought that are solely the property of humans rather than minds in general, especially if these features have, historically, been mistakenly attributed to AIs.

Anthropomorphic thinking is not just the result of context-insensitive generalization.  Anthropomorphism is the result of certain automatic assumptions that humans are evolved to make when dealing with other minds.  These built-in instincts will only produce accurate results for human minds; but since humans were the only intelligent beings present in the ancestral environment, our instincts sadly have no built-in delimiters.

Many personal philosophies, having been constructed in the presence of uniquely human instincts and emotions, reinforce the built-in brainware with conscious reasoning.  This sometimes leads to difficulty in reasoning about AIs; someone who believes that romantic love is the meaning of life will immediately come up with all sorts of reasons why all AIs will necessarily exhibit romantic love as well.

It strikes me that the yes-or-no question of whether strong general artificial intelligence is possible is one of a very special type, namely a question which can only be definitively answered by achieving the “yes” result. (I’m pretty sure there’s a distinct rhetorical term for that sort of question, but my minimal bootstrapped philosophy education fails to provide it to me at the moment; feel free to help out in the comments.) In other words, the only way we’ll truly know whether we can build a GAI is by building it; until then, it’s all just dialogue.

Attention, futurist gamblers: long odds on Artificial General Intelligence

Pop-transhumanist organ H+ Magazine assigned a handful of writers to quiz AI experts at last year’s Artificial General Intelligence Conference, in order to discover how long they expect we’ll have to wait before we achieve human-equivalent intelligence in machines, what sort of level AGI will peak out at, and what AGI systems will look and/or act like, should they ever come into being.

It’s not a huge sample, to be honest – 21 respondents, of whom all but four are actively engaged in AI-related research. But then AGI isn’t a vastly populous field of endeavour, and who better to ask about its future than the people in the trenches?

The diagram below shows a plot of their estimated arrival dates for a selection of AGI milestones:

[image: AGI milestone estimates]

The gap in the middle is interesting; it implies that the basic split is between those who see AGI happening in the fairly near future, and those who see it never happening at all. Pop on over to the article for more analysis.

The supplementary questions are more interesting, at least to me, because they involve sf-style speculation. For instance:

… we focused on the “Turing test” milestone specifically, and we asked the experts to think about three possible scenarios for the development of human-level AGI: if the first AGI that can pass the Turing test is created by an open source project, the United States military, or a private company focused on commercial profit. For each of these three scenarios, we asked them to estimate the probability of a negative-to-humanity outcome if an AGI passes the Turing test. Here the opinions diverged wildly. Four experts estimated a greater than 60% chance of a negative outcome, regardless of the development scenario. Only four experts gave the same estimate for all three development scenarios; the rest of the experts reported different estimates of which development scenarios were more likely to bring a negative outcome. Several experts were more concerned about the risk from AGI itself, whereas others were more concerned that humans who controlled it could misuse AGI.

If you follow the transhumanist/AGI blogosphere at all, you’ll know that the friendly/unfriendly debate is one of the more persistent bones of contention; see Michael Anissimov’s recent post for some of the common arguments against the likelihood of friendly behaviour from superhuman AGIs, for instance. But even if we write off that omega point and consider less drastic achievements, AGI could be quite the grenade in the punchbowl:

Several experts noted potential impacts of AGI other than the catastrophic. One predicted “in thirty years, it is likely that virtually all the intellectual work that is done by trained human beings such as doctors, lawyers, scientists, or programmers, can be done by computers for pennies an hour. It is also likely that with AGI the cost of capable robots will drop, drastically decreasing the value of physical labor. Thus, AGI is likely to eliminate almost all of today’s decently paying jobs.” This would be disruptive, but not necessarily bad. Another expert thought that, “societies could accept and promote the idea that AGI is mankind’s greatest invention, providing great wealth, great health, and early access to a long and pleasant retirement for everyone.” Indeed, the experts’ comments suggested that the potential for this sort of positive outcome is a core motivator for much AGI research.

No surprise to see a positive (almost utopian) gloss on such predictions, given their sources; scientists need that optimism to propel them through the tedium of research… which means it’s down to the rest of us to think of the more mundane hazards and cultural impacts of AGI, should it ever arrive.

So here’s a starter for you: one thing that doesn’t crop up at all in that article is any discussion of AGIs as cult figureheads or full-blown religious leaders (by their own intent or otherwise). Given the fannish/cultish behaviour that software and hardware can provoke (Apple/Linux/AR evangelists, I’m looking at you), I’d say the social impact of even a relatively dim AGI is going to be a force to be reckoned with… and it comes with a built-in funding model, too.

Terminator-esque dystopias aside, how do you think Artificial General Intelligence will change the world, if at all?

NEW FICTION: SPIDER’S MOON by Lavie Tidhar

Almost every short fiction venue worth its salt will have some sort of guidelines as to what sort of material they’re looking for… but I suspect almost every editor will confess that, when the story is good enough, the guidelines can flex a little to allow it through.

That’s exactly what happened with “Spider’s Moon” by globe-trotting star-ascendant Lavie Tidhar, which is set in a slightly deeper future than we usually deal with here at Futurismic. But its core concerns are closer to home, and it’s a strong tale well told – so we’re proud to be publishing it for you to read. Enjoy!

Spider’s Moon

By Lavie Tidhar

Night, a full spider’s moon in the sky; hundreds of lanterns hung along the river, and the smell of saffron and garlic and dried lemongrass filled the air; a warm night, candles burning on street corners with offerings of rum and cooked rice, the hum of electric motorbikes, the murmur of a sugarcane machine as it crushed stalks to make the juice.

Ice tinkling in glasses; on small plastic chairs people sat by the river, drinking, talking. A hushed reverie, yet festive. Hoi An under the spider’s moon, French backpackers singing, badly but with enthusiasm, while one of their number played a guitar.

Save me from the raven and the frog, and show me safely to the river’s mouth, O Naga, he thought. Frogs had never been his favourites. Green and slimy, and always too loud. Like rats, almost. Like green, belligerent rats.

The dangerous dream of artificial intelligence

There are plenty of artificial intelligence skeptics out there, but few of them would go so far as to say that AI is a dangerous dream leading us down the road to dystopia. One such dissenting voice is former AI evangelist and robotics boffin Noel Sharkey, who pops up at New Scientist to explain his viewpoint:

It is my contention that AI, and particularly robotics, exploits natural human zoomorphism. We want robots to appear like humans or animals, and this is assisted by cultural myths about AI and a willing suspension of disbelief. The old automata makers, going back as far as Hero of Alexandria, who made the first programmable robot in AD 60, saw their work as part of natural magic – the use of trick and illusion to make us believe their machines were alive. Modern robotics preserves this tradition with machines that can recognise emotion and manipulate silicone faces to show empathy. There are AI language programs that search databases to find conversationally appropriate sentences. If AI workers would accept the trickster role and be honest about it, we might progress a lot quicker.

NS: And you believe that there are dangers if we fool ourselves into believing the AI myth…

It is likely to accelerate our progress towards a dystopian world in which wars, policing and care of the vulnerable are carried out by technological artefacts that have no possibility of empathy, compassion or understanding.

Now that’s some proper science fictional thinking… although I’m more inclined to a middle ground wherein AI – should we ever achieve it, of course – comes with benefits as well as bad sides. As always, it’s down to us to determine which way the double-edged blade of technology cuts. [image by frumbert]
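Incidentally, the database-search trick Sharkey mentions is easy to demystify. Here’s a minimal sketch (mine, not from the interview; the canned responses and the word-overlap matching rule are purely illustrative assumptions) of a program that simply returns whichever stored sentence shares the most words with your input:

```python
import re

# Purely illustrative canned-response "database"; a real system would be
# far larger, but the principle is the same.
RESPONSES = [
    "That sounds difficult. How does it make you feel?",
    "Tell me more about your work.",
    "Why do you think that is?",
    "Robots are just machines, after all.",
]

def tokens(text: str) -> set:
    """Lower-case the text and split it into a set of words."""
    return set(re.findall(r"[a-z']+", text.lower()))

def reply(user_input: str) -> str:
    """Return the canned sentence sharing the most words with the input."""
    words = tokens(user_input)
    return max(RESPONSES, key=lambda s: len(words & tokens(s)))

print(reply("I feel my work is difficult"))
# -> "That sounds difficult. How does it make you feel?"
```

A conversationally appropriate answer, with not a glimmer of comprehension behind it – which is rather Sharkey’s point.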

The ethics of autonomous devices

The Royal Academy of Engineering in the UK says that the imminent rise of autonomous and semi-autonomous cars, robotic surgeons, planes, war machines, software agents, and public transport systems raises important ethical and legal questions:

Professor Stewart and report co-author Chris Elliott remain convinced that autonomous systems will prove, on average, to be better surgeons and better lorry drivers than humans are.

But when they are not, it could lead to a legal morass, they said.

“If a robot surgeon is actually better than a human one, most times you’re going to be better off with a robot surgeon,” Dr Elliott said. “But occasionally it might do something that a human being would never be so stupid as to do.”

Professor Stewart concluded: “It is fundamentally a big issue that we think the public ought to think through before we start trying to imprison a truck.”

And if and when true AIs – human-level artificial general intelligences – show up, will they commit crimes, and if so, who will be responsible?

[from the BBC][image from Wonderlane on flickr]