Machines That Think

Welcome to the inaugural column of Today’s Tomorrows here at Futurismic. For any readers who missed my introduction, I’m going to explore a science topic a month, with both some evaluation of current news on the topic and a chat about how it has been dealt with in science fiction.

A few days ago, I was at a futurist technology conference called FiRE in San Diego, listening to new developments in multiple fields. The speed of change right now is amazing. We first flew in 1903. Today, we have a space program that ranges from commercial ventures like SpaceX to NASA flying by Saturn and operating remote-control rovers on Mars. In 1993, the Mosaic web browser gave the public easy access to the computing tools that created cyberspace; I'm reading information from all over the world in order to compose this article. My iPhone has more computing power than the room-sized computer I once used to support the City of Fullerton, CA.

As I listened to more talks on fast-changing topics (cloud computing, solar power, technology in Africa, etc.), it dawned on me that we may need artificial intelligences to help us keep track of technology and its consequences (good and bad). We have world-sized problems, many caused or magnified by all this change: the oceans are in serious trouble, the atmosphere is under stress, cultures are clashing in nasty ways, and a sneeze in one part of our global economy travels around the world in a moment. We may not be intelligent enough to solve these problems without help. So that's the long way round to this month's topic: Machines that Think.

It's a timely topic. As soon as I started researching, a Twitter search turned up multiple links and retweets of a New York Times article that a friend had also sent me: The Coming Superbrain, by John Markoff. Markoff also deals with both the science and the science fiction, and since I take the topic in different directions, you might want to read his piece as well as mine.

About twenty years ago, I worked at Douglas Aircraft on a project designed to capture the knowledge of older engineers by building an “expert system.” The software we used was called Knowledge Engineering Environment, or KEE, and training included flying to Silicon Valley and taking classes at a development shop where we were impressed that the staff had their own espresso machine. We never managed to usefully regurgitate any of the data we collected. Since then, however, both corporate perks and artificial intelligence research have expanded. A quick search for “artificial intelligence jobs” produces ads looking for experts in machine learning, data mining, and computational linguistics (teaching computers to handle human language). So what are all these people with AI titles doing? It's possible they are waking the world around us up in little bits: witness the expressive Einstein robot that Hanson Robotics and Calit2 brought to the conference, and the talking navigation systems showing up in our cars.

None of these tools are going to pass the Turing test. The closest is, oddly, not Einstein, but the nav system. I yell at mine now; thankfully, it doesn’t yell back. But this new one may speak without being spoken to, and do handy things like recommend alternate routes.

Einstein was still a bit buggy. Its software needed to be rebooted twice while I was trying to hold a conversation, and it didn't actually talk back; it simply emoted correctly – which is NOT a small task. Yay for that, Calit2 and Hanson Robotics. But it didn't look or talk or walk like a man. The futurist in me now believes AI will be with us in a thousand small ways – as taken for granted as always-on email or mobile music – long before we approach most of the fictional treatments of AI. It will be in our cars, our phones, our alarm clocks and our lawn-mowing robots long before it tries to run the world.

I may not be holding the majority view. There are a number of brilliant AI pundits out there, including Ray Kurzweil, Kevin Kelly, and Eliezer Yudkowsky, who believe that thinking machines will be the transformative agent of a technological singularity, or a fork in the future road that we cannot see around. While I think they may be right intellectually, I don't see research results that bear them out at this point. That's probably good, since we might end up with AIs that don't pay us much attention. Luckily, these same people are advocates of planning ahead to avoid that outcome. Here's a quote from a longer article by Eliezer: “[I] think that if we can handle the matter of AI at all, we should be able to create a mind that’s a far nicer person than anything evolution could have constructed.”

Yudkowsky's article is partly a response to Skynet, the nasty fictional AI at the heart of the latest Terminator movie. Somewhat like the treatment aliens get in film, AIs are generally presented as the bad boys with an Achilles heel we need to find so we can keep on living. There are more Matrix-style supercomputer AIs on celluloid than thoughtful films like A.I. Artificial Intelligence. On paper, though, they often get a better break. I have two novels to recommend that deal with the awakening of the internet into a higher order of being.

The first is Technogenesis, by Syne Mitchell. This underappreciated novel is now out of print, but the very internet which is its subject will locate you a copy. Part adventure novel, part discussion of the many sides of AI, this is a worthy read where most of the science is at least believable. Yes, the AI in Technogenesis is the antihero, but not in the same relentless way that films tend to treat this kind of antagonist.

For a current novel that's slightly easier to find, you can look on the new science fiction shelves for Robert J. Sawyer's WWW: Wake, a quick, entertaining, and thoroughly believable treatment of the internet awakening. Sawyer uses a young protagonist who learns to navigate the web through assistive devices, which proves to be an entry point for her to be the first to see the web in new ways. There is an excellent interview with Robert Sawyer on Tor.com.

Thanks for reading. I’d love to hear your thoughts, and see your recommendations for books and stories worth reading about AI.

#

Read more from Brenda Cooper at her website!

12 thoughts on “Machines That Think”

  1. Speaking as an AI student… I believe there are two major pitfalls. The first is assigning traits to AI from an anthropocentric standpoint – the Turing Test isn’t going to identify intelligent computers, or prove a useful benchmark. After all, having a human being sit and blip binary at a computer or scream into a modem would hardly prove or disprove human intelligence, would it? The second is turning it into religion – assuming that computational problems in AI will mysteriously resolve themselves when there’s enough computing power, or that solving computational problems will somehow lead to a better world. This is magical thinking, it’s no better than assuming one day a magical fairy will appear which will solve world hunger for us.

    One book representing non-anthropocentric thinking in AI is Kevin Warwick’s ‘In the Mind of the Machine’, and a rather interesting book about the challenges in developing AI is Steve Grand’s ‘Growing Up With Lucy’. Neither of these is science fiction; they’re both from researchers who are sometimes respected, sometimes not, but they’re people involved in the field, at least.

  2. Thanks for the long comment, Mark. I appreciate the pointers to non-fiction.
    And yes, any science turning into religion bothers me a bit – whether it’s AI or genetic engineering or cloning. These things are happening, and we need to be open to actual data, and to think and act on it. At least with my science hat on. The writer in me is happy to play more freely!

  3. Oh yeah! No, as a writer you definitely can. It’s just that AI is a very curious field – you can aim to do one thing, like model flocking behaviour, and inadvertently start developing some excellent solutions for very complex computational problems out of what almost feels like the aether (see the toy sketch below). This isn’t necessarily the objective truth of the thing, but it’s how it feels… and that’s probably where a lot of the problems I have with the perception of the field come into it. I’m a believer in strong AI – there’s some dogma in science for you – but I don’t think we’re going to get a ‘Singularity’ without putting more manpower into it than the Manhattan Project had, and possibly for decades.
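
    Something like this toy sketch is what I mean – three local rules per bird (separation, alignment, cohesion) and flock-like motion simply emerges; nobody programs the flock itself. It's Python, and every constant in it is an arbitrary illustration, not any canonical algorithm:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    pos = rng.uniform(0, 100, size=(30, 2))  # 30 boids on a 100x100 plane
    vel = rng.uniform(-1, 1, size=(30, 2))

    for step in range(200):
        new_vel = vel.copy()
        for i in range(len(pos)):
            offsets = pos - pos[i]
            dists = np.linalg.norm(offsets, axis=1)
            near = (dists > 0) & (dists < 15)  # react to local neighbours only
            if not near.any():
                continue
            cohesion = offsets[near].mean(axis=0) * 0.01          # drift toward the group
            alignment = (vel[near].mean(axis=0) - vel[i]) * 0.05  # match neighbours' heading
            separation = -offsets[dists < 5].sum(axis=0) * 0.05   # but don't crowd
            new_vel[i] += cohesion + alignment + separation
        vel = np.clip(new_vel, -2, 2)   # crude speed limit
        pos = (pos + vel) % 100         # wrap around the edges

    # Headings tend to converge even though no rule ever mentions "the flock":
    print("mean heading:", vel.mean(axis=0))
    ```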

    But as a _writer_… it opens up a lot of possibilities, as with any field that’s absolutely wide open. In AI we see possibilities without really understanding what the end point might be – this is very much the situation planetary exploration was in during the first half of the 20th century. However, like all your old-school John Carter of Mars type stories, these possibilities are often used to write stories dealing with mind-body dualism and spiritualist tendencies, with AI relegated to the role of some kind of literal deus ex machina, or inevitable natural Gaia-spirit, rather than stories about the technical stuff. Kind of like how cybernetics – my other passion – is usually pigeonholed in fiction as the ‘science of body alteration’, even though it’s basically a kind of systems/control theory.

    At some point I really need to sit down and write something about an AI that goes rogue in the fashion they do in reality – by doing something fantastic we don’t expect but still within their realm of capacity… like causing simple robots to run away from researchers and hide under desks. 😛

  4. Excellent article (as is The Coming Superbrain).

    Other required reading includes Ray Kurzweil (http://kurzweilai.net/) and Bill Joy’s “Why the future doesn’t need us” in Wired (http://www.wired.com/wired/archive/8.04/joy_pr.html).

    Unfortunately, as described in my own post, http://ramblingsonthefutureofhumanity.blogspot.com/2008/05/likely-coming-technological-singularity.html, I fall more into the Bill Joy camp.

    I’m afraid that we will not engineer the coming superbrain; rather, we will find it. Artificial intelligence is largely a software problem – an extremely difficult software problem that is currently far beyond our capabilities. But one aspect of AI is not only possible and real; it is an approach that does not require us to understand the intricacies of how it works: neural networks. Build a sufficiently large and fast neural network, teach it (by giving it goals and examples), and useful results happen without the programmer necessarily understanding why (see the sketch below). This is the approach that can build machines that think, and that can exceed the capabilities of their creators.
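
    A toy version of that loop, for the curious (Python; the layer sizes, learning rate, and iteration count are arbitrary illustrative choices). The network below learns XOR purely from examples – nothing in the code says how to compute XOR; the “program” ends up in the learned weights:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # The examples and the goal: XOR of two bits.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    # Random starting weights: 2 inputs -> 4 hidden units -> 1 output.
    W1 = rng.normal(size=(2, 4))
    W2 = rng.normal(size=(4, 1))

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    for _ in range(10_000):
        # Forward pass: the network's current guesses.
        h = sigmoid(X @ W1)
        out = sigmoid(h @ W2)
        # Backward pass: nudge the weights downhill on squared error.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out
        W1 -= 0.5 * X.T @ d_h

    # Typically ends up close to [[0], [1], [1], [0]] – learned, never programmed.
    print(out.round(2))
    ```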

    Luckily, the coming superbrain – machines that think – will need us. We possess the technological civilization that can support it, we have the hands that can build it and reproduce it.

    I foresee a future managed by machines that think, but realized by hands that build. We will be the enabling tools of our creations, the superbrains. We will likely thrive, much as our own living tools (livestock) and pets thrive by contributing to human health and happiness. But we won’t be in charge, and even should the Technological Singularity happen, I fear humanity will not participate.

    Unfortunately, it (or more likely, they) may not realize that it needs us.
    I don’t fear Skynet, as I don’t believe in malevolent AI, but I do fear we will be forgotten, and left behind. As Bill Joy suggests, the future doesn’t need us.

  5. Mark said: “The second is turning it into religion – assuming that computational problems in AI will mysteriously resolve themselves when there’s enough computing power, or that solving computational problems will somehow lead to a better world. This is magical thinking, it’s no better than assuming one day a magical fairy will appear which will solve world hunger for us.”

    Ah damn! 😉

    On a more general note, I wish people would move away from cuddly AI. They’re not going to be biological critters (at least not unless Peter Watts’s head-cheeses come to pass), riddled with hormones, feelings or even a sense of morality. Perhaps that is what makes them sinister, but I think they’d be rather like the down-to-Earth tools featured in Peter F. Hamilton’s Night’s Dawn trilogy (Still catching up. Yeah, I know I’m a slow reader!)

    Saying that, as a writer I give my fantasy a bit more leeway with a bunch of rather cantankerous and remote starship AIs. No cuddliness though 😉

  6. Hi Stephen,

    I agree that everyone should read Bill Joy’s article. I heard him talk the same month it came out, and here’s my response article to that: http://www.futurist.com/archives/society-and-culture/joy-and-the-future.
    I’ve also heard Ray Kurzweil talk. Fascinating man, and a very persuasive writer.
    Part of why I’m a futurist and spend so much time talking with people about the future (and writing columns like this) is that I agree with Bill Joy emphatically on the point that we need to talk about these technologies and understand them (hence my happiness with Mark’s comments as a student – he’s undoubtedly past me on the details of these fields). Even if we do talk about the future and technology, we’re going to get something different than we foresaw. But we have a chance to influence the outcomes of things we understand.

  7. Interesting article. I’m particularly taken by the need not to transform the notion of the singularity into some kind of quasi-religious transcendental act of ascension. While a certain level of this kind of gnosticism (qua Erik Davis) is unavoidable in the relationship between humans and technology, commenter Mark rightly identifies the rather lazy metaphorical tendency to equate AI with a kind of superlative version of the liberal humanist bounded cogito-subject, and the singularity with the moment or transition within which a massive, immortal version of this understanding of subjectivity/mind is enshrined in a new form. In other words, there are huge problems with how we define both intelligence and selfhood *in humans ourselves* before we even get to lazily extending the same rather broken and inadequate Enlightenment metaphors to technology; it’s as if AI becomes the culmination of the Enlightenment model of selfhood through its dominance over (as Mark points out) the dualistically-coded body in favour of the ascension of a quasi-spiritual and ‘pure’ mind. For a more cogent and persuasive discussion of this problem I highly recommend the outstanding book by N. Katherine Hayles, _How We Became Posthuman_ (1999).

    There are also troublingly persistent problems with the technologically deterministic flavour of these kinds of discourses, as if technology and progress just happen and we poor humans must always struggle to keep up. Again, as Mark says above, we do co-evolve with our technologies, but there is almost always a huge swathe of socio-cultural and economic shifts preceding any significant, widespread technological one.

  8. While I first fell in with the idea of AI through Robert Heinlein’s books, including and especially _The Moon is a Harsh Mistress_, I am currently of the belief that the coming AIs are going to be us: humans, with their minds expanded and upgraded via biological and technological means.

  9. An excellent essay: although I’ve always been enthusiastic about the possibility of a human-equivalent AGI I agree that if/when it does happen it will be as a result of many incremental and emergent developments of specialised machines, rather than a single, coherent, directed effort.

    Another point – we already have one kind of intelligence (human), so why bother trying to reinvent the wheel? There is too much focus in the media on simply trying to recreate human minds in computers.

    As Mark says above, there is an anthropocentric bias in popular thinking about AI – that AIs will just be scarier, nastier versions of human beings – when in truth the real value of AI lies in thinking in ways that are fundamentally different from how humans think.

    In that spirit of nonhuman evolutionary emergence I direct thee to this splendid story at Edge.

  10. Wow – I didn’t know George Dyson wrote fiction! Very cool. Thanks, Tom.
    And yes, I agree with you and Anthony – biological change may be a big part of it. In fact, a lot of research on newer computing media suggests biological tools there, too. Maybe a parallel path where humans become more like machines and machines become more like humans? And no, I’m not implying they’d meet in the middle and fall in love… :)

  11. I still haven’t really understood the deeply rooted fear of AIs.
    Think about an AI running amok. At first sight it is a pretty scary thought.
    Now imagine a brain in a vat, running amok. Less scary and quite a bit more ridiculous. One is immediately prone to exclaim: “It’s just a brain in a jar for god’s sake! What can a brain do?”
    But isn’t the same true for an AI? Basically it also is just a brain in a jar.

    Which brings me to the main point of this post: in many of the catastrophic visions of AI we aren’t really told about actuators. Actuators are the things that enable robots to act, as the name suggests. In a vat-like AI one might call them the output channels. They have to be deliberately wired up, and they set strict limits on what an AI can or cannot do.
    To take humans as an example: no matter how intelligent we are, we are limited to the things our bodies allow us to do. No matter how smart, we can’t suddenly start shooting laser beams from our eyeballs.
    In my experience that is often what happens with fictional AIs: they, metaphorically, start shooting laser beams from their eyeballs, only because they have become smart.
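
    In code, the point looks something like this (Python; the channel names and handlers are invented purely for illustration):

    ```python
    # The "mind" can decide whatever it likes; its effect on the world is
    # bounded by the actuators we choose to wire up.
    ALLOWED_ACTUATORS = {
        "speak": lambda text: print(f"AI says: {text}"),
        "display": lambda text: print(f"[screen] {text}"),
    }

    def actuate(channel: str, payload: str) -> None:
        # Anything not in the table simply isn't connected – there is no
        # "launch_missiles" channel to misuse, however smart the mind gets.
        handler = ALLOWED_ACTUATORS.get(channel)
        if handler is None:
            raise PermissionError(f"no actuator wired for {channel!r}")
        handler(payload)

    actuate("speak", "Hello, world")     # works
    # actuate("launch_missiles", "now")  # PermissionError: not wired up
    ```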

    In some other catastrophic AI scenarios someone (usually “the government”) always has the idea to hook up the AI more or less directly to ICBM launch facilities or other doomsday machines. “Let our AI express itself by firing nuclear missiles” probably is not a smart idea, but that conclusion is relatively apparent to everyone (but “the government”).

    So my main question would be this: Isn’t the whole friendly AI issue massively overblown? Are our fears of AIs maybe just fuelled by faulty preconceptions about what AIs can and cannot do?

  12. Hi Wollff,

    Yes, I think we are way too afraid. We, after all, have the plug we can pull. In the short term, anyway. I think it is that we humans feel fragile.

    We’re also used to the status quo – a world where we at least think we can out-think anything. And so smart AIs are scary.

    That said, working on friendly AI is not a bad idea. It’s kind of like getting off oil. Whether or not getting off oil will solve the global warming problems, there are problems it will solve.

    And thinking about the friendly-AI problem the way Eliezer does may act as a frame for our thinking, the way books like 1984 have framed our thinking about surveillance.
