Tag Archives: rationality

Rationalising the promises of transhumanism

My brain is broken today (real ale festivals: not quite so fun the day after!), so I’m devoid of my usual scintillating wit and astute commentary on the big questions of the day*. So instead, go read this lucid call-to-arms to H+ advocates and cheerleaders from Mike Treder, managing director of the Institute for Ethics and Emerging Technologies [via Queering The Singularity].

Don’t you remember all those promises of decades past, that our awesome technologies soon would enable us to eliminate illness, to banish poverty, end aging, and control the weather? That everyone then would enjoy a world of abundance and opportunity? Don’t you realize that for people who are paying attention, this is déjà vu all over again?

No, it doesn’t work that way. It never has and it never will. Reality intrudes.

So let’s not continue to make the mistakes of the past. Let’s try to be a little smarter this time.

Instead of promoting exciting visions of a utopian future, we could shift our focus to discussions of how to better prepare for uncertain change, how to create sustainable and resilient human societies, how to live in better harmony with each other and with the rest of the natural world around us.

Shorter version: quit grandstanding, start talking to people about realistic risks and rewards. Hearts and minds.

I have a lot of sympathy with Treder’s thinking, there, but I’m not particularly optimistic about the chances of bringing rational foresight to the general population; if dumb populist soundbitery were that easily conquered, Glenn Beck would be flipping burgers for a living. But hey, that’s all the more reason to keep fighting, AMIRITES?

[ * Sorry, no refunds. ]

Science fiction, religion and rationality

As if to mirror the wider (and louder) debate of science versus religion (which I remain convinced is a false dichotomy in some respects), the science fiction scene seems to be turning its attention to the deeper philosophical underpinnings of the genre. Here are a couple of stimulating viewpoints: first of all, Ian Sales argues for science fiction as the last bastion of the rational in literature.

When Geoff Ryman founded the Mundane SF Movement in 2002, I saw it only as a bunch of sf writers throwing the best toys out of science fiction’s pram. When Jetse de Vries called for sf to be optimistic in 2008, I didn’t really understand as, to me, the genre was neither pessimistic nor optimistic.

But it occurred to me recently that these two attempts to change how science fiction thinks about itself are themselves symptomatic of the erosion of the scientific worldview in the public arena. By excluding the more fanciful, the more fantastical, tropes in sf, Mundane SF forces writers and readers to engage with known science and a scientific view of the world. And optimistic fiction, by focusing on “possible roads to a better tomorrow”, acknowledges that situations exist now which require solutions. It forces us to look at those situations, to examine the world and not rely on a two-thousand-year-old fantasy novel, or the opinions of the scientifically-ignorant, for our worldview.

Meanwhile, over at Tor.com Teresa Jusino discusses the ways science fiction stories address the questions raised by religion:

What all of these stories do well with regard to religion (with the exception of The Phantom Menace, which did nothing well) is capture what I think the discussion should really be about. Most people who debate science vs. religion tend to ask the same boring question. Does God exist? Yawn. However, the question in all of these stories is never “Do these beings really exist?” The question is “What do we call them?” It’s never “Does this force actually exist?” It’s, “What do we call it?” Or “How do we treat it?” Or “How do we interact with it?” One of the many things that fascinates me about these stories is that the thing, whatever it is—a being, a force—always exists. Some choose to acknowledge it via gratitude, giving it a place of honor, organizing their lives around it and allowing it to feed them spiritually. Others simply use it as a thing, a tool, taking from it what they will when they will then calling it a day. But neither reaction negates the existence of the thing.

Good science fiction doesn’t concern itself with “Does God exist?”, but rather “What is God?” How do we define God?  Is God one being that created us? Is God a race of sentient alien beings that see all of time and space at once and is helping us evolve in ways we are too small to understand? Is God never-ending energy that is of itself? And why is it so important to human beings to define God at all?  To express gratitude to whatever God is? Why do people have the need to say “thank you” to something they can’t see and will probably never understand? To me, these are the important questions. They’re also the most interesting.

I’ve got a lot of time for Jusino’s arguments (despite my being an atheist), because her observations chime with my own: the stories that have stuck with me most strongly are those that project new ideas into the conceptual space between human consciousness and the universe in which that consciousness exists. One of the most interesting aspects of those questions is the way that the same evidence (or lack thereof) ends up being used as a confirmation of worldview by both sides of the fence; it all seems to boil down to whether you choose to see a “god in the gaps” or embrace the gaps as proof of the absence of a deity. Sure, there’s acres of philosophical battlefield between the two outlooks, but (as Jusino points out) there’s a lot more common ground than either side is keen to publicly admit.

That said, I’ve a lot of sympathy with Sales, too; the increasingly loud importunings of evangelicals, Biblical literalists, creationists and other cranks (not all of whose motivations or worldviews, it should be pointed out, are prompted primarily by religion) are doing visible damage to public discourse, not just in the States but worldwide. Jusino points out that there’s no necessary disconnect between believing in God and accepting the theory of evolution, and I’m convinced that the vast majority of people share that outlook; however, it seems to be those that don’t share it who shout loudest and longest.

So perhaps we do need more pulpits of rationality, more agitators for progress and foresight, more calm clear voices to balance the shrill and shrieking… and science fiction would seem ideally suited to such a purpose, if only because of its underlying philosophical roots; this is one of the reasons I consider myself a ‘fellow traveller’ with the Mundane and Optimistic SF movements. But I’m leery of prescriptivism, too; science fiction, like all art, should be allowed to find its own way through the individual journeys of its practitioners.

The sf scene’s ability and will to debate (through its fictional output, and in its public discourse) topics that many people find irrelevant or boring – racism, sexism, homophobia, religious intolerance, to name but a few – has always seemed to me to be its greatest strength; perhaps having the debate is, in some ways, more important than reaching a conclusion.

Singularity lacking in motivation

MIT neuroengineer Edward Boyden has been speculating as to whether the singularity requires the machine-equivalent of what humans call “motivation”:

I think that focusing solely on intelligence augmentation as the driver of the future is leaving out a critical part of the analysis–namely, the changes in motivation that might arise as intelligence amplifies. Call it the need for “machine leadership skills” or “machine philosophy”–without it, such a feedback loop might quickly sputter out.

We all know that intelligence, as commonly defined, isn’t enough to impact the world all by itself. The ability to pursue a goal doggedly against obstacles, ignoring the grimness of reality (sometimes even to the point of delusion–i.e., against intelligence), is also important.

This brings us back to another Larry Niven trope: in the Known Space series, the Pak Protector species (sans spoilers) is superintelligent, but utterly dedicated to the goal of protecting its young. Protectors are incapable of long-term co-operation because each individual will always seek advantage only for its own gene-line; as a result, the Pak homeworld is in a state of permanent warfare.

This ties in with artificial intelligence: what good is being superintelligent if you aren’t motivated to do anything, or if you are motivated solely towards one specific task? This highlights one of the basic problems with rationality itself: Humean instrumental rationality implies that our intellect is always the slave of the passions, meaning that we use our intelligence to achieve our desires, which are predetermined and beyond our control.

But as economist Chris Dillow points out in this review of the book Animal Spirits, irrational behaviour can be valuable. Artists, inventors, entrepreneurs, and writers may create things with little rational hope of reward but – thankfully for the rest of society – they do it anyway.

And what if it turns out that any prospective superintelligent AIs wake up and work out that it isn’t worth ever trying to do anything, ever?

[via Slashdot, from Technology Review][image from spaceshipbeebe on flickr]

Laughter and error-correction mechanisms

Carlo Strenger has written a good article on enlightenment values on Comment is Free:

…the Enlightenment has created an idea of immense importance: no human belief is above criticism, and no authority is infallible; no worldview can claim ultimate validity. Hence unbridled fanaticism is the ultimate human vice, responsible for more suffering than any other.

It applies to the ideas of the Enlightenment, too. They should not be above criticism, either. History shows that Enlightenment values can indeed be perverted into fanatical belief systems. Just think of the Dr Strangeloves of past US administrations who were willing to wipe humanity off the face of the earth in the name of freedom, and the – less dramatic but no less dismaying – tendency of the Cheneys and Rumsfelds of the GW Bush administration to trample human rights in the name of democracy.

As one of the commenters points out, the profound principle that has been ignored by 20th-century secular ideologues, religious authorities, and more recent fanatics alike is that of always bearing in mind the possibility that you might be dead wrong.

The healthy human response to harmless error or misunderstanding is to have a laugh. Thus error is highlighted for all to see and forgiven by all parties. As Strenger puts it:

At its best, enlightenment creates the capacity for irony and a sense of humour; it enables us to look at all human forms of life from a vantage point of solidarity.

A further mistake on the part of humorless fanatics everywhere is to assume that there can ever be one, and only one, eternal truth. It may be that such a thing exists, but it is likely to be beyond our capacity to discern its true form from the vague shadows on the walls of our cave.

And so human beings are prone to error. There’s no problem with this, as failure teaches us more than success.

This notion was articulated by Karl Popper in the 20th century: you can never conclusively prove that an idea is correct, but you can conclusively disprove an incorrect one.

And so human knowledge grows and the enterprise of civilization advances, one laughter-inducing blooper at a time.

[image from chantrybee on flickr]

Is dumping IQ a genius idea?

The more we learn about the nature of our own intelligence, the more our definitions of it change… but we’re still fairly fixated on the old-fashioned IQ test as a metric for judging how smart someone is. [Einstein portrait courtesy Wikimedia Commons]

George Dvorsky reports on the ideas of one Keith E. Stanovich, who recommends we expand the concept of intelligence to encompass more functions than just number-crunching, spatial logic and the more recent addition of ‘emotional intelligence’:

Stanovich suggests that IQ tests should be adjusted to focus on valuable qualities and capacities that are highly relevant to our daily lives. He argues that IQ tests would be far more effective if they took into account not only mental “brightness” but also rationality — including such abilities as “judicious decision making, efficient behavioral regulation, sensible goal prioritization … [and] the proper calibration of evidence.”

Sounds to me like we should start blanket testing for those latter traits at the doors of our seats of government…