Nerd rapture, redux: Annalee Newitz on why the Singularity ain’t gonna save us

Well, this should infuriate the usual suspects (and provoke more measured and considered responses from a few others). io9 ed-in-chief Annalee Newitz steps up to the plate to lay the smackdown on the Singularity as glorious transcendent happily-ever-after eschaton:

Though it’s easy to parody the poor guy who talked about potato chips after the Singularity, his faith is emblematic of Singulatarian beliefs. Many scientifically-minded people believe the Singularity is a time in the future when human civilization will be completely transformed by technologies, specifically A.I. and machines that can control matter at an atomic level (for a full definition of what I mean by the Singularity, read my backgrounder on it). The problem with this idea is that it’s a completely unrealistic view of how technology changes everyday life.

Case in point: Penicillin. Discovered because of advances in biology, and refined through advances in biotechnology, this drug cured many diseases that had been killing people for centuries. It was in every sense of the term a Singularity-level technology. And yet in the long term, it wound up leaving us just as vulnerable to disease. Bacteria mutated, creating nastier infections than we’ve ever seen before. Now we’re turning to pro-biotics rather than anti-biotics; we’re investigating gene therapies to surmount the troubles we’ve created by massively deploying penicillin and its derivatives.

That is how Singularity-level technologies work in real life. They solve dire problems, sure. They save lives. But they also create problems we’d never imagined – problems that might have been inconceivable before that Singularity tech was invented.

What I’m saying is that the potato chip won’t taste better after the Singularity because the future isn’t the present on steroids. The future is a mutated bacteria that you never saw coming.

Newitz’s point here, as I understand it, isn’t that technological leaps won’t occur; it’s that those leaps will come with the same sorts of baggage and side-effects that every other technological leap in history has carried with it. The more serious transhumanist commentators will doubtless make the point that they’ve been trying to curb this blue-sky tendency (and kudos to them for doing so), but they’re struggling against a very old human habit – namely the projection of utopian longing onto a future that’s assumed to be transformed by some more-than-human agency.

The more traditional agency of choice has been the local version of the godhead, but technology has usurped its place in the post-theistic classes of the developed world by glomming on to the same psychological yearnings… which is why the Ken MacLeod-coined “Rapture of the Nerds” dig is well-earned in many cases. The more blindly optimistic someone is about “the Singularity” solving all human problems in a blinding flash of transcendence, the less critical thought they tend to have given to what they’re talking about*; faith isn’t necessarily blind, but it has a definite tendency toward myopia, and theists hold no monopoly on that.

Newitz closes out with the following:

All I’m saying is that if you’re looking for a narrative that explains the future, consider this: Does the narrative promise you things that sound like religion? A world where today’s problems are fixed, but no new problems have arisen? A world where human history is irrelevant? If yes, then you’re in the fog of Singularity thinking.

But if that narrative deals with consequences, complications, and many possible outcomes, then you’re getting closer to something like a potential truth. It may not be as tasty as potato chips, but it’s what we’ve got. Might as well get ready for the mutation to begin.

Amen, sister. 🙂

[ * I fully include myself in this castigation; when I started writing for Futurismic, I was a naive and uncritical regurgitator of received wisdoms, though I like to think I’ve moved on somewhat since then. ]

Phoenix Pick nominations, er, picked

Thanks very much to those of you who voted; we had a tie for second place, so to keep things fair and impartial from the editorial side I flipped a coin to decide between them. The Futurismic nominations for the Phoenix Pick Award are:

Best of luck to Sandra and Silvia! I’ll keep you updated as news arrives.

To obey Asimov’s First Law effectively, we must first break it

In the labs of the University of Ljubljana, Slovenia, researchers are forcing machines to inflict discomfort on humans. But it’s all in a good cause, you see – in order to ensure that robots don’t harm humans by accident, you have to assess what level of harm is unacceptable.

Borut Povše […] has persuaded six male colleagues to let a powerful industrial robot repeatedly strike them on the arm, to assess human-robot pain thresholds.

It’s not because he thinks the first law of robotics is too constraining to be of any practical use, but rather to help future robots adhere to the rule. “Even robots designed to Asimov’s laws can collide with people. We are trying to make sure that when they do, the collision is not too powerful,” Povše says. “We are taking the first steps to defining the limits of the speed and acceleration of robots, and the ideal size and shape of the tools they use, so they can safely interact with humans.”

Povše and his colleagues borrowed a small production-line robot made by Japanese technology firm Epson and normally used for assembling systems such as coffee vending machines. They programmed the robot arm to move towards a point in mid-air already occupied by a volunteer’s outstretched forearm, so the robot would push the human out of the way. Each volunteer was struck 18 times at different impact energies, with the robot arm fitted with one of two tools – one blunt and round, and one sharper.

[…]

The team will continue their tests using an artificial human arm to model the physical effects of far more severe collisions. Ultimately, the idea is to cap the speed at which a robot moves when it senses a nearby human, to avoid hurting them.

I can sympathise with what they’re trying to achieve here, but it strikes me (arf!) as a rather bizarre methodology. If I were more cynical than I am*, I might even suggest that this is something of a non-story dolled up to attract geek-demographic clickthrough…

… in which case, I guess it succeeded. Fie, but complicity weighs heavy upon me this day, my liege!

[ * Lucky I’m not cynical, eh? Eh? ]

Citizen Denton: New Yorker profiles Gawker founder

Offered without comment, and via sources too numerous to link, is this profile of Gawker Media blog-mogul Nick Denton at The New Yorker. It’s simply a fascinating character study in its own right, though you could also read it as an insight into the sort of attitudes and drives you need to make a blog network a paying proposition in the flux-plagued churn of The New Media.

Through Gawker, Denton wages war on self-regard—or presumed self-regard, as his cast of mind is both abstract and deeply tribal, inclining him to sort nearly all people into one or another category that could be judged full of itself. There is a well-travelled image of Denton on the Web, in which he is wearing a tuxedo and tilting a wineglass to his lips. The image bothers him, because it suggests a level of comfort and formality in his presentation that doesn’t accord with his self-image. Denton is tall and rangy, and has a famously large head that sits precariously on a thin neck and narrow shoulders, leaving the impression of an evolved brain that is perhaps a little too conscious of its pedestrian context. He looks perpetually unshaven, with gray stubble complementing his close-cropped, receding hair, which he teases casually forward. He is someone who likes and knows how to have fun—“Nick has a fairly strong hedonic streak,” his friend Matt Wells, of the BBC, says—but who doesn’t wish to be seen enjoying himself overly. “Hypocrisy is the only modern sin,” he likes to say.

Intriguing, and full of storyable ideas and character traits. Go read.