Science fiction and science, part II: smashing the crystal ball

Paul Raven @ 11-02-2013

So, last week saw me take the train down to London in order to give a presentation on science fiction narratives as strategic planning tools to the Strategic Special Interest Group of the British Academy of Management.

(That’s neither a topic nor an audience I’d have ever expected to address publicly, had you asked me eighteen months ago.)

It was an interesting day out; it’s always good to meet people from a sector of the world where you’ve never really trodden, and to find out how they look at things. It’s also nice to be able to talk to them on topics of great personal interest, and to exchange ideas. I think it went fairly well; some of the attendees had very complimentary things to say about my presentation, and given how nervous I was about giving it, I’m going to count that as a net victory.

Not everyone was satisfied, however. Also on the roster of speakers was veteran UK fan and fiction writer Geoff Nelder, who explained how he came to write his story “Auditory Crescendo”, a tech extrapolation piece in the classic sf mode based upon his own experiences with his hearing aids. His recounting of the day’s events takes me to task for the heinous sin of claiming science fiction cannot predict the future, though he has since suggested I may want to respond to his criticisms and clarify my standpoint.

And indeed I do – not only in response to his own criticisms, which are perfectly reasonable, albeit petulantly framed (I must have “thought it would be cool” to discredit sf’s predictive mojo, apparently, rather than, I dunno, actually getting up there and telling people what I sincerely believe), but because this is an issue that I increasingly feel lies at the heart of the imaginative/qualitative approach to foresight and futurism, and I think that lancing this particular boil (or at least stabbing fretfully at the bubo with a safety pin) might be a beneficial public exercise. As always, brickbats and other projectiles from the peanut gallery are very much encouraged, but (also as always), I’d ask you to please play nice.

#

To be fair, part of Mr Nelder’s confusion may be the result of me trying to pack a very large argument into a comparatively small space in front of an audience to whom it was merely a qualifying sidenote to the main event. Mr Nelder later quotes my assertion that science fiction narratives can be seen as sandboxes, as dev environments for ideas, and I’m glad he can see that value; that was the core point I wanted to make, after all. I remain politely baffled, however, that he and others are unable to see how easily that value eclipses the false promises of prolepsis, so I’m going to have a stab at expanding my position here.

The thrust of my argument was not that science fiction never appears to make predictions, but that a) science fiction’s ability to make predictions is vastly overestimated by its practitioners, boosters and fans, that b) sf’s predictions look a lot less like predictions when one examines the real-world roll-out and compares it to the supposed fictional blueprint, and that c) predictions are effectively useless, especially in the context of a strategic planning conference, because they can only be verified by the emergence of the thing they predict, by which time their supposed prolepsis is a moot point.

To unpack that a little, let’s take Mr Nelder’s position – that science fiction can indeed predict technologies and/or phenomena which have yet to exist – as a given, and ask a simple question by way of response: “so what?”

It is certainly possible to go through a list of things which appeared in the pages of sf mags or books before appearing in reality; depending on your criteria, I dare say you could amass quite a number of them, though that also applies to the collection of counterexamples. Semantically speaking, this is a sort of prediction, which Oxford Dictionaries define as “say[ing] or estimat[ing] that (a specified thing) will happen in the future or will be a consequence of something”.

The point I was trying to make during my presentation, however, is that these predictions are in no way reliable. One could argue the numbers endlessly depending on the criteria used, but I feel totally safe in saying that sf has made plenty of failed predictions alongside its successes, and that – much like any extended exercise in the statistics of chance – it probably averages out to a 50-50 right-wrong split over any sample large enough to be worth considering. But even assuming a more generous split in favour of the proleptic, the more serious problem still pertains: namely that the success of a prediction can only be determined at the moment when its utility as a prediction has expired.

Let’s unpack another level and look at different classes of prediction, of which I would suggest there are basically two. The first is the banal prediction, wherein I make a claim which, while theoretically capable of being refuted by a statistically unlikely turn of events, is already considered sufficiently certain that predicting it is pointless. I can predict the sun will come up tomorrow morning, but I’d be an idiot to expect a cookie and a glass of milk for being proved right, and no one’s going to make their fortune off the back of my soothsaying. (If you want to send cookies anyway, though, be my guest. I like cookies.)

Science fiction has made many banal predictions, many of which have indeed come to pass. The value science fiction adds to general discourse by making such predictions – if any – is to be found in its exploration of their potential consequences. To use an example, it’s pretty facile to say “hey, if trends in mortality and healthcare continue, there’s gonna be a lot more people on the planet!”, but there’s something far more useful in saying “hey, if trends in mortality and healthcare continue, and there’s a lot more people on the planet, what might we end up eating?”

The second class of prediction is the prediction of potential consequence: the prediction that, if proven right, could radically transform the fortunes and fates of one or many people. By definition, these predictions are not easily made; if they were easily made, they would be of no consequence. They are, essentially, guesses – educated and/or informed to a greater or lesser degree, perhaps, but still guesses, imaginings, not pages from a DeLorean’d sports almanac. Sure, some of them end up being validated by the events that follow their making. Some of them don’t. Again, we could argue the toss on the numbers either side of that split until the heat-death of the universe, and it would be a sideshow irrelevancy for one very important reason: no one knows in advance whether or not a prediction of potential consequence will come true or not. Validation can only occur at the moment when the prediction ceases to possess any utility beyond being a conversation point.

Or, to put it another way: science fiction is about as good at making informed predictions about the future as any card-sharp. You can argue that sf makes predictions all the time, but unless you’ve got a pretty good rubric for working out a) which predictions are predictions of potential consequence, and b) which of those predictions of consequence will come true, then these “predictions” are worthless to anyone other than a gambler (or a hedge-fund investor, which is essentially the same animal in a far more expensive and tasteful suit).

Science fiction’s supposed predictive capabilities are absolutely useless to anyone subject to the normal causal structure of the universe, which is, um, everyone. OK, you can go through the sf canon and pick out prediction after uncanny prediction; people have made a very successful industry out of doing exactly the same thing with the prophecies of Nostradamus. Even a stopped clock tells the right time twice a day, especially if you choose the right moment to draw everyone’s attention to it.

But again, let’s concede Mr Nelder’s point, and reiterate my question: science fiction does sometimes predict the future. So what? What use is that knowledge to anyone other than a gambler? Even the gambler would shrug it off, I suspect; if science fiction had any sort of statistical history of making better predictions about the future than any other domain of human endeavour, Wall Street and the Square Mile would have long since quanted the crap out of it. Science fiction may predict the future, but its predictions are functionally useless. They express possibilities, and nothing more.

My second point is one that I dealt with during my presentation, namely that most of what we’re told were sf’s most successful predictions turn out to be anything but. I’ll concede that this was a slightly straw-mannish argument on my part, albeit one furnished with endless regiments of ready-made straw soldiery practically begging to be wrestled to the ground, but the point I was making was meant to tie back into my grand theme, which was the inescapable subjectivity of narrative. Mr Nelder points out that Arthur C Clarke didn’t invent the geostationary satellite out of thin air, but did so in the context of his day-job as a scientist, and by building on the work of other researchers before him; this is demonstrably true. But my point as made stands very clearly: a quick google of the relevant search terms provides countless articles, some from reputable establishments or organs, (re)making the (false) claim that ACC “invented” the geostationary satellite. If anything, Mr Nelder’s revealing of the true source of the idea actually serves to support my point, not knock it back; the geostationary satellite is demonstrably something that is widely and repeatedly claimed to have “been invented” or “predicted” by science fiction, when it very clearly wasn’t.

And as such I maintain it was a suitable example, because the point I was making was that the core of a “prediction” may end up manifesting in a context which substantially changes its function, meaning or import. Clarke’s basic conception of geostationary satellites was sound, and did indeed inform the development of satellite telecomms, but he conceived them as manned space stations; writing in 1945, Clarke assumed, as many of his contemporaries would have done, that space travel would soon be as trivial and affordable as air travel. As such, the “prediction” bears little relation to its realization beyond the basic conceptual level, and the realization of the idea was only made possible by adjusting it considerably to fit the real-world context in which it was eventually to be deployed.

Interestingly, one of Mr Nelder’s counterexamples also does a good job of undermining his position further, namely the “prediction” of robots in Fritz Lang’s Metropolis. For a start, Lang’s gorgeous and groundbreaking movie was not the original text to coin the term “robot”; that honour falls to Karel Čapek’s play R.U.R. Furthermore, the programmable worker-automaton is a trope far, far older than either, and can be found in the mythologies of many earlier cultures. (The powerful have always dreamed of a working class who would never complain about work or slope off for a lunch-break, after all.) Can it still be a “prediction” if you’ve actually just updated a very old idea to fit your contemporary sociopolitical context? Is it still a prediction if your prediction is quite obviously and openly a metaphor for a social or political change mediated by technology, in this case the dehumanisation of labour?

(Although, in a way, you could say that Čapek and Lang got a lot closer to true prediction with R.U.R. and Metropolis than many other supposed sf “predictions”; their robots were a metaphor for the alienation and exploitation of the working class, and if you look at the panicked discussions around the economics of manufacture and automation in the news at the moment, you can see that they successfully went far beyond the simple claim that “one day machines will do all the work for us” by exploring the impact and implications of such a change on human society; it is the consequences of that change that they explore, not its likelihood. As I said in my presentation, an inventor or engineer is interested in what a technology does and how it does it; an artist is interested in what it means. It is the exploration of meaning and human impact – so amply demonstrated in Mr Nelder’s own story presented on the day, in fact – that science fiction does well, perhaps even uniquely well in certain domains. The prediction stuff? It’s a crap-shoot, and not even something unique to sf; any two-bit tech-pundit with their own blog can do it, and it’s no more or less effective.)

And as I also said in my presentation (which may well be the bit that irked Mr Nelder so badly), and I quote verbatim: “anyone who claims they can reliably predict the future is a huckster with something to sell you, even if their product is only themselves”. I illustrated it with the following image.

The immortal Kurzweil

I stand by that statement absolutely.

So, there it is: if you really want to argue that sf can predict the future, I’ll concede your point, but I’d counterargue that the more time you spend stamping your foot and saying that “sf can so predict the future, just lookee here at these examples”, the more time you spend making sf look like a carney-booth thrillshow with massively overblown notions of its own purpose and utility. If we want people to take sf seriously for the useful things that it can demonstrably do – the qualitative and subjective exploration of possibilities and consequences, for instance – then we need to stop rattling on about the power of prediction as if it were something that could be harnessed in any rigorous and useful way whatsoever.

Which is why, when given the chance to talk to business strategists about what use narrative might be in their work, I started with the most important example of what use it isn’t, because I’m tired of being lumped in with shiny-suited consultants and SilVal Singularitarian woo-pedlars, the foremost and loudest proponents of the sf-as-prophecy meme.

Someone had to shoot the elephant in the room, and I fully intend to keep firing until the bloody thing dies.

#

My thanks to the British Academy of Management for having me along and giving me a little soapbox time, to Dr. Gary Graham for organising the whole shindig, and to all the other participants, Mr Nelder not least among them; it’s by having my ideas challenged that I get the chance to improve them.


Skyrim and the Quest for Meaning

Jonathan McCalmont @ 07-12-2011
  1. Lithium

I’m old enough to remember when video games were comparatively simple things. For example, I remember the side-scrolling video game adaptation of Robocop (1988). Relatively short, Robocop had you shooting and jumping your way from one side of the world to another. Once you got to the end of one world, you moved to another, and then another… and then the worlds started repeating themselves in slightly different colours. These games were simple to understand: you immediately knew what you were expected to do and what constituted victory. Nearly twenty-five years on, video game technology has advanced to the point where games are beginning to acquire the complex ambiguity of the real world — and with this complexity comes difficulty.


Got 99 metaproblems (but a lack of aspirational futurism ain’t one)

Paul Raven @ 29-07-2011

Good grief, but the RSS mountain really piles up in 24 hours, doesn’t it?

Well, mine does, anyway… which means it’s probably high time I had a spring-clean in there to make it more manageable. As well as maybe, y’know, stopping the habit of adding more feeds to the damned aggregator all the time. There’s too much interesting stuff (or grim stuff, or grimly interesting stuff) going on in the world, y’see; the temptation to stay on top of it all and let it just flow through my head like some sort of Zeitgeist/sewer-outflow hybrid is horribly compelling. I am the gauzy mesh in your perpetual flow of present history, plucking out interesting lumps of… no, actually, let’s stop that metaphor right there.

Anyways, long story short: had a busy few days and have more busyness ahead, so minimal commentary from me today. Instead, an exhortation to go and read stuff written by other folk far smarter than I. We’ll start with the manageably short piece, which is another Karl Schroeder joint at Chateau Stross (or should that be Schloss Stross?) where he talks about the difference between foresight futurism and “predicting the future”, and a new aspirational direction for his near-future science fiction output that is reminiscent of Jetse de Vries’ Optimistic SF manifesto:

… I’m pretty tired of all those, “Dude, where’s my flying car!” digs. There’s always been a certain brand of futurist who’s obsessed with getting it right: with racking up successful predictions like some modern-day Nostradamus. I’m sure you know who I’m talking about; some futurists play the prediction game very well, but in the end it is a game, and closer to charlatanism than it is to science. There’s actually no method for seeing the future, and nobody’s predictions are more reliable than anybody else’s.

You know, I think we do know who he’s talking about…

And while we’re thinking about the future, it’s hard to avoid thinking about problems, for – as a species and a planet – we have rather a lot of them right now. So many, in fact, that you might even say that reality itself is a failed state:

So maybe what we have today are not problems, but meta-problems.

It is very useful to confirm our understanding with others, to meet with fellow humans – preferably face-to-face – strength flows from this.

However, disquiet remains – no pre-catastrophic change of course seems in any way likely. What we might call ‘Fabian’ environmentalism has failed.

Occasionally a scientist will be so overcome with horror that he will make a radical public pronouncement – like the drunken uncle at a wedding, he may well be saying what everyone knows to be true, pulling the skeletons out of the family closet for all to see, but, well, it just doesn’t do to say that sort of thing out loud at a formal function.

This is all a little bit strange.

We understand the problems. We also, pretty much, understand the solutions. But their real-world application is a whole unpickable, integrated clusterfuck.

I believe part of the meta-problem is this: people no longer inhabit a single reality.

Collectively, there is no longer a single cultural arena of dialogue.

And we need to construct one. Go read the rest for the full lowdown. I’d love to be able to name the writer as something other than “Steelweaver”, but as he’s using a Tumblr with no About page or anything*, I am largely unable to do so. If you can fill in that datagap for me, please get in touch or leave a note in the comments.

[ * Note to writers of serious and/or interesting stuff on the intertubes: this is rather frustrating, and Tumblr really isn’t the best platform for this sort of stuff. Basically it’s the post-naivete ironic MySpace, optimised for collecting hipster aphorisms and reposting “art” shots that tend to contain boobs.

Just sayin’. ]


Drone Ethnography

Paul Raven @ 25-07-2011

Adam Rothstein has a knack for naming things of which we’re as yet only fleetingly aware as cultural forces. Here he is guesting at Rhizome with a piece on drone ethnography:

Okay. I thought it was clear, but if you want me to spell it out for you, I will. You are obsessed with drones. We all are. We live in a drone culture, just as we once lived in a car culture. The Northrop-Grumman RQ-4 Global Hawk is your ’55 Chevrolet. You just might not know it yet.

I have thirty-five browser tabs open, and each contains a fragment of the drone-mythos. Each is a glimpse at a situation, a bird’s eye view of the terrain. So many channels, showing me the same thing: near-infinite data collection. With the help of Google, I’m drone-spotting—I’m turning a new critical perspective that I’m calling Drone Ethnography, back on itself.

All of us that use the internet are already practicing Drone Ethnography. Look at the features of drone technology: Unmanned Aerial Vehicles (UAV), Geographic Information Systems (GIS), Surveillance, Sousveillance. Networks of collected information, over land and in the sky. Now consider the “consumer” side of tech: mapping programs, location-aware pocket tech, public-sourced media databases, and the apps and algorithms by which we navigate these tools. We already study the world the way a drone sees it: from above, with a dozen unblinking eyes, recording everything with the cold indecision of algorithmic commands honed over time, affecting nothing—except, perhaps, a single, momentary touch, the momentary awareness and synchronicity of a piece of information discovered at precisely the right time. An arc connecting two points like the kiss from an air-to-surface missile. Our technological capacity for watching, recording, collecting, and archiving has never been wider, and has never been more automated. The way we look at the world—our basic ethnographic approach—is mimicking the technology of the drone.

Go read the whole thing. Go on.


H+ trailer: a post-McLuhanist reading

Paul Raven @ 25-07-2011

So, this has been doing the rounds since its release at SDCC (which – given what I’ve seen of it from blogs, Twitter and elsewhere – is less a convention and more some sort of fundamental rupture of reality that lets a million weird facets of pop culture manifest in the material world for a weekend); my first spot of it was at SF Signal, so they get the hat-tip. It’s the trailer for a forthcoming web-native series called H+.

And here’s the blurb for those of you who can’t or won’t watch videos:

H+: The Digital Series takes viewers on a journey into an apocalyptic future where technology has begun to spiral out of control…a future where 33% of the world’s population has retired its cell phones and laptops in favor of a stunning new device – an implanted computer system called H+.

This tiny tool allows the user’s own mind and nervous system to be connected to the Internet 24 hours a day. But something else is coming… something dark and vicious… and within seconds, billions of people will be dead… opening the door to radical changes in the political and social landscape of the planet — prompting survivors to make sense of what went wrong.

Hmmm. So, what can we take from this? First off, “H+” or human augmentation as a cultural meme is strong enough on the geek fringes that someone thinks it’s a marketable theme for popular drama; this in itself is a very interesting development from the perspective of someone who chronicles and observes the transhumanist movement(s), because it’s a sign that traditionally science fictional or cyberpunkish ideas are being presented as both plausible and imminent*. Meme’s gonna go mainstream, yo.

Secondly, and less surprisingly, the underlying premise appears to be The Hubris Of Technology Will All But Annihilate Our Species, with a sideserving/undercurrent of Moral Panic. Handwringing over the potentially corrosive-to-civilisation properties of social media is common currency (as regular readers will be only too aware already), which means the soil is well-tilled for the seed of Singer’s series; it’s a contemporary twist on the age-old apocalypse riff, and that never gets old. Too early to tell whether the Hairshirt Back-To-The-Earth philosophy is going to be used as solution paradigm, but I’d be willing to put money on it making a significant showing. This is disappointing, but inevitable; as Kyle Munkittrick points out in his brief overview of the new Captain America movie, comics and Hollywood default to the portrayal of human augmentation as either an accident born of scientific hubris or the tainted product of a Frankensteinian corporation:

In what seems like every other superhero origin story, powers are acquired through scientific hubris. Be it the unintended consequences of splitting the atom, tinkering with genetics, or trying to access some heretofore unknown dimension, comic book heroes invariably arise by accident.

[…]

Normally, those who seek superpowers are unworthy because they believe they deserve to be better than others, thus, the experiments go wrong.

Yeah, that’s about right. And the choice of series title is very fortuitous; the avalanche of early responses drawing analogies to Google+ has probably already started on the basis of that trailer alone, which is going to annoy me just as much as Googlephobia does. I’ve been rereading Marshall McLuhan lately (in part so I could write a piece for his 100th birthday at Wired UK), and was struck by how calmly and persistently he insisted that making moral judgements of technologies was futile; indeed, he took the position that by spending less effort on judging our technologies, we might clear the moral fog that exists around our actual lives. In McLuhan’s thought, media are extensions of ourselves into time and space; it seems to me that the biggest problem they cause isn’t a moral degradation of humanity, but the provision of a convenient proxy to blame our human problems on: it woz the intertubes wot dun it.

There is an inevitability to the technological moral panic as popular narrative, though, and that’s underlined by its admirable persistence over time – as TechDirt’s Mike Masnick reminds us, they’re at least as old as Gutenberg’s printing press (and we’re still here, as yet untoppled by our technological revolutions). Masnick also links to a WSJ blog piece that bounces off the research of one Genevieve Bell, director of Intel Corporation’s Interaction and Experience Research, who reiterates the persistence of the technological moral panic over time, and points out that it tends to locate itself in the bodies of women and children:

There was, she says, an initial pushback about electrifying homes in the U.S.: “If you electrify homes you will make women and children vulnerable. Predators will be able to tell if they are home because the light will be on, and you will be able to see them. So electricity is going to make women vulnerable. Oh and children will be visible too and it will be predators, who seem to be lurking everywhere, who will attack.

“There was some wonderful stuff about [railway trains] too in the U.S., that women’s bodies were not designed to go at 50 miles an hour. Our uteruses would fly out of our bodies as they were accelerated to that speed.”

She has a sort of work-in-progress theory to work out which technologies will trigger panic, and which will not.

  • It has to change your relationship to time.
  • It has to change your relationship to space.
  • It has to change your relationship to other people.

And, says Ms. Bell, it has to hit all three, or at least have the potential to hit them.

Interesting stuff, including a riff on comedy as a feedback loop in culture that enables us to control and mitigate the boundaries of what is acceptable with a new technology or medium. But as Bell points out, the march of technological change won’t wait for us to catch up with it; this state of technological angst has persisted for centuries, and will likely persist for as long as we remain a technologised species. Which means the doomsayers (and doomsayer media like H+) ain’t going anywhere… but going on past form, I’m going to assume we’ll find a way to ride it out and roll with the punches.

And just in case you were expecting a more standard blogger response to a television series trailer: yeah, I’ll probably watch H+, at least for long enough to see if it’s a good story well told; it looks like it might well be, regardless of the source of the narrative hook.

What about you?

[ * Which isn’t to say that the plot device in H+ will necessarily be scientifically plausible as it gets presented. Indeed, I rather suspect there’ll be some Unified Quantum Handwave Theory and/or Unobtainium involved… but the portrayal of social media as an internalised technology in the human body within a contemporary fictional milieu? That’s something I’ve not seen anywhere other than text media (books, stories, comics) thus far. ]
