Paul Raven @ 12-03-2013

I do keep saying that futurism isn’t about making predictions, don’t I? Well, that’s because I really believe it. Prediction — in the sense of declaring with great certainty that [x] will come to happen — is a waste of time, because you have no way of accurately determining whether or not the prediction will be validated until the moment at which it is validated (or not). Stick to gambling on horses or stock prices, if gambling’s your thing.

I’m increasingly starting to think of futurism — or at least the sort of futurism I’m interested in doing — as being first and foremost about looking at consequences. This is an extension of the standard technological forecasting methodology, which tends to draw a temporal line through recent, current and projected technological developments in order to conclude that — at some point, however loosely defined — there will be a marketable technology that achieves a (seemingly) desirable goal.

Thing is, the desirable goal isn’t the end of the story. On the contrary, it’s only the beginning.

Example: driverless cars! Think how wonderful the driverless car revolution will be: you’ll be able to read a book or eat your breakfast during your commute! No more traffic jams! It’ll totally revolutionise personal transportation!

See the problem? This sort of thinking makes one exciting extrapolation against a freeze-framed status quo, and then extols the revolutionary change thus achieved. And much as there are days when I wish with all my heart that the world only changed in one measurable and discrete way at a time, it just ain’t so.

I can’t take credit for this particular insight, at least not independently; the redoubtable (and, even by my standards, prolix) Dale Carrico did a great job of shredding driverless car boosterism from his gadfly pulpit in the draughty towers of the White World Future Society. The critical point is this: making cars driverless doesn’t actually solve any of the biggest pressures on the private vehicle sector at all; it just ameliorates (or promises to ameliorate) some of the more unpleasant social side-effects attendant on the inescapable necessity of using what was once extolled as a technology that would improve lives by reducing journey times. (Oh, the irony.)

Making your car driverless doesn’t remove or reduce your need to be sat in the damned thing for hours twice a day; it won’t make your tanks of gas any cheaper or less environmentally damaging; it won’t roll back decades of suburban sprawl and expensive freeway infrastructure; and it’ll be a long time before the technology is cheap enough to make an impact on ordinary people, ie. those who would benefit most from reduced costs and more free time. By the time they’re widespread (if they ever are), the steady increase in the number of vehicles on the road will have countered any significant change in traffic loading; furthermore, those changes will be held back by the necessity of sharing the road system with manual ‘legacy’ vehicles.

The driverless car is not a revolution in personal transportation. It is merely a reinvention of the wheel, an iterative development — and a way of selling more new cars. Driverless cars may well change the world — but not for you, or at least not for your benefit.

This is what I mean about consequences; this is where futurism needs to make a point of bringing people — real, ordinary people — into the frame where the Brand New Shiny is being considered.

If you go and look at Carrico’s burner linked above, you’ll see a comment from Yours Truly where I did exactly that — shifting the predicted “disruption”* away from the average (and increasingly mythical) consumer and relocating it in those realms where big budgets and slim margins make the cost of investing early look tempting. Driverless cars will only be available as commercial products to the super-rich, at least at first; driverless technology, however, will fit just as well into the trucks of the long-distance haulage industry, who have a whole laundry list of reasons to jump all over it at the soonest possibility: as fuel costs continue to rise, the prospect of a fleet of truck drivers who a) only require a one-off upfront payment at hiring time [ie. installation outlay cost], b) don’t need sleep, biobreaks or union representation and c) can drive around the clock with no drop in alertness is going to give haulage companies the biggest boner the poor bastards have had in years.

And hey, would you look at this?

Via Fast Company; OK, so these road trains still require one human driver in the lead cab, but I’ll bet my shoes and socks that’s more to do with allaying legislative (not to mention public) fears about the technology failing than a genuine necessity.

Ella Saitta once said to me that “the internet eviscerates everything it comes into contact with, and then turns it into something more like the internet”. The internet is all about cutting the need for human activity out of any commercial transaction, and about minimizing the length of supply chains.

If you think driverless technology is going to make your life better in the near future, you’re either a haulage company owner, or not paying attention.

* — The mutating semantics of the word “disruption” in the context of the tech-start-up scene is more than a little worrying to someone who spends a lot of time thinking about how language gets used; disruption is increasingly seen as a positive, a desirable thing, an opportunity to make a profit by eviscerating an existing industry that can’t compete with your new way of doing things. Which not only ties in with Ella’s observation, but also allows an insight into how some tech CEOs think: that collapsing a market is acceptable if you can then seize it wholesale.

Sounds a bit like US foreign policy during the Noughties, no?

Science fiction and science, part II: smashing the crystal ball

Paul Raven @ 11-02-2013

So, last week saw me take the train down to London in order to give a presentation on science fiction narratives as strategic planning tools to the Strategic Special Interest Group of the British Academy of Management.

(That’s neither a topic nor an audience I’d have ever expected to address publicly, had you asked me eighteen months ago.)

It was an interesting day out; it’s always good to meet people from a sector of the world where you’ve never really trodden, and to find out how they look at things. It’s also nice to be able to talk to them on topics of great personal interest, and to exchange ideas. I think it went fairly well; some of the attendees had very complimentary things to say about my presentation, and given how nervous I was about giving it, I’m going to count that as a net victory.

Not everyone was satisfied, however. Also on the roster of speakers was veteran UK fan and fiction writer Geoff Nelder, who explained how he came to write his story “Auditory Crescendo”, a tech extrapolation piece in the classic sf mode based upon his own experiences with his hearing aids. His recounting of the day’s events takes me to task for the heinous sin of claiming science fiction cannot predict the future, though he has since suggested I may want to respond to his criticisms and clarify my standpoint.

And indeed I do – not only in response to his own criticisms, which are perfectly reasonable, albeit petulantly framed (I must have “thought it would be cool” to discredit sf’s predictive mojo, apparently, rather than, I dunno, actually getting up there and telling people what I sincerely believe), but because this is an issue that I increasingly feel lies at the heart of the imaginative/qualitative approach to foresight and futurism, and I think that lancing this particular boil (or at least stabbing fretfully at the bubo with a safety pin) might be a beneficial public exercise. As always, brickbats and other projectiles from the peanut gallery are very much encouraged, but (also as always), I’d ask you to please play nice.


To be fair, part of Mr Nelder’s confusion may be the result of me trying to pack a very large argument into a comparatively small space in front of an audience to whom it was merely a qualifying sidenote to the main event. Mr Nelder later quotes my assertion that science fiction narratives can be seen as sandboxes, as dev environments for ideas, and I’m glad he can see that value; that was the core point I wanted to make, after all. I remain politely baffled, however, that he and others are unable to see how easily that value eclipses the false promises of prolepsis, so I’m going to have a stab at expanding my position here.

The thrust of my argument was not that science fiction never appears to make predictions, but that a) science fiction’s ability to make predictions is vastly overestimated by its practitioners, boosters and fans, that b) sf’s predictions look a lot less like predictions when one examines the real-world roll-out and compares it to the supposed fictional blueprint, and that c) predictions are effectively useless, especially in the context of a strategic planning conference, because they can only be verified by the emergence of the thing they predict, by which time their supposed prolepsis is a moot point.

To unpack that a little, let’s take Mr Nelder’s position – that science fiction can indeed predict technologies and/or phenomena which have yet to exist – as a given, and ask a simple question by way of response: “so what?”

It is certainly possible to go through a list of things which appeared in the pages of sf mags or books before appearing in reality; depending on your criteria, I dare say you could amass quite a number of them, though that also applies to the collection of counterexamples. Semantically speaking, this is a sort of prediction, which Oxford Dictionaries define as “say[ing] or estimat[ing] that (a specified thing) will happen in the future or will be a consequence of something”.

The point I was trying to make during my presentation, however, is that these predictions are in no way reliable. One could argue the numbers endlessly depending on the criteria used, but I feel totally safe in saying that sf has made plenty of failed predictions alongside its successes, and that – much like any extended exercise in the statistics of chance – it probably averages out to a 50-50 right-wrong split over a legitimate sample of a size worth considering. But even assuming a more generous split in favour of the proleptic, the more serious problem still pertains: namely that the success of a prediction can only be determined at the moment when its utility as a prediction has expired.

Let’s unpack another level and look at different classes of prediction, of which I would suggest there are basically two. The first is the banal prediction, wherein I make a claim which, while theoretically capable of being refuted by a statistically unlikely turn of events, is already considered sufficiently certain that predicting it is pointless. I can predict the sun will come up tomorrow morning, but I’d be an idiot to expect a cookie and a glass of milk for being proved right, and no one’s going to make their fortune off the back of my soothsaying. (If you want to send cookies anyway, though, be my guest. I like cookies.)

Science fiction has made many banal predictions, many of which have indeed come to pass. The value science fiction adds to general discourse by making such predictions – if any – is to be found in its exploration of their potential consequences. To use an example, it’s pretty facile to say “hey, if trends in mortality and healthcare continue, there’s gonna be a lot more people on the planet!”, but there’s something far more useful in saying “hey, if trends in mortality and healthcare continue, and there’s a lot more people on the planet, what might we end up eating?”

The second class of prediction is the prediction of potential consequence: the prediction that, if proven right, could radically transform the fortunes and fates of one or many people. By definition, these predictions are not easily made; if they were easily made, they would be of no consequence. They are, essentially, guesses – educated and/or informed to a greater or lesser degree, perhaps, but still guesses, imaginings, not pages from a Delorean’d sports almanac. Sure, some of them end up being validated by the events that follow their making. Some of them don’t. Again, we could argue the toss on the numbers either side of that split until the heat-death of the universe, and it would be a sideshow irrelevancy for one very important reason: no one knows in advance whether or not a prediction of potential consequence will come true or not. Validation can only occur at the moment when the prediction ceases to possess any utility beyond being a conversation point.

Or, to put it another way: science fiction is about as good at making informed predictions about the future as any card-sharp. You can argue that sf makes predictions all the time, but unless you’ve got a pretty good rubric for working out a) which predictions are predictions of potential consequence, and b) which of those predictions of consequence will come true, then these “predictions” are worthless to anyone other than a gambler (or a hedge-fund investor, which is essentially the same animal in a far more expensive and tasteful suit).

Science fiction’s supposed predictive capabilities are absolutely useless to anyone subject to the normal causal structure of the universe, which is, um, everyone. OK, you can go through the sf canon and pick out prediction after uncanny prediction; people have made a very successful industry out of doing exactly the same thing with the prophecies of Nostradamus. Even a stopped clock tells the right time twice a day, especially if you choose the right moment to draw everyone’s attention to it.

But again, let’s concede Mr Nelder’s point, and reiterate my question: science fiction does sometimes predict the future. So what? What use is that knowledge to anyone other than a gambler? Even the gambler would shrug it off, I suspect; if science fiction had any sort of statistical history of making better predictions about the future than any other domain of human endeavour, Wall Street and the Square Mile would have long since quanted the crap out of it. Science fiction may predict the future, but its predictions are functionally useless. They express possibilities, and nothing more.

My second point is one that I dealt with during my presentation, namely that most of what we’re told were sf’s most successful predictions turn out to be anything but. I’ll concede that this was a slightly straw-mannish argument on my part, albeit one furnished with endless regiments of ready-made straw soldiery practically begging to be wrestled to the ground, but the point I was making was meant to tie back into my grand theme, which was the inescapable subjectivity of narrative. Mr Nelder points out that Arthur C Clarke didn’t invent the geostationary satellite out of thin air, but did so in the context of his day-job as a scientist, and by building on the work of other researchers before him; this is demonstrably true. But my point as made stands very clearly: a quick google of the relevant search terms provides countless articles, some from reputable establishments or organs, (re)making the (false) claim that ACC “invented” the geostationary satellite. If anything, Mr Nelder’s revealing of the true source of the idea actually serves to support my point, not knock it back; the geostationary satellite is demonstrably something that is widely and repeatedly claimed to have “been invented” or “predicted” by science fiction, when it very clearly wasn’t.

And as such I maintain it was a suitable example, because the point I was making was that the core of a “prediction” may end up manifesting in a context which substantially changes its function, meaning or import. Clarke’s basic conception of geostationary satellites was sound, and did indeed inform the development of satellite telecomms, but he conceived them as manned space stations; writing in 1945, Clarke assumed, as many of his contemporaries would have done, that space travel would soon be as trivial and affordable as air travel. As such, the “prediction” bears little relation to its realization beyond the basic conceptual level, and the realization of the idea was only made possible by adjusting it considerably to fit the real-world context in which it was eventually to be deployed.

Interestingly, one of Mr Nelder’s counterexamples also does a good job of undermining his position further, namely the “prediction” of robots in Fritz Lang’s Metropolis. For a start, Lang’s gorgeous and groundbreaking movie was not the original text to coin the term; that honour falls to Karel Čapek’s play R.U.R. Furthermore, the programmable worker-automaton is a trope far, far older than either, and can be found in the mythologies of many earlier cultures. (The powerful have always dreamed of a working class who would never complain about work or slope off for a lunch-break, after all.) Can it still be a “prediction” if you’ve actually just updated a very old idea to fit your contemporary sociopolitical context? Is it still a prediction if your prediction is quite obviously and openly a metaphor for a social or political change mediated by technology, in this case the dehumanisation of labour?

(Although, in a way, you could say that Čapek and Lang got a lot closer to true prediction with R.U.R. and Metropolis than many other supposed sf “predictions”; their robots were a metaphor for the alienation and exploitation of the working class, and if you look at the panicked discussions around the economics of manufacture and automation in the news at the moment, you can see that they successfully went far beyond the simple claim that “one day machines will do all the work for us” by exploring the impact and implications of such a change on human society; it is the consequences of that change that they explore, not its likelihood. As I said in my presentation, an inventor or engineer is interested in what a technology does and how it does it; an artist is interested in what it means. It is the exploration of meaning and human impact – so amply demonstrated in Mr Nelder’s own story presented on the day, in fact – that science fiction does well, perhaps even uniquely well in certain domains. The prediction stuff? It’s a crap-shoot, and not even something unique to sf; any two-bit tech-pundit with their own blog can do it, and it’s no more or less effective.

And as I also said in my presentation (which may well be the bit that irked Mr Nelder so badly), and I quote verbatim: “anyone who claims they can reliably predict the future is a huckster with something to sell you, even if their product is only themselves”. I illustrated it with the following image.

The immortal Kurzweil

I stand by that statement absolutely.

So, there it is: if you really want to argue that sf can predict the future, I’ll concede your point, but I’d counterargue that the more time you spend stamping your foot and saying that “sf can so predict the future, just lookee here at these examples”, the more time you spend making sf look like a carney-booth thrillshow with massively overblown notions of its own purpose and utility. If we want people to take sf seriously for the useful things that it can demonstrably do – the qualitative and subjective exploration of possibilities and consequences, for instance – then we need to stop rattling on about the power of prediction as if it were something that could be harnessed in any rigorous and useful way whatsoever.

Which is why, when given the chance to talk to business strategists about what use narrative might be in their work, I started with the most important example of what use it isn’t, because I’m tired of being lumped in with shiny-suited consultants and SilVal Singularitarian woo-pedlars, the foremost and loudest proponents of the sf-as-prophecy meme.

Someone had to shoot the elephant in the room, and I fully intend to keep firing until the bloody thing dies.


My thanks to the British Academy of Management for having me along and giving me a little soapbox time, to Dr. Gary Graham for organising the whole shindig, and to all the other participants, Mr Nelder not least among them; it’s by having my ideas challenged that I get the chance to improve them.

Out of Destruction, Transformation?

Brenda Cooper @ 11-01-2012

Most of my recent columns have been about change, from climate change to twitter. Well, this is a start-of-the-year post, and it seems appropriate to take on change in a big way as the year changes. Continue reading “Out of Destruction, Transformation?”

The future of Futurismic

Paul Raven @ 16-08-2011

I’ve been thinking about the future.

Time forms a frame for our narratives about ourselves, a scale for organising coherence out of a formless flow. Thinking in terms of months, years, decades is a convenience that I’ve come to suspect actually keeps us from understanding the true causality of things until we get a significant distance from them and don the Magic AR Glasses of Hindsight +2. That observation isn’t hugely germane to this post, I suppose, but it acts as a qualifier for the following statement:

This has been an eventful year so far, on both personal and global levels, and shows little sign of becoming less so.

You don’t need reminding of the global stuff, I’m sure, but the personal stuff has some bearing on the running of this here website.

First things first, though: Futurismic will continue. It’s too much a part of my life and thinking process to give up easily, for one thing, and furthermore I want to keep running work by my columnists. I even intend to reboot it as a fiction venue once money and time allow.

Money and time, of course, are always an issue. Money has been tight for a while, hence the fiction closedown at the start of this year; this has a lot to do with me having exchanged a steady income for the time to do the work I wanted to do (much of which was writing at Futurismic, ironically enough). But I’m now rapidly approaching a phase where the opposite situation may pertain. Some of you may already be aware that I’ve been accepted onto a Masters degree in Creative Writing at Middlesex University starting this autumn, which I’m very chuffed about indeed. But if I’m going to do it, I’ve got to do it right first time and commit myself to it, so I’m going to have to shift my writing priorities strongly toward fiction in the coming year.

Furthermore, I’m in the process of hunting down a ‘proper’ part-time job to support me financially during my studies, too; the erratic income of my freelance work is not conducive to the state of not-worrying-about-where-the-next-meal-is-coming-from that I find encourages me to write good material. Depending on what sort of work I get, there may be more or less time available to me for noodling about the future right here, though I have to assume the most likely scenario will include less time.

But like I said, I can’t just give this stuff up; not only is it a source of great intellectual pleasure, but current events suggest that we need to be thinking even more clearly about the future than ever before – not predicting, but probing, groping ahead through the temporal fog, trying to find a safe way through the existential minefield. How much I can contribute that will be of genuine use to the global discourse is for others to determine, but I feel the need to contribute nonetheless.

All of which is a long way of saying that I’m going to have to start approaching my writing here in a more efficient and effective way. It’s time to stop posting every day for the sake of posting, and to take the time to work on fewer, better articles (as well as trying to place said articles at other venues); to only post when there’s something that needs to be discussed, and to discuss it properly.

It’s time to pay less attention to the Shiny Gimcrack Future and more attention to the Grim Meathook Future; the future will be full of gadgets and weird stuff, for certain, but they’re a sideshow or sub-plot to the big stuff: politics and economics; the contrapuntal narratives of science and technology; social shifts, network culture and the cultural Zeitgeist. All stuff I already talk about, sure, but I think I need to do more than point at interesting stuff and say “hey, look – interesting stuff!” if I’m to actually add any value to the discourse. The internet’s full of folk flapping their lips, and I worry that I’ve spent too long talking loud but saying little; focussing on quality rather than frequency will, I hope, go some way to amending that.

Oddly enough, this is a conscious counter-response to a deep instinctive flinching from the future; as both a writer of stories and someone with a more general curiosity about the path ahead, it feels like it’s getting harder and harder to look more than a few years ahead with even the slightest degree of clarity, let alone hope, and the temptation is to retreat into a wilful ignorance and refusal to think about anything other than myself.

And everything’s interlinked: the broken economies of the former First World winding down to be overtaken by the BRICs and others; food shortages and price hikes; the mutation and metastasis of the post-national corporation and the continuing slump of the nation-state as unit of power in realpolitik, complicated by heel-dragging refusals to acknowledge the increasingly global nature of most of our civilisational problems; even the youth of America, once that most optimistic of nations, are now resigned to their future as the inheritors of the comedown and cost of imperial hubris… and if you managed to read the riots here in the UK, in Greece and across the Arab world as anything other than a seismic rumble of big turbulence coming down the pipe, then you’re either possessed of an enviable yet largely unfounded optimism, or completely naive.

And the more I think about it, the more I think utopianist future-hucksters like Ray Kurzweil are part of the problem; the more I feel that Singularitarianism (much like some other emerging cults of the atemporal and altermodern End Times) is a refuge for privileged intellectuals who can’t face the future without believing they get some sort of personal get-out-of-Apocalypse-free card; the more I think that science fiction and other speculative forms of communication (design fiction, essays, mixed media, whatever) have great potential to help us understand where we’re going, but that the potential is wasted by that same desperate search for a personal escape hatch with the phrase “I’m all right, Jack” stencilled on it by some notoriously anonymous marginal celebrity street artist…

And so it goes. Futurismic has always been about peering ahead in various forms, but it’s time to look in smarter ways, and think more carefully about what we see.

I hope you’ll stick around for the journey. Some of it’s gonna be rough, some of it’s gonna be glorious… but it’ll all be made more bearable by having intelligent company along the way. Talking to you people for all these years has taught me a great deal, but I reckon you’ve probably got more to teach me yet.

Thanks for reading.

Wicked Problems and ends to limitless [x]

Paul Raven @ 02-08-2011

That Steelweaver post on Reality As A Failed State I mentioned a few days back really did the rounds. So I’m going to link to Karl Schroeder at Charlie Stross’s blog once again, and without any sense of shame – he’s been quiet for ages, but he’s spooling out a year’s worth of good shizzle over the space of a few weeks at the moment, and I think he’s a voice worth paying attention to.

Here he is talking about the “metaproblems” that Steelweaver mentioned, which have not only been known and named (as “wicked problems”) for some time, but are already a subject of intense study… which is a good thing, too.

It is not the case that wicked problems are simply problems that have been incompletely analyzed; there really is no ‘right’ formulation and no ‘right’ answer. These are problems that cannot be engineered. The anger of many of my acquaintances seems to stem from the erroneous perception that they could be solved this way, if only those damned republicans/democrats/liberals/conservatives/tree-huggers/industrialists/true believers/denialists didn’t keep muddying the waters. Because many people aren’t aware that there are wicked problems, they experience the failure to solve major complex world issues as the failure of some particular group to understand ‘the real situation.’ But they’re not going to do that, and granted that they won’t, the solutions you work on have to incorporate their points-of-view as well as your own, or they’re non-starters. This, of course, is mind-bogglingly difficult.

Our most important problems are wicked problems. Luckily, social scientists have been studying this sort of mess since, well, since 1970. Techniques exist that will allow moderately-sized groups with widely divergent agendas and points of view to work together to solve highly complex problems. (The U.S. Congress apparently doesn’t use them.) Structured Dialogic Design is one such methodology. Scaling SDD sessions to groups larger than 50 to 70 people at a time has proven difficult–but the fact that it and similar methods exist at all should give us hope.

Here are a few wicked problems I think are exemplary. I touched on one of them yesterday, in fact, namely the roboticisation curve in manufacturing; far from liberating the toiling masses in some utopian fusion of Marx and capitalism, it might well increase the polarisation and widen the gap between the poor masses and the super-rich elites, a process that Global Dashboard‘s Alex Evans refers to as “jobless growth”:

In some developed economies (and especially the US), research suggests that job opportunities are increasingly being polarised into high and low skill jobs, while middle class jobs are disappearing due to “automation of routine work and, to a smaller extent, the international integration of labour markets through trade and, more recently, offshoring”. Meanwhile, data also show that while more women are entering the global labour force, the ‘gender gap’ on income and quality of work is widening between women and men. These trends raise a number of critical uncertainties for employment and development to 2020.

If automation of routine work genuinely is a more significant factor in developed economy job polarization than international trade or offshoring, then the implication is that developing economies may increasingly also fall prey to job polarisation as new technologies emerge and become competitive with human labour between now and 2020. Chinese manufacturing and Indian service industry jobs could increasingly be replaced by technology, for example, and find their existing rates of inequality exacerbated still further.

And here’s a serendipitous look at the economics of a world where replicators and 3d printing become cheap enough to be ubiquitous [via SlashDot]:

Prices for 3D printers are tumbling. Even simple systems often cost tens of thousands of dollars a decade ago. Now, 3D printers for hobbyists can be had for a fraction of that: MakerBot Industries offers a fully assembled Thing-O-Matic printer for just $2,500, and kits for building RepRap printers have sold for $500. The devices could be on track for mass-production as home appliances within just a few years.

So, will we all soon be living like Arabian Nights sultans with a 3D printing genie ready to grant our every wish? Could economies as we know them even survive in such a world, where the theoretically infinite supply of any good should drive its value toward zero?

The precise limitations of replicator technology will determine where scarcity and foundations for value will remain. 3D printers need processed materials as inputs. Those materials and all the labor required to mine, grow, synthesize or process them into existence will still be needed, along with the transportation costs to bring them to the printers. The energy to run a replicator might be another limiting factor, as would be time (would you spend three days replicating a toaster if you could have one delivered to your home in an hour?). Replicators will also need inputs to tell them how to make specific objects, so the programming and design efforts will still have value.

Perhaps the most important limitation on the replicator economy may be competition from good old mass production. Custom-tailored suits may be objectively better than off-the-rack outfits, but people find that the latter are usually the more sensible, affordable purchase. Mass production—especially by factories adopting nimble 3D-printing technologies—can still provide marvelous economies of scale. So even when it is theoretically possible for anyone to fabricate anything, people might still choose to restrict their replicating to certain goods—and to continue making their tea with a store-bought teabag.

The unspoken underpinning of that last paragraph (as hinted by my bolding) is the important bit: the economies of scale of fabbing will see more and more human labour replaced by machines – machines that don’t need holidays, or even sleep; machines that don’t get tired and make a higher percentage of dud iterations as a result; machines that, before too long, will be able to make other machines as required. The attraction of such a system to Big Capital (and small capital, too) is pretty obvious.

And all in the name of chasing perpetual infinite growth, a central assumption of most modern economic thought (or at least the stuff I’ve encountered so far) that relies on a lot of other assumptions… like, say, the assumption that we’ll always be able to either produce more energy, or use the amount we have available more efficiently [via MetaFilter]:

It seems clear that we could, in principle, rely on efficiency alone to allow continued economic growth even given a no-growth raw energy future (as is inevitable). The idea is simple. Each year, efficiency improvements allow us to drive further, light more homes, manufacture more goods than the year before—all on a fixed energy income. Fortunately, market forces favor greater efficiency, so that we have enjoyed the fruits of a constant drum-beat toward higher efficiency over time. To the extent that we could continue this trick forever, we could maintain economic growth indefinitely, and all the institutions that are built around it: investment, loans, banks, etc.

But how many times can we pull a rabbit out of the efficiency hat? Barring perpetual motion machines (fantasy) and heat pumps (real; discussed below), we must always settle for an efficiency less than 100%. This puts a bound on how much gain we might expect to accomplish. For instance, if some device starts out at 50% efficiency, there is no way to squeeze more than a factor of two out of its performance.
Given that two-thirds of our energy resource is burned in heat engines, and that these cannot improve much more than a factor of two, more significant gains elsewhere are diminished in value. For instance, replacing the 10% of our energy budget spent on direct heat (e.g., in furnaces and hot water heaters) with heat pumps operating at their maximum theoretical efficiency effectively replaces a 10% expenditure with a 1% expenditure. A factor of ten sounds like a fantastic improvement, but the overall efficiency improvement in society is only 9%. Likewise with light bulb replacement: large gains in a small sector. We should still pursue these efficiency improvements with vigor, but we should not expect this gift to provide a form of unlimited growth.

On balance, the most we might expect to achieve is a factor of two net efficiency increase before theoretical limits and engineering realities clamp down. At the present 1% overall rate, this means we might expect to run out of gain this century. Some might quibble about whether the factor of two is too pessimistic, and might prefer a factor of 3 or even 4 efficiency gain. Such modifications may change the timescale of saturation, but not the ultimate result.
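The arithmetic in that quote is easy to check for yourself. Here's a quick back-of-envelope sketch in Python (the 1%-per-year rate and the factor-of-two ceiling are the quote's figures; the function name is just my own label):

```python
import math

def years_to_saturation(total_gain, annual_rate=0.01):
    """How many years of compound efficiency improvement at
    annual_rate it takes to exhaust a finite total gain."""
    return math.log(total_gain) / math.log(1 + annual_rate)

# A factor-of-two ceiling at 1% per year runs out within the century:
print(round(years_to_saturation(2)))    # 70
# A more generous factor-of-four ceiling merely delays saturation:
print(round(years_to_saturation(4)))    # 139

# The heat-pump example: a sector consuming 10% of the energy budget,
# improved tenfold, cuts society's overall consumption by just 9%.
sector_share, improvement_factor = 0.10, 10
overall_saving = sector_share * (1 - 1 / improvement_factor)
print(round(overall_saving, 2))         # 0.09
```

Note how slowly the logarithm grows: quadrupling the assumed ceiling doesn't even double the timescale, which is exactly the quote's point about changing the date but not the destination.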

So it ain’t just Moore’s Law that could be running into a brick wall real soon. A whole lot of caltrops on the highway to the future, then… and we’re still arguing about how to bolt more governors and feedback loops onto fundamentally broken politicoeconomic systems. Wicked problems, indeed. It’s hard not to feel bleak as we look into the eye of this abyss, but Schroeder suggests there’s a way out:

Here’s my take on things: our biggest challenges are no longer technological. They are issues of communication, coordination, and cooperation. These are, for the most part, well-studied problems that are not wicked. The methodologies that solve them need to be scaled up from the small-group settings where they currently work well, and injected into the DNA of our society–or, at least, built into our default modes of using the internet. They then can be used to tackle the wicked problems.

What we need, in other words, is a Facebook for collaborative decision-making: an app built to compensate for the most egregious cognitive biases and behaviours that derail us when we get together to think in groups. Decision-support, stakeholder analysis, bias filtering, collaborative scratch-pads and, most importantly, mechanisms to extract commitments to action from those that use these tools. I have zero interest in yet another open-source copy of a commercial application, and zero interest in yet another Tetris game for Android. But a Wikipedia’s worth of work on this stuff could transform the world.

Digital direct democracy, in other words, with mechanisms built in to ameliorate the broken bits of our psychology. Oh, sure, you can scoff and say it’ll never work, but even a flimsy-looking boat starts looking like it’s worth a shot when the tired old paddle-steamer starts doing its Titanic impersonation in the middle of the swamp. What Schroeder (and many others) is suggesting is eminently possible; all we lack is the political will to build it.

And it’s increasingly plain that we’re not going to find that will in the bickering halls of the incumbent system; it’s only interested in maintaining its own existence for as long as possible, and damn the consequences.

Which is why we need to turn our backs on that system and build its replacement ourselves.
