Tag Archives: predictions

Got 99 metaproblems (but a lack of aspirational futurism ain’t one)

Good grief, but the RSS mountain really piles up in 24 hours, doesn’t it?

Well, mine does, anyway… which means it’s probably high time I had a spring-clean in there to make it more manageable. As well as maybe, y’know, stopping the habit of adding more feeds to the damned aggregator all the time. There’s too much interesting stuff (or grim stuff, or grimly interesting stuff) going on in the world, y’see; the temptation to stay on top of it all and let it just flow through my head like some sort of Zeitgeist/sewer-outflow hybrid is horribly compelling. I am the gauzy mesh in your perpetual flow of present history, plucking out interesting lumps of… no, actually, let’s stop that metaphor right there.

Anyways, long story short: had a busy few days and have more busyness ahead, so minimal commentary from me today. Instead, an exhortation to go and read stuff written by other folk far smarter than I. We’ll start with the manageably short piece, which is another Karl Schroeder joint at Chateau Stross (or should that be Schloss Stross?) where he talks about the difference between foresight futurism and “predicting the future”, and a new aspirational direction for his near-future science fiction output that is reminiscent of Jetse de Vries’ Optimistic SF manifesto:

… I’m pretty tired of all those, “Dude, where’s my flying car!” digs. There’s always been a certain brand of futurist who’s obsessed with getting it right: with racking up successful predictions like some modern-day Nostradamus. I’m sure you know who I’m talking about; some futurists play the prediction game very well, but in the end it is a game, and closer to charlatanism than it is to science. There’s actually no method for seeing the future, and nobody’s predictions are more reliable than anybody else’s.

You know, I think we do know who he’s talking about…

And while we’re thinking about the future, it’s hard to avoid thinking about problems, for – as a species and a planet – we have rather a lot of them right now. So many, in fact, that you might even say that reality itself is a failed state:

So maybe what we have today are not problems, but meta-problems.

It is very useful to confirm our understanding with others, to meet with fellow humans – preferably face-to-face – strength flows from this.

However, disquiet remains – no pre-catastrophic change of course seems in any way likely. What we might call ‘Fabian’ environmentalism has failed.

Occasionally a scientist will be so overcome with horror that he will make a radical public pronouncement – like the drunken uncle at a wedding, he may well be saying what everyone knows to be true, pulling the skeletons out of the family closet for all to see, but, well, it just doesn’t do to say that sort of thing out loud at a formal function.

This is all a little bit strange.

We understand the problems. We also, pretty much, understand the solutions. But their real-world application is a whole unpickable, integrated clusterfuck.

I believe part of the meta-problem is this: people no longer inhabit a single reality.

Collectively, there is no longer a single cultural arena of dialogue.

And we need to construct one. Go read the rest for the full lowdown. I’d love to be able to name the writer as something other than “Steelweaver”, but as he’s using a Tumblr with no About page or anything*, I am largely unable to do so. If you can fill in that datagap for me, please get in touch or leave a note in the comments.

[ * Note to writers of serious and/or interesting stuff on the intertubes: this is rather frustrating, and Tumblr really isn’t the best platform for this sort of stuff. Basically it’s the post-naivete ironic MySpace, optimised for collecting hipster aphorisms and reposting “art” shots that tend to contain boobs.

Just sayin’. ]

Today’s Tomorrows, 2011 edition

Apologies to Brenda for re-using the title of her column, but it’s the start of the year… and despite most of us knowing that dates (and indeed time itself) are relative, we tend to take that as an opportunity to step ourselves out of the temporal flow for a few days and take a look both backward and forward. Of course, looking backward and forward (with a side-serving of sideways) is our daily bread here at Futurismic, but it’s nice to feel like the rest of the world’s playing along, you know? 🙂

So why not pop over to The Guardian, where a collection of clever folk make twenty predictions about the next 25 years? Some are no-brainers (“Rivals will take greater risks against the US” – that’s more of a trend than a prediction, really), some seem a little naively optimistic (“The popular revolt against bankers will become impossible to resist” – I’d love to see it happen but doubt we will, at least here in the UK), and some are reheated versions of classic cyberpunk transhumanism, suddenly made mundane and plausible in the face of unprecedented technological advancement (“We’ll be able to plug information streams directly into the cortex”).

They all mark what, to me, is one of the most interesting social shifts of the last year or two: namely the sudden widespread acceptance of speculative thinking in mainstream media. Sure, it’s always been there, but it seems more ubiquitous now. Strange how we had to wait until the future was all around us before we started thinking hard about what shape it would be, no?

Speaking of speculative thinking, the BBC got in on the game back in December, picking apart some old (and largely failed) predictions from the 70s and quizzing present-day “futurologists” (which I maintain is a horrible noun) about how they do their work. David Brin’s response suggests that I’ve at least got the basic methodology sussed out:

“The top method is simply to stay keenly attuned to trends in the laboratories and research centres around the world, taking note of even things that seem impractical or useless,” says Brin.

“You then ask yourself: ‘What if they found a way to do that thing ten thousand times as quickly/powerfully/well? What if someone weaponised it? Monopolised it? Or commercialised it, enabling millions of people to do this new thing, routinely? What would society look like, if everybody took this new thing for granted?'”

That’s pretty much the query-set that sits in my forebrain as I drink from the RSS firehose each morning… 🙂
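
In fact, that morning ritual is simple enough to sketch in code, if you fancy automating it. Here’s a toy version in Python – the feed URLs and question templates are stand-ins of my own devising rather than Brin’s actual method, but it captures the shape of the exercise:

```python
import feedparser  # pip install feedparser

# Brin-style prompts to aim at any new trend or lab result
QUERY_SET = [
    "What if they found a way to do this ten thousand times as quickly/powerfully/well?",
    "What if someone weaponised it? Monopolised it?",
    "What if it was commercialised, so millions of people could do it routinely?",
    "What would society look like if everybody took it for granted?",
]

# Stand-in feeds; substitute your own aggregator's subscriptions
FEEDS = [
    "http://rss.slashdot.org/Slashdot/slashdot",
    "http://feeds.bbci.co.uk/news/science_and_environment/rss.xml",
]

for url in FEEDS:
    for entry in feedparser.parse(url).entries[:5]:  # freshest few from the firehose
        print(entry.title)
        for question in QUERY_SET:
            print("   -", question)
```

The thinking, of course, still has to happen in your own forebrain; the script just queues it up.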

And last but not least, it wouldn’t be early January without Chairman Bruce and Jon Lebkowsky taking the virtual podium at The WELL for their annual State Of The World discussion. Hell knows there’s plenty to talk about, right?

While Futurismic is no WELL (and I’m surely no Bruce Sterling, much to my own disappointment), I like the format they use there: like phone-in talk radio, but text based. So I’d like to take this opportunity to remind regular Futurismic heads that the contact page is always open – if you’ve seen something you think we should be talking about, or just have your own take on a story we’ve looked at already, then by all means drop me a line and let me know.

Shonky futurism: debunking Kurzweil

This one should set the transhumanist blogosphere alight for a week or so; IEEE Spectrum has an article that carefully picks apart the futurist predictions of Ray Kurzweil, prophet of the Technological Singularity. In summary: the best way to make successful predictions is to couch them vaguely enough that you can argue for their veracity after the fact [via SlashDot].

Therein lie the frustrations of Kurzweil’s brand of tech punditry. On close examination, his clearest and most successful predictions often lack originality or profundity. And most of his predictions come with so many loopholes that they border on the unfalsifiable. Yet he continues to be taken seriously enough as an oracle of technology to command very impressive speaker fees at pricey conferences, to author best-selling books, and to have cofounded Singularity University, where executives and others are paying quite handsomely to learn how to plan for the not-too-distant day when those disappearing computers will make humans both obsolete and immortal.

I have to admit to having a soft spot for Kurzweil and his geek-Barnum schtick, but as time has gone by (and with thanks to the readership of this very blog, who are very good at making me question my assumptions and reassess my ideas) I’ve increasingly seen him as a shrewd businessman rather than a visionary prophet.

That said, I think there’s a social value in his popularisation of transhumanist tropes – it takes real charisma to sell ideas that speculative to folk enmired in the corporatist mindset, and I think he reaches audiences who are resistant to the sort of speculative thinking that informs good science fiction. And as to his exorbitant speaking fees, well, that’s the marketplace at work. Can’t blame the guy for taking the money if it’s available, can you? After all, those diet supplements probably cost a fair bit… 😉

Musicians as futurists

If you want a passable guess at what the future will look like, maybe you should skip the science fiction shelves and head to the music department instead. The Guardian‘s John Naughton points out that David Bowie made some prescient statements about the current state of the music industry just under a decade ago, and that the Grateful Dead had sussed out a post-scarcity business model for a touring band long before anyone had started bandying that term about in connection with digital media – the latter point being a riff off an article in the Atlantic which I seem to remember hearing about somewhere else in the last week or so, quite possibly at TechDirt.

Of course, the Dead’s “vision” has long been the butt of snark from musicians and critics alike – only now does their anachronistic tribe-first model look like anything more than a weird hangover from the 60s. I very much doubt Bowie was the only person who foresaw the impending self-immolation of the recording industry – indeed, one would assume that a career as long as Bowie’s in the pertinent industry would be a great aid to exactly that sort of insight – and I’m surprised that any mention of music and futurism together doesn’t warrant some words on Brian Eno… but Naughton’s post is a healthy reminder that proleptic predictions are as much a function of hindsight as they are of foresight, if not more so.

Attention, futurist gamblers: long odds on Artificial General Intelligence

Pop-transhumanist organ H+ Magazine assigned a handful of writers to quiz AI experts at last year’s Artificial General Intelligence Conference, in order to discover how long they expect we’ll have to wait before we achieve human-equivalent intelligence in machines, what sort of level AGI will peak out at, and what AGI systems will look and/or act like, should they ever come into being.

It’s not a huge sample, to be honest – 21 respondents, of whom all but four are actively engaged in AI-related research. But then AGI isn’t a vastly populous field of endeavour, and who better to ask about its future than the people in the trenches?

The diagram below shows a plot of their estimated arrival dates for a selection of AGI milestones:

[Figure: AGI milestone estimates]

The gap in the middle is interesting; it implies that the basic split is between those who see AGI happening in the fairly near future, and those who see it never happening at all. Pop on over to the article for more analysis.
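
That bimodal shape is also a good reason to distrust any single “average” figure quoted from such a survey. With made-up numbers (emphatically not the survey’s own data), a quick sketch shows how the mean of a two-camp distribution lands in the empty middle, a date almost nobody actually predicted:

```python
import statistics

# Hypothetical estimates of years-until-AGI, purely illustrative:
# one camp clusters around a few decades, the other around "never"
# (crudely represented here as 500 years).
near_camp = [15, 20, 25, 30, 35]
never_camp = [500, 500, 500, 500]
estimates = near_camp + never_camp

print("median:", statistics.median(estimates), "years")      # 35 -- inside the near camp
print("mean:  ", round(statistics.mean(estimates)), "years") # ~236 -- lands in the gap
```

Which is exactly why the article’s plot of individual estimates is more informative than any headline number would be.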

The supplementary questions are more interesting, at least to me, because they involve sf-style speculation. For instance:

… we focused on the “Turing test” milestone specifically, and we asked the experts to think about three possible scenarios for the development of human-level AGI: if the first AGI that can pass the Turing test is created by an open source project, the United States military, or a private company focused on commercial profit. For each of these three scenarios, we asked them to estimate the probability of a negative-to-humanity outcome if an AGI passes the Turing test. Here the opinions diverged wildly. Four experts estimated a greater than 60% chance of a negative outcome, regardless of the development scenario. Only four experts gave the same estimate for all three development scenarios; the rest of the experts reported different estimates of which development scenarios were more likely to bring a negative outcome. Several experts were more concerned about the risk from AGI itself, whereas others were more concerned that humans who controlled it could misuse AGI.

If you follow the transhumanist/AGI blogosphere at all, you’ll know that the friendly/unfriendly debate is one of the more persistent bones of contention; see Michael Anissimov’s recent post for some of the common arguments against the likelihood of friendly behaviour from superhuman AGIs, for instance. But even if we write off that omega point and consider less drastic achievements, AGI could be quite the grenade in the punchbowl:

Several experts noted potential impacts of AGI other than the catastrophic. One predicted “in thirty years, it is likely that virtually all the intellectual work that is done by trained human beings such as doctors, lawyers, scientists, or programmers, can be done by computers for pennies an hour. It is also likely that with AGI the cost of capable robots will drop, drastically decreasing the value of physical labor. Thus, AGI is likely to eliminate almost all of today’s decently paying jobs.” This would be disruptive, but not necessarily bad. Another expert thought that, “societies could accept and promote the idea that AGI is mankind’s greatest invention, providing great wealth, great health, and early access to a long and pleasant retirement for everyone.” Indeed, the experts’ comments suggested that the potential for this sort of positive outcome is a core motivator for much AGI research.

No surprise to see a positive (almost utopian) gloss on such predictions, given their sources; scientists need that optimism to propel them through the tedium of research… which means it’s down to the rest of us to think of the more mundane hazards and cultural impacts of AGI, should it ever arrive.

So here’s a starter for you: one thing that doesn’t crop up at all in that article is any discussion of AGIs as cult figureheads or full-blown religious leaders (by their own intent or otherwise). Given the fannish/cultish behaviour that software and hardware can provoke (Apple/Linux/AR evangelists, I’m looking at you), I’d say the social impact of even a relatively dim AGI is going to be a force to be reckoned with… and it comes with a built-in funding model, too.

Terminator-esque dystopias aside, how do you think Artificial General Intelligence will change the world, if at all?