
Singularity Summit 2011, New York City

Here’s a heads-up from Mike Anissimov in his capacity as publicity person for the Singularity Institute: said institution is putting on a two-day Singularity Summit in New York in the middle of next month, on the 15th and 16th of October.

The topical focus is on robots and A.I., in particular that headline-grabbing Jeopardy! victory by IBM’s Watson supercomputer earlier in the year. Ray Kurzweil will be the opening keynote speaker, and others on the roster of big-name boffins include Stephen Wolfram, David Brin, Peter Thiel, Eliezer Yudkowsky and Jaan Tallinn. A more complete breakdown and running order can be found over at H+ Magazine (check the link above).

So, yeah; it’s a bit of a sausage-fest of a line-up, isn’t it? My own chances of attending are pretty much precluded by my being located on entirely the wrong side of the Atlantic, but if you fancy a weekend with the Big Brainy Boys of Transhumanist Boffinry, tickets are available at the rather terrifying price of US$350 for single-day access or a bargain US$585 for both days.

Yeah, I know. How many days’ worth of off-brand metabolic longevity diet supplements would that cover, I wonder? </snark>

Technohaberdashery

Now this is just wonderful, even if it’s a clear response to the start of a long (but maybe not so slow) ramping down from the current consumer-driven innovation model of technology business:

The notion of a “haberdashery for technology” came from traditional haberdasheries which are (or, more often than not, were) filled with knitting needles, sewing machines, patterns, buttons, thread and examples of clothes, bags and quilts that you can make yourself. They tend to have shop assistants who are experts at their craft, as opposed to general salespeople, and they give you advice and host classes to learn new sewing skills.

Hirschmann explains: “Now replace all of that with LEDs, circuit boards, soldering irons and lots of lovely little drawers with resistors, capacitors and switches. The store is immaculately organised and there are explanations of the bits and bobs near all of the components to help demystify what they do and how they might be useful. There is a selection of bespoke DIY kits for you to explore at home.”

Operations like this are a heartening sign, but the ones that last the course will probably be a little less worthy and a lot more ramshackle, much more along the lines of a “bring yer thing and fix it yerself, then pay me for the parts” sort of place: a free hackspace that both monetises and entices its meat-traffic with the same supplementary offering.

This sort of high-functioning ‘adaptive reuse as business model’ thing is an inevitable necessity for a world with low incomes and limited resources, really… but it’s not a new thing: think back not too far to the days when a door-to-door knife-sharpening guy might come round the neighbourhood once a season, for instance. As much as we talk about our technologies as being tools, we don’t value them the way a really good tool is valued, the way a good knife would be sharpened regularly all throughout its long working life. We think of “tools” as being almost a commodity concept nowadays, a word like “power”, “bandwidth” or “leverage”: “tools” is just our ease of access to Stuff That Does Things, our ability to buy or rent or borrow what we need when we need it.

That ability will cease to pertain in the realm of physical meatspace tools very quickly. This means good tools – well made, well used, maintained and cared for, stored properly – will become valuable social capital in a post-growth economy: an opportunity to contribute rather than a lever for power. Also: the return of the freelance artisan and jack’ll-fix-it, available in both static/urban and nomadic/rural models. Every block or village will have a guy who sysops for local businesses, f’rinstance, and probably another dude who handles the hardware side of things; less glamorously (but equally essentially), you’ll have white-hat infrastructure hackers, people who can patch a local power grid, keep water and sewage systems running, repair or demolish problem architecture… and again, none of this is new. Indeed, it’s current in any major city with a sizeable favela population.

Your city may not have any favelas right now, of course. But it will.

Further weird signals from the nearest strange attractors: some guy hustles Mercedes into sponsoring his prosthetic hand [via MetaFilter]. That’s a novel in nine words, right there, and it’s not even a made-up story. Related: the guy who swapped out his glass eye for a little digicam [via ModeledBehaviour]. These are just two of real on-the-ground transhumanism’s many, many faces; there will be more of them to come. The two greatest mistakes one can make about transhumanism are falling for the Kurzweilian corporate Singularity fantasy (which I increasingly suspect portrays only the parts of the future reserved for shareholders), and assuming that the ludicrousness of said Singularity fantasy invalidates or derails the existence of an observable and growing subculture. (Confession time: I’ve been guilty of both before now.)

To put it another way: we won’t be uploading our minds any time soon, but there’s more unexpected-consequences-of-being-cyborgs in the very near future of our species, without a doubt… because another of those new artisan careers will be the bodysculptor, the back-street surgeon, and they will not be short of work (even if most of it will be elective or cosmetic rather than… functional, shall we say).

At this point someone is sure to be thinking “but to do that to yourself would be genuinely insane – like, actual pathology craziness!” You’re probably right, too. But the problem with dismissing the more extreme examples of the transhumanist urge (no matter how shallowly understood it appears to be in each participant) as mental pathology is that doing so is a convenient way of avoiding the real problem: what’s causing that craziness, and how prevalent is it? The second question is the less important of the two, because it’s the one that’ll answer itself very quickly. The answer to the first will be something already embedded deep enough in the body of our civilisation that its removal would kill or cripple us: it is technology itself, and the madness of kids trying to become the Terminator is the madness of a body trying to remake itself in an image more like the ones it dreams of.

It is the madness of being young in a mad world, and it will not be cured or engineered away.

Singularity linkage

A few more Singularitarian nuggets have drifted into my intertubes dragnet over the weekend. Having not had much chance to read and absorb, I’ll just throw ’em up for those of you who’ve not got distracted by the Shiny Of The Day (whatever that might be – I’m suffering from a case of Momentary Zeitgeist Disconnection here at the moment).

First up, Charlie Stross is back with a mention of “Fedorov’s Rapture”, a sort of proto-extropian philosophy with its roots in Russian Orthodox Xtianity:

A devout Christian (of the Russian Orthodox variety), “Fedorov found the widespread lack of love among people appalling. He divided these non-loving relations into two kinds. One is alienation among people: ‘non-kindred relations of people among themselves.’ The other is isolation of the living from the dead: ‘nature’s non-kindred relation to men.'” … “A citizen, a comrade, or a team-member can be replaced by another. However a person loved, one’s kin, is irreplaceable. Moreover, memory of one’s dead kin is not the same as the real person. Pride in one’s forefathers is a vice, a form of egotism. On the other hand, love of one’s forefathers means sadness in their death, requiring the literal raising of the dead.”

Fedorov believed in a teleological explanation for evolution, that mankind was on the path to perfectibility: and that human mortality was the biggest sign of our imperfection. He argued that the struggle against death would give all humanity a common enemy — and a victory condition that could be established, in the shape of (a) achieving immortality for all, and (b) resurrecting the dead to share in that immortality. Quite obviously immortality and resurrection for all would lead to an overcrowded world, so Fedorov also advocated colonisation of the oceans and space: indeed, part of the holy mission would inevitably be to bring life (and immortal human life at that) to the entire cosmos.

I doubt that comparisons to religious eschatologies are going to be any better received than accusations of magical thinking, but hey. (As a brief sidebar, I was probably primed for my own interest in Singularitarianism by the redeployment of Teilhard de Chardin‘s Omega Point idea in Julian May’s Galactic Milieu series.)

And here are two more from the admirably prolific Michael Anissimov. First up, The Illusion of Control in an Intelligence Amplification Singularity, which is a complex enough piece to make a simple summing-up a futile exercise, so go read the whole thing – there’s some valuable thinking in there. The opening paragraph, though, pretty much sums up my concerns about Singularitarianism:

From what I understand, we’re currently at a point in history where the importance of getting the Singularity right pretty much outweighs all other concerns, particularly because a negative Singularity is one of the existential threats which could wipe out all of humanity rather than “just” billions.

I can understand the risks; it’s the likelihood I remain to be convinced of. And given all the other serious global problems we’re facing right now, having the Singularity “outweigh all other concerns” strikes me as narrowly hyperopic at best. How’s about post-corporatist economics? Energy generation, distribution and storage? Sanitation? Resource logistics? Global suffrage and a truly democratic system of governance? Climate change? These all strike me as far more immediate and pressing threats to human survival. A hard-takeoff Singularity as posited here is an existential risk akin to a rogue asteroid strike: certainly not to be ignored, but the response needs to be proportional to the probability of it actually happening… and at the moment I think the asteroids are the more pressing concern, even for us folks lucky enough to have the economic and cognitive surplus to spend our time arguing about stuff on the intertubes.

Secondly, another riposte to Alex Knapp:

To be pithy, I would argue that humans suck at all kinds of thinking, and any systems that help us approach Bayesian optimality are extremely valuable because humans are so often wrong and overconfident in many problem domains. Our overconfidence in our own reasoning even when it explicitly violates the axioms of probability theory routinely reaches comic levels. In human thinking, 1 + 1 really can equal 3. Probabilities don’t add up to 100%. Events with base rates of ~0.00001%, like fatal airplane crashes, are treated as if their probabilities were thousands of times the actual value. Even the stupidest AIs have a tremendous amount to teach us.

The problem with humans is that we are programmed to violate Bayesian optimality routinely with half-assed heuristics that we inherited because they are “good enough” to keep us alive long enough to reproduce and avoid getting murdered by conspecifics. With AI, you can build a brain that is naturally Bayesian — it wouldn’t have to furrow its brow and try real hard to obey simple probability theory axioms.
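
(To make the base-rate point concrete, here’s a quick back-of-the-envelope sketch – mine, not Anissimov’s, and the numbers are purely illustrative – of the sort of Bayesian update that human intuition routinely fumbles.)

```python
# Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E).
# Illustrative numbers only: a rare event with a ~0.00001% base rate, plus a
# vivid but noisy indicator (news coverage, a gut feeling) that it's occurred.

def posterior(prior, p_e_given_h, p_e_given_not_h):
    """Probability of hypothesis H after observing evidence E."""
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

prior = 1e-7             # base rate: roughly 1 in 10 million
p_e_given_h = 0.99       # the indicator almost always fires when the event is real...
p_e_given_not_h = 0.01   # ...but also fires 1% of the time when it isn't

print(posterior(prior, p_e_given_h, p_e_given_not_h))
# ~1e-5: still only about 1 in 100,000. Intuition, anchored on the vivid
# indicator rather than the base rate, tends to land somewhere nearer 0.99,
# wrong by several orders of magnitude: the miscalibration quoted above.
```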

Knapp himself crops up in the comments with a counter-response:

What I question is the scientific basis from which artificial general intelligence can be developed. More specifically, my primary criticism of AGI is that we don’t actually know how the mechanism of intelligence works within the human brain. Since we don’t know the underlying physical principles of generalized intelligence, the likelihood that we’ll be able to design an artificial one is pretty small. [This reminds me of the Norvig/Chomsky debate, with Knapp siding with Chomsky’s anti-black-box attitude. – PGR]

Now, if you want to argue that computers will get smart at things humans are bad at, and therefore be a complement to human intelligence, not only will I not disagree with you, I will politely point out that that’s what I’ve been arguing THE WHOLE TIME.

More to come, I expect. I really need to convince someone to let me write a big ol’ paid piece about this debate, so I can justify taking a week to read up on it all in detail…

A week in the unnecessary trenches of futurist philosophies

First things first: I should raise my hand in a mea culpa and admit that framing the recent spate of discussion about Singularitarianism as a “slap-fight” was to partake in exactly the sort of dumb tabloid reduction-to-spectacle that I vocally deplore when I see it elsewhere. There was an element of irony intended in my approach, but it wasn’t very successful, and it did nothing to advance a genuinely interesting (if apparently insoluble) discussion. Whether the examples of cattiness on both sides of the fence can be attributed to my shit-stirring is an open question (based on previous iterations of the same debate, I’d be inclined to answer “no, or at least certainly not entirely”), but nonetheless: a certainty of cattiness is no reason to amplify or encourage it, especially not if you want to be taken seriously as a commentator on the topic at hand.

So, yeah: my bad, and I hope y’all will feel free to call me out if you catch me doing it again. (My particular apologies go to Charlie Stross because – contrary to my framing of such – his original post wasn’t intended to “start a fight” at all, but I’ve doubtless misrepresented other people’s positions as well, so consider this a blanket apology to all concerned.)

So, let’s get back to rounding up bits of this debate. The core discussion – responses to Stross and counter-responses to such [see previous posts] – seems to have burned out over the last seven days, which isn’t entirely surprising, as both sides are arguing from as-yet-unprovable philosophical positions on the future course of science and technology. (As I’ve said before, I suspect *any* discussion of the Technological Singularity or emergent GAI is inherently speculative, and will remain such unless/until either of them occurs; that potentiality, as I understand it, informs a lot of the more serious Singularitarian thinking, which I might paraphrase as saying “we can’t say it’s impossible with absolute certainty, and given the disruptive potential of such an occurrence, we’d do well to spare some thought to how we might prevent it pissing in our collective punchbowl”.)

The debate continues elsewhere, however. Via Tor.com, we find an ongoing disagreement between Google’s Director of Research Peter Norvig and arch-left-anarchist linguist Noam Chomsky over machine learning methodologies. As I understand it, Chomsky rejects any attempt to recreate a system’s behaviour without an attempt to understand why and how that system works the way it does, while Norvig – not entirely surprisingly, given his main place-of-employment – reckons that statistical analysis of sufficiently large quantities of data can produce the same results without the need to understand why things happen that way. While not specifically a Singularitarian debate, there’s a qualitative similarity here: two diametrically opposed speculative philosophical positions on an as-yet unrealised scientific possibility.
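
(By way of a toy illustration – my own, nothing from either the Norvig or Chomsky pieces – the statistical approach at its most minimal is something like a bigram model, which produces plausible word sequences purely by counting a corpus, with no grammar and no account of why those sequences occur:)

```python
import random
from collections import defaultdict

# A toy bigram language model: Norvig-style statistics in miniature, with no
# Chomsky-style theory of grammar anywhere in it. The corpus is deliberately
# trivial; the approach only gets interesting at web scale.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count which words follow which, and nothing more.
following = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    following[a].append(b)

def babble(word, length=6):
    """Generate text by repeatedly sampling a statistically plausible next word."""
    out = [word]
    for _ in range(length):
        options = following.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(babble("the"))
# e.g. "the cat sat on the rug": plausible-looking output from a system with
# no representation whatsoever of what a noun, a verb or a cat might be.
```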

Elsewhere, Jamais Cascio raises his periscope with a post that prompted my apology above. Acknowledging the polar ends of the futurist spectrum – Rejectionism (the belief that we’re dooming ourselves to destruction by our own technologies) and Posthumanism (the technoutopian assumption that technology will inevitably transform us into something better than what we already are) – he suggests that both outlooks are equally destructive, because they relieve us of the responsibility to steer the course of the future:

The Rejectionist and Posthumanist arguments are dangerous because they aren’t just dueling abstractions. They have increasing cultural weight, and are becoming more pervasive than ever. And while they superficially take opposite views on technology and change, they both lead to the same result: they tell us to give up.

By positing these changes as massive forces beyond our control, these arguments tell us that we have no say in the future of the world, that we may not even have the right to a say in the future of the world. We have no agency; we are hapless victims of techno-destiny. We have no responsibility for outcomes, have no influence on the ethical choices embodied by these tools. The only choice we might be given is whether or not to slam on the brakes and put a halt to technological development — and there’s no guarantee that the brakes will work. There’s no possible future other than loss of control or stagnation.

[…]

Technology is part of who we are. What both critics and cheerleaders of technological evolution miss is something both subtle and important: our technologies will, as they always have, make us who we are—make us human. The definition of Human is no more fixed by our ancestors’ first use of tools, than it is by using a mouse to control a computer. What it means to be Human is flexible, and we change it every day by changing our technology. And it is this, more than the demands for abandonment or the invocations of a secular nirvana, that will give us enormous challenges in the years to come.

I think Jamais is on to something here, and the unresolvable polarities of the debates we’ve been looking at underline his point. Here as in politics, the continuing entrenchment of opposing ideologies is creating a deadlock that prevents progress, and the framing of said deadlock as a fight is only bogging things down further. There’s a whole lot of conceptual and ideological space between these polar positions; perhaps we should be looking for our future in that no-man’s-land, before it turns into the intellectual equivalent of the Western Front circa 1918.

Singularity beef, day 5

Yup, it’s still rolling. Here are the post-Stross posts that came in over the weekend:

Anyone else catch any goodies?

[ * Interestingly enough, Fukuyama himself has more recently veered considerably away from the theories espoused in The End Of History… ]

[ ** For the record, I really admire Brin as a challenging thinker; I’d admire him even more if he spent less time reminding me of his past successes. ]