
A week in the unnecessary trenches of futurist philosophies

First things first: I should raise my hand in a mea culpa and admit that framing the recent spate of discussion about Singularitarianism as a “slap-fight” was to partake in exactly the sort of dumb tabloid reduction-to-spectacle that I vocally deplore when I see it elsewhere. There was an element of irony intended in my approach, but it wasn’t very successful, and does nothing to advance a genuinely interesting (if apparently insolvable) discussion. Whether the examples of cattiness on both sides of the fence can be attributed to my shit-stirring is an open question (and, based on previous iterations of the same debate, I’d be inclined to answer “no, or at least certainly not entirely”), but nonetheless: a certainty of cattiness is no reason to amplify or encourage it, especially not if you want to be taken seriously as a commentator on the topic at hand.

So, yeah: my bad, and I hope y’all will feel free to call me out if you catch me doing it again. (My particular apologies go to Charlie Stross because – contrary to my framing of such – his original post wasn’t intended to “start a fight” at all, but I’ve doubtless misrepresented other people’s positions as well, so consider this a blanket apology to all concerned.)

So, let’s get back to rounding up bits of this debate. The core discussion of responses to Stross and counter-responses to those [see previous posts] seems to have burned out over the last seven days, which isn’t entirely surprising, as both sides are arguing from as-yet-unprovable philosophical positions on the future course of science and technology. (As I’ve said before, I suspect *any* discussion of the Technological Singularity or emergent GAI is inherently speculative, and will remain so unless/until either of them occurs; that potentiality, as I understand it, informs a lot of the more serious Singularitarian thinking, which I might paraphrase as saying “we can’t say it’s impossible with absolute certainty, and given the disruptive potential of such an occurrence, we’d do well to spare some thought to how we might prevent it pissing in our collective punchbowl”.)

The debate continues elsewhere, however. Via Tor.com, we find an ongoing disagreement between Google’s Director of Research Peter Norvig and arch-left-anarchist linguist Noam Chomsky over machine learning methodologies. As I understand it, Chomsky rejects any attempt to recreate a system without trying to understand why and how that system works the way it does, while Norvig – not entirely surprisingly, given his main place-of-employment – reckons that statistical analysis of sufficiently large quantities of data can produce the same results without the need to understand why things happen that way. While not specifically a Singularitarian debate, there’s a qualitative similarity here: two diametrically opposed speculative philosophical positions on an as-yet unrealised scientific possibility.
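
(To make that contrast a little more concrete, here’s a toy sketch of my own devising – emphatically not code from either party, and the miniature corpus, word lists and function names are all invented for illustration. The first function judges a sentence by a hand-written rule that tries to explain why it’s well-formed; the second simply tallies how often the sentence’s word pairs appear in observed data, with no interest in the why.)

```python
from collections import Counter
from itertools import tee

# A deliberately tiny "corpus" of observed English, invented for this example.
corpus = "the cat sleeps . the dog sleeps . the cat eats . a dog barks .".split()

# Chomsky-flavoured: a hand-written rule that encodes an explanation of *why*
# a three-word sentence is well-formed (determiner + noun + verb).
DETERMINERS, NOUNS, VERBS = {"the", "a"}, {"cat", "dog"}, {"sleeps", "eats", "barks"}

def grammatical_by_rule(sentence):
    determiner, noun, verb = sentence.split()
    return determiner in DETERMINERS and noun in NOUNS and verb in VERBS

# Norvig-flavoured: score a sentence purely by how often its adjacent word
# pairs appear in the data, with no account of why those pairs co-occur.
def bigrams(words):
    first, second = tee(words)
    next(second, None)
    return zip(first, second)

bigram_counts = Counter(bigrams(corpus))

def score_by_data(sentence):
    return sum(bigram_counts[pair] for pair in bigrams(sentence.split()))

print(grammatical_by_rule("the cat sleeps"))  # True, because it fits the rule
print(score_by_data("the cat sleeps"))        # 3: these word pairs were observed
print(score_by_data("cat the sleeps"))        # 0: these pairs never occur in the data
```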

Elsewhere, Jamais Cascio raises his periscope with a post that prompted my apology above. Acknowledging the polar ends of the futurist spectrum – Rejectionism (the belief that we’re dooming ourselves to destruction by our own technologies) and Posthumanism (the techno-utopian assumption that technology will inevitably transform us into something better than what we already are) – he suggests that both outlooks are equally destructive, because they relieve us of the responsibility to steer the course of the future:

The Rejectionist and Posthumanist arguments are dangerous because they aren’t just dueling abstractions. They have increasing cultural weight, and are becoming more pervasive than ever. And while they superficially take opposite views on technology and change, they both lead to the same result: they tell us to give up.

By positing these changes as massive forces beyond our control, these arguments tell us that we have no say in the future of the world, that we may not even have the right to a say in the future of the world. We have no agency; we are hapless victims of techno-destiny. We have no responsibility for outcomes, have no influence on the ethical choices embodied by these tools. The only choice we might be given is whether or not to slam on the brakes and put a halt to technological development — and there’s no guarantee that the brakes will work. There’s no possible future other than loss of control or stagnation.

[…]

Technology is part of who we are. What both critics and cheerleaders of technological evolution miss is something both subtle and important: our technologies will, as they always have, make us who we are—make us human. The definition of Human is no more fixed by our ancestors’ first use of tools, than it is by using a mouse to control a computer. What it means to be Human is flexible, and we change it every day by changing our technology. And it is this, more than the demands for abandonment or the invocations of a secular nirvana, that will give us enormous challenges in the years to come.

I think Jamais is on to something here, and the unresolvable polarities of the debates we’ve been looking at underline his point. Here as in politics, the continuing entrenchment of opposing ideologies is creating a deadlock that prevents progress, and the framing of said deadlock as a fight is only bogging things down further. There’s a whole lot of conceptual and ideological space between these polar positions; perhaps we should be looking for our future in that no-man’s-land, before it turns into the intellectual equivalent of the Western Front circa 1918.