
A week in the unnecessary trenches of futurist philosophies

First things first: I should raise my hand in a mea culpa and admit that framing the recent spate of discussion about Singularitarianism as a “slap-fight” was to partake in exactly the sort of dumb tabloid reduction-to-spectacle that I vocally deplore when I see it elsewhere. There was an element of irony intended in my approach, but it wasn’t very successful, and does nothing to advance a genuinely interesting (if apparently insolvable) discussion. Whether the examples of cattiness on both sides of the fence can be attributed to my shit-stirring is an open question (and, based on previous iterations of the same debate, I’d be inclined to answer “no, or at least certainly not entirely”), but nonetheless: a certainty of cattiness is no reason to amplify or encourage it, especially not if you want to be taken seriously as a commentator on the topic at hand.

So, yeah: my bad, and I hope y’all will feel free to call me out if you catch me doing it again. (My particular apologies go to Charlie Stross because – contrary to my framing of such – his original post wasn’t intended to “start a fight” at all, but I’ve doubtless misrepresented other people’s positions as well, so consider this a blanket apology to all concerned.)

So, let’s get back to rounding up bits of this debate. The core discussion consisting of responses to Stross and counter-responses to such [see previous posts] seems to have burned out over the last seven days, which isn’t entirely surprising, as both sides are arguing from as-yet-unprovable philosophical positions on the future course of science and technology. (As I’ve said before, I suspect *any* discussion of the Technological Singularity or emergent GAI is inherently speculative, and will remain such unless/until either of them occur; that potentiality, as I understand it, informs a lot of the more serious Singularitarian thinking, which I might paraphrase as saying “we can’t say it’s impossible with absolute certainty, and given the disruptive potential of such an occurrence, we’d do well to spare some thought to how we might prevent it pissing in our collective punchbowl”.)

The debate continues elsewhere, however. Via Tor.com, we find an ongoing disagreement between Google’s Director of Research Peter Norvig and arch-left-anarchist linguist Noam Chomsky over machine learning methodologies. As I understand it, Chomsky rejects any attempt to recreate a system without an attempt to understand why and how that system works the way it does, while Norvig – not entirely surprisingly, given his main place-of-employment – reckons that statistical analysis of sufficiently large quantities of data can produce the same results without the need for understanding why things happen that way. While not specifically a Singularitarian debate, there’s a qualitative similarity here: two diametrically opposed speculative philosophical positions on an as-yet unrealised scientific possibility.
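For readers who’d like a concrete handle on what “statistical analysis without understanding” means in practice, here’s a deliberately tiny sketch in the spirit of the Norvig position: a bigram model that predicts the next word purely by counting what followed what in a corpus, with no grammar rules or linguistic theory anywhere in sight. (The corpus and names here are illustrative toys of my own, not anything from Norvig’s actual work.)

```python
from collections import defaultdict

# Toy corpus; a real statistical system would use billions of words.
corpus = "the cat sat on the mat the cat ate the rat".split()

# Count how often each word follows each other word -- pure statistics,
# zero linguistic theory.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict(word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = counts.get(word)
    if not followers:
        return None
    return max(followers, key=followers.get)

print(predict("the"))  # "cat" -- it followed "the" twice; "mat" and "rat" once each
```

The model “works” (it predicts plausibly) while telling us precisely nothing about *why* English works that way, which is more or less the crux of Chomsky’s objection.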

Elsewhere, Jamais Cascio raises his periscope with a post that prompted my apology above. Acknowledging the polar ends of the futurist spectrum – Rejectionism (the belief that we’re dooming ourselves to destruction by our own technologies) and Posthumanism (the techno-utopian assumption that technology will inevitably transform us into something better than what we already are) – he suggests that both outlooks are equally destructive, because they relieve us of the responsibility to steer the course of the future:

The Rejectionist and Posthumanist arguments are dangerous because they aren’t just dueling abstractions. They have increasing cultural weight, and are becoming more pervasive than ever. And while they superficially take opposite views on technology and change, they both lead to the same result: they tell us to give up.

By positing these changes as massive forces beyond our control, these arguments tell us that we have no say in the future of the world, that we may not even have the right to a say in the future of the world. We have no agency; we are hapless victims of techno-destiny. We have no responsibility for outcomes, have no influence on the ethical choices embodied by these tools. The only choice we might be given is whether or not to slam on the brakes and put a halt to technological development — and there’s no guarantee that the brakes will work. There’s no possible future other than loss of control or stagnation.

[…]

Technology is part of who we are. What both critics and cheerleaders of technological evolution miss is something both subtle and important: our technologies will, as they always have, make us who we are—make us human. The definition of Human is no more fixed by our ancestors’ first use of tools, than it is by using a mouse to control a computer. What it means to be Human is flexible, and we change it every day by changing our technology. And it is this, more than the demands for abandonment or the invocations of a secular nirvana, that will give us enormous challenges in the years to come.

I think Jamais is on to something here, and the unresolvable polarities of the debates we’ve been looking at underline his point. Here as in politics, the continuing entrenchment of opposing ideologies is creating a deadlock that prevents progress, and the framing of said deadlock as a fight is only bogging things down further. There’s a whole lot of conceptual and ideological space between these polar positions; perhaps we should be looking for our future in that no-man’s-land, before it turns into the intellectual equivalent of the Western Front circa 1918.

Reputation management services

If I were a bright-eyed huckster with a sharp suit and few morals (or should that be fewer?), online reputation management would be one of the business models I’d be thinking about putting into action. For as they say in Yorkshire, “where there’s muck, there’s brass”… quite a lot of brass, in fact, if this NYT piece is to be believed:

Reputation.com advertises an annual membership fee of $99, but Mr. Fertik said that costs could easily reach $10,000 for a prominent person who wanted to make a scandal harder to discover through Internet searches. (He said Mr. Weiner was probably out of luck: “It would take a long time and more money than he has.”)

For the detective work, the costs escalate quickly. Michael J. Hershman, president of the Fairfax Group, a risk and reputation management firm, said burying negative information could cost $500 to $1,000, but persuading search engines to expunge incorrect information could cost several thousand dollars more. Getting that information removed from aggregating Web sites like Intellius or PeopleFinder can add another couple of thousand dollars.

Costs can spike into five figures when a firm is asked to find the people responsible for the defamatory blog post or Twitter message. “If you’re going to hire a firm like ours to find that person, it’s hit or miss,” Mr. Hershman said. “We can’t guarantee success. It’s not as easy as going to the search engines.”

There’s a pretty obvious parallel here with the UK-centric phenomenon of super- and hyper-injunctions; the traditional privacy of the rich and privileged is becoming ever more difficult to maintain in the face of network culture. (This is, if I understand it correctly, the intended meaning of the old saw about “information wanting to be free”, rather than as a dubious ideological justification for content piracy.)

It remains to be seen whether power and money will win the battle in the long run; it probably won’t surprise regular readers to know that I rather hope it doesn’t, because that would mean the rich and powerful would be obliged to think about the potential fallout from their indiscretions before committing them, or face the consequences like everyone else.

The flipside is that life-damaging falsehoods can proliferate with equal ease, deliberately or accidentally, and that corrections to erroneous reports rarely have the same high profile or link-back rate as the initial reports themselves. That said, the nature of network culture suggests that concerted efforts to publicise truth and retractions are likely to be just as effective as deliberate smears or falsehoods propagated with the same degree of effort. As more and more raw data and evidence becomes part of the online ecosystem, it should in theory be possible to defend the truth more effectively as time goes by… but that discounts the regrettable realities of confirmation bias. As so many sensitive topics demonstrate – from global-level biggies like climate science, all the way down to gender representation disparities in science fiction publishing – no amount of data will convince those who simply don’t want to be convinced.

At this point in my thought-train, it’s time to bring in that small yet hardy perennial of geek-futurist topics, the reputation currency. These are still in their infancy, and as such are very open to gaming and logrolling; Amazon review ratings, for instance, vary wildly in their utility from product to product, though the eBay system is a little more robust and trustworthy, provided one does one’s due diligence. But that’s the key, I suspect; in the same way that I think we have to take responsibility for regulating the behaviour of corporations by thinking carefully about where we spend our money and/or attention-time, I think it’s also down to us to make sure we only trust systems that are trustworthy.

Easier said than done, of course, as it would require a pretty fundamental shift in attitudes toward who is responsible for protecting us from the more miscreant members of the species. But there’s another topical example that provides a potential model for a currency of trustworthiness, and that’s BitCoin. A trust currency, though, would actually have to be a sort of mirror image of BitCoin, in that it would have to be completely transparent at the transaction level, with every exchange documented and verified by the cloud of peers. (Whether such a system could ever scale to a global or even national level is way beyond my limited grokking of cryptotech; it’s a subject I really need to dig into properly at my soonest opportunity.)
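To make the “mirror image of BitCoin” idea a little less hand-wavy, here’s a minimal sketch of what such a transparent ledger might look like: every reputation exchange is public, hash-chained to the one before it, and any peer can replay the whole chain to verify that nothing has been quietly altered. This is purely hypothetical toy code of my own devising, not any real protocol.

```python
import hashlib
import json

def entry_hash(entry):
    """Deterministic hash of an entry, via canonical (sorted-key) JSON."""
    return hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()

class ReputationLedger:
    """Toy transparent ledger: all vouches public, hash-chained in order."""

    def __init__(self):
        self.entries = []

    def vouch(self, voucher, subject, weight):
        prev = self.entries[-1]["hash"] if self.entries else None
        record = {"from": voucher, "for": subject, "weight": weight, "prev": prev}
        record["hash"] = entry_hash(record)
        self.entries.append(record)

    def verify(self):
        """Any peer can replay the chain and detect tampering."""
        prev = None
        for e in self.entries:
            body = {k: v for k, v in e.items() if k != "hash"}
            if e["prev"] != prev or entry_hash(body) != e["hash"]:
                return False
            prev = e["hash"]
        return True

    def reputation(self, subject):
        return sum(e["weight"] for e in self.entries if e["for"] == subject)

ledger = ReputationLedger()
ledger.vouch("alice", "bob", 1)
ledger.vouch("carol", "bob", 2)
print(ledger.verify(), ledger.reputation("bob"))  # True 3
```

Note the inversion of BitCoin’s design: instead of pseudonymous parties and hidden balances, everything here depends on every transaction being named and auditable – which is exactly why the scaling and privacy questions I gesture at above are the hard part.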

In the short- to medium-term, however, I think we can expect to see reputation management become an increasingly expensive and cut-throat theatre of business, alongside a broad swathe of attempts to reinstate the privilege of privacy using the statute books. With any luck, though, the continual exposure of politicians and celebrities as having the same suite of flaws and stupidities as the rest of us might eventually encourage us to look past the headlines and start asking the questions that really matter… namely what these people do when they’re actually at work on our dime.