
How we Relate to Animals

So… last month I explored progress with stem cells. I plan to return to futuristic medicine soon, but this month I decided to talk about animals.

We have three dogs: a golden retriever and two border collies. The border collies are wicked smart. I’m pretty sure that across some narrow bands they are smarter than we are. For example, they can manipulate us into behaving the way they want pretty effectively – they’re herding dogs, after all. Sometimes they’ll get us all gathered together before we even realize it. Other times we know, but they still manipulate us into doing what they want. They have to vary their techniques regularly to keep succeeding. I am a hundred percent confident that smarts, feelings, and sometimes a big chunk of creativity go into their behaviors.

Uplift ethics, round two

Unsurprisingly, there are some responses to my screed from yesterday on the ethics of animal uplift. First up is George Dvorsky’s riposte:

First, when I talk about the “same cognitive gifts that we have,” I am not necessarily suggesting that we humanize non-human animals—though I concede that some human characteristics, such as the capacity for speech and complex recursive language, are important augmentations. More accurately, I am discussing animal uplift in the context of the broader thrust that sees not just humans move away from the Darwinian paradigm, but the entire ecosystem itself. I realize that’s not a small or subtle thing, but eventually our entire planet’s biosphere will come under the auspices of intelligent oversight—what in some circles has been referred to as technogaianism. We are poised to systematically replace a number of autonomous environmental and evolutionary systems with new and improved ones that will see a dramatic reduction in global suffering and a much more vibrant planet. And quite obviously it’ll also be part of our efforts to fix the damage we’ve done thus far to Earth. So, when I talk about enhancing animals, I’m talking about bringing them into the postbiological fold along with us. To just leave the animal kingdom alone to fend for itself seems plain wrong and repugnant to me.

Well, OK, technogaianism seems like an idea I can acknowledge as a net good, but Dvorsky’s confidence in its imminence seems undimmed by the fact that we don’t currently have a global political framework that can ensure every human being gets their fair share of available resources and a say in how things are run. Hell, in a lot of places, that isn’t even available locally – just look at the current (and growing) schism between the political classes and the general populace in Europe and the US at the moment. You think you’re going to be able to set up a global technological framework for regulating the biosphere with even a simple majority consent from the population, given how difficult it is trying to convince people that as blindingly obvious a problem as anthropogenic global warming is worth acting on? Good luck with that, seriously.

I mean, I think it’s an attainable goal, but it’s gonna take a lot of work… and a far deeper understanding of the complexity of planet-scale ecosystems than we currently have, not to mention a more inclusive sort of politics that acknowledges and allows for different attitudes to the husbandry of planetary resources. To make a medical analogy, we’re still at the draining-humours-with-leeches stage of planetary management.

At no point do I suggest that we should “leave the animal kingdom alone to fend for itself”. Quite the contrary: we should repair the environments that support it, and – as far as is possible – give it space to exist without any interference from us whatsoever. A safari park planet, if you like… or you could think of it, perhaps, as the biosphere equivalent of declaring a heritage zone for protection. The biosphere gave rise to us, but our sentience does not implicitly grant us mastery over it – merely a custodial duty of care. Might does not make right. Which brings us to Dvorsky’s second point:

Second, and related to the first point, I think many of my detractors must have a very different definition of imperialism than I do. What they see as imperialism (though I’m not exactly sure what they’re suggesting humans are exploiting here) I see as compassion.

Oh, man, come on. The bringing of civilisation to “backwards” natives has always been framed in the rhetoric of compassion and moral duty – it’s all for their own good, right? The exploitation angle always comes after the intervention (though in some cases it may have been an unspoken motivation from the outset). And the last half a century or so is replete with examples of how essentially liberal impulses can still drive essentially imperialistic projects: I refer you first and foremost to America’s earnest but severely misguided (not to mention tragically blundered) attempts to spread the benefits of democracy and corporate capitalism to the developing world. And bear in mind that this has, in a number of cases, been done in places where the recipients of this attempted cultural uplift were able to observe and even desire the more visible trappings of the enfranchisement they were being offered (even if their understanding of the full consequences of said enfranchisement remained opaque, whether deliberately or not).

Shorter version: you can explain the possible benefits of cultural uplift to another human, and give them the choice (though the latter stage has historically been skimped upon more often than not, and the former rarely done as thoroughly and honestly as a clean conscience might require). But with non-human persons, with whom you have only a very limited framework of language through which to communicate extremely complex ideas, you don’t even have the option of warning them what’s to come. Is it not possible that we could get it right, and uplift an extended genetic family of great apes who’d be grateful to us for doing so? I don’t think it’s impossible. But I think we’d be in a much better position to take that chance once we’d demonstrated an ability to uplift our human sisters and brothers to the same position of privilege we already occupy. Don’t run before you can walk, y’know?

I find it interesting how many critics of uplift call upon Western norms and taboos to make their case, while my ethics is almost exclusively informed by Eastern philosophies, namely Buddhism. I look at animal uplift in the same way I do any other compassionate act in which a human or non-human animal is pulled-up from deplorable conditions, whether it be extreme poverty, or having to survive alone in the jungle.

Right, I’m no zoologist, but I think this portrayal of apes in misery “having to survive alone in the jungle” is anthropocentrism writ large. How can you be sure that the apes aren’t completely happy in the environment that they evolved to inhabit, or with the society and culture they’ve developed as a result? Sure, nature’s red in tooth and claw, and I’m not naive enough to think that apes – or any other animal – live in some sort of bucolic Eden. But who are we to decide on their behalf that a more human lifestyle would be preferable to them? I dare say it probably would be if the project of uplift succeeded in humanizing them, but again, you’d have made that decision to change their state of being on their behalf, because you’re so certain that human consciousness is the known peak of sentience. And of course you’re certain! I dare say if you could ask a well-fed dog in the midst of running after a thrown stick whether everyone would prefer to be a dog, they’d enthusiastically agree with your suggestion. Privilege breeds conceit.

Let’s try it another way: if you’re making an argument that apes should already have the rights of personhood conferred upon them, how can you not include the fundamental right of a person not to have major changes to their state of being made to them without their express consent? You can’t have your cake and eat it, guys; either apes are persons already, and hence deserving of your protection from those who would meddle with their state of being, or they’re not yet persons, and you’re making the indubitably anthropocentric assumption that the human state of being is superior to what they have already, and that they’d surely thank you for being raised to it.

Perhaps the latter is true, but here’s the thing – you only get to find out after you’ve done it. Our philosophical difference here is over whether that risk is a reasonable one to take given the potential rewards of the outcome. What worries me most about sitting down to do that particular bit of moral calculus is that while all the potential gain would accrue to the uplifted apes, so would all the potential risk.

You must not play god with the state of being of an entire species. Put the shoe on the other foot for a moment, and imagine the arrival of an alien species so far in advance of our own state of being that their motives, philosophies and moral framework are completely incomprehensible to us. We can see that they have conquered various technical and scientific problems which have thus far eluded us; as far as we can tell without being able to actually immerse ourselves in their culture, they seem happy and fecund and fulfilled, though their long-term goals are completely inscrutable, and they do many things that make no sense to us at all.

Now imagine said alien race starts plucking up a few randomly picked humans with the intent of making them more like the aliens. (This is, I believe, the basic concept of Octavia Butler’s Xenogenesis series of novels, which are unquestionably postcolonial texts.) The end result is something neither human nor alien, but something in between, something carrying the legacy of a sociobiological experiment in which they had no say; something caught between two preexisting cultures, sprung from both, belonging to neither. Deliberately or not, you create an outsider species. Being as familiar with human emotions and attitudes as you must be (what with being one), can you really imagine your uplifted people having no resentment of this in-between state of being? Perhaps you can, but if that’s the case I humbly suggest you’ve had a very fortunate and privileged life already, and that doesn’t put you in a very good position for empathising with those who’ve not been so lucky in a manner that doesn’t – quite unintentionally – come out as condescension.

I’m going to issue a challenge to the opponents of animal uplift: Go back and live in the forest. I mean it. Reject all the technological gadgetry in your possession and all the institutions and specialists you’ve come to depend on. Throw away your phones, your shoes, your glasses and your watches. Denounce your education. As I’m sure I don’t have to remind anybody, it’s these things that have uplifted humanity from its more primitive “natural” state. Humans haven’t been truly human for thousands of years; we’ve been transhuman for quite some time now. If you reject animal uplift, then you must reject your very own transhuman condition.

Yeah, like that’s going to happen. Pretty easy to dismiss uplift from the position of privilege, isn’t it? Who’s the real imperialist, here?

I’ve been reading your stuff for many years now, George, and I really thought your rhetorical chops were up to a higher standard than this: a rough equivalent of “if you think life in Islamic Afghanistan is so awesome and deserving of protection, why don’t you go live there, huh?” (At least you’ve not gone so far as to wave whatever the uplift equivalent of the “it’s political correctness gone MAD!!!” banner might be.) Far from disproving my accusations of imperialist attitudes, you’ve actually strengthened them with this implicit labelling of ape culture as inferior to our own, as something they must be rescued from for their own good – after all, you’d find it impossible to cope with, so therefore it must be bad, and your life must hence be better!

The motive is pure, I’ll grant you – noble, even. But we all know what the road to hell is paved with, and you only have to look at Afghanistan (and Iraq, and countless other “backward” nation-states that have been thoroughly mangled by the neoliberal project to deliver Western-style cultural freedoms and economic liberty to places where it appeared to be lacking) to see plenty of examples that the liberal imperialist impulse is just as prone to enslaving or subjugating those it intends to uplift as the older monarchic imperialisms were.

I’m not suggesting that’s a deliberate outcome, mind you; I’m suggesting it’s a function of the inherently hierarchical way of looking at sentience that is powering this “obligation” to uplift. If you see sentience as a ladder with us stood on its highest rung and the apes a few rungs further down, then of course you’re going to feel you should pull them up the last few steps once you’ve clambered off onto the plateau at the top. But the anthropomorphic assumption here is that apes are as interested in climbing that cognitive ladder as you are. Heck, I’d bet you good money less than half your fellow humans would agree that humans climbing further up that ladder is an unmitigated good thing, and at least there you have the chance to make your case to someone who can potentially understand it. With the apes, you’re simply assuming your moral calculus will make them happy in the long run; as such, I return to my original diagnosis of well-intentioned hubris.

Ultimately, my argument boils down to this: if you truly believe that apes are human-like enough to deserve equivalent rights to us – a point on which I cautiously agree, I might add – then the first and greatest of those rights is the right not to have a new way of life forced upon you, whether “for your own good” or otherwise. Volition has to be a cornerstone of personhood. If it isn’t, where does volition enter the equation of sentience and ethics, exactly? This is a central question of postcolonial theory, and one which, I respectfully submit, we have not adequately answered in the context of our own species, let alone that of our genetic cousins.

This post is already running long (and eating a large chunk of my day), so I’ll leave discussing methods by which uplift might be achieved while still granting volition to its subjects for another day… though I will briefly note this part of Kyle Munkittrick’s response to my original post in that context:

My hope is that uplift technology will be based on our own human cognitive enhancement technology. Tech that enhances the mind as-is will enable animals to be more intelligent without altering their genes such that we change how an animal’s brain works. Animals uplifted in this way would contribute to neurodiversity and make Earth home to more than just one intelligent species.

OK, if you make the tools of enhancement non-invasive and volitional – to use a crude sf-nal example, by leaving brain-booster headsets lying around for apes to find and experiment with, if they so chose – then we’re talking about a very different ballgame. (And that gives me another opportunity to mention a favourite science fiction work in which that is one of the strands, namely Julian May’s Saga of Pliocene Exile…)

Uplift ethics and transhuman hubris

There’s a little splash of uplift-related news around the place, thanks to the topic-initiating power of a new documentary film which you may well have already seen mentioned elsewhere: Project Nim tells the story of Nim Chimpsky, the subject of an experiment intended to disprove Chomsky’s assertion that language is unique to humans. Here’s the trailer:

Here’s an interview with the film’s director, James Marsh, at The Guardian:

“The nature-versus-nurture debate clearly was part of the intellectual climate of that time and remains an interesting question – how much we are born a certain way, as a species and as individuals. In Nim’s case, he has a chimpanzee’s nature and that nature is an incredibly forceful part of his life. What [the scientists] try to do is inhibit his nature and you see the results in the story.

“I was intrigued because I hadn’t seen that in a film before, the idea of telling an animal’s life from cradle to grave using the same techniques as you would use for a human biography.”

Marsh admits that conveying Nim’s experiences was tough. “The overlap between the species [human and chimpanzee] does involve emotions. But at the same time I was very wary of those from the get-go. I felt that Nim’s life had been blighted by people projecting on to him human qualities and trying to make him something that he wasn’t.”

Meanwhile, George Dvorsky links to a piece about a report from the Academy of Medical Sciences that calls for new rules to govern research into “humanising animals”, though specifically in a more invasive and biological fashion than Project Nim:

Professor Thomas Baldwin, a member of the Academy of Medical Sciences working group that produced the report, said the possibility of humanised apes should be taken seriously.

“The fear is that if you start putting very large numbers of human brain cells into the brains of primates suddenly you might transform the primate into something that has some of the capacities that we regard as distinctively human… speech, or other ways of being able to manipulate or relate to us,” he told a news briefing in London.

“These possibilities that are at the moment largely explored in fiction we need to start thinking about now.”

Prof Baldwin, professor of philosophy at the University of York, recommended applying the “Great Ape Test”. If modified monkeys began to acquire abilities similar to those of chimpanzees, it was time to “hold off”.

“If it’s heading in that direction, red lights start flashing,” said Prof Baldwin. “You really do not want to go down that road.”

Dvorsky, a dyed-in-the-wool transhumanist, disagrees:

I’m just as concerned as anyone about the potential for abuse, particularly when animals are used in scientific experiments. But setting that aside, and assuming that cognitive enhancement could be done safely on non-human primates, there’s no reason why we should fear this. In fact, I take virtually the opposite stance to this report. I feel that humanity is obligated to uplift non-human animals as we simultaneously work to uplift ourselves (i.e. transhumanism).

Reading this report, I can’t help but feel that human egocentricity is driving the discussion. I sincerely believe that animal welfare is not the real issue here, but rather, ensuring human dominance on the planet.

Here we run into another reason why I’m a fellow-traveller and chronicler of transhumanism and not a card-carrier, because Dvorsky’s logic seems completely inverted to me. Is it not far more human-egocentric to view ourselves as the evolutionary pinnacle that all animals would aspire to achieve, were they but able to aspire? To make that decision on their behalf, on the basis of our own inescapably human-centric system of value-judgements?

Ultimately, we have to ask ourselves, why wouldn’t we wish to endow our primate cousins with the same cognitive gifts that we have?

Because they are not us. We are related, certainly, this much is inescapable, but a chimpanzee is not a human being, and to insist that uplift is a moral duty is to enshrine the inferiority-to-us of the great apes, not to sanctify their uniqueness. This is the voice of assimilation, the voice of homogenisation, the voice of empire. It is the voice of colonialist arrogance, and a form of species fascism. If we have any moral duty toward our genetic cousins, it is to protect them from the ravages we have committed on the world they have always lived in balance with. Why raise them up to our hallowed state of consciousness if all they stand to inherit is a legacy of a broken planet and a political framework that legitimises the exploitation of those considered to carry a debt to society’s most powerful?

Because make no mistake, even were we able to endow chimpanzees with the same cognitive powers as ourselves, we would still find reasons not to enfranchise them fully. If you can look at the disparities in enfranchisement of different human races and classes and genders in this world that still persist to this day, despite the lip-service liberalism of the privileged Western world to the contrary, and not see that life for uplifted apes would be a condition of slavery to science for science’s own sake (at the very best): a lifetime of being a bug in a glass jar, a curiosity and a joke and an object of pity… well, you can evidently look at the world very differently to how I can. In my world, that’s high-order hubris.

Dvorsky has another post which discusses more recent attempts at “cultural uplift”, which seems to be a more modern and ethically grounded update of Project Nim; while certainly more palatable than more directly biological interventions in animal cognition, I still feel there’s an arrogant flaw in assuming that human culture is superior (and hence obligatory) to an animal’s naturally evolved culture. Am I engaging in a sort of Noble Savage argument here, claiming that ape inferiority should be preserved in order that I can continue feeling superior to it? I don’t believe I am. You can only throw the Noble Savagery claim at me if you claim that there is already no value-difference between human culture and ape culture, and that apes are deserving of the same rights as man… at which point you not only concede the point I’m trying to make, but you also concede that you have no moral or cultural high-ground from which to decide that ape culture is inferior.

Apes are special, because they are so similar to us in so many ways; on this I think we can all agree. But to uplift them would not be an act of protecting and awarding that specialness; it would be, consciously or otherwise, an act of erasure, an attempt to equalise the specialness differential and make them just the same as us.

And that is human egocentricity in action – the same egocentricity whose trackmarks can be seen on the skin of the planet that gave rise to it, and whose roots are in a deep-seated envy and resentment of the innocence that is the true core of the difference between us and the great apes. It is that innocence that uplifting would erase; do you think an ape that thought like a human wouldn’t resent our theft of that innocence? Or would you keep them ignorant of the state they existed in before uplift? Immediately, inevitably, you create the conditions whereby you are obliged to treat these newly-minted man-apes in a less free condition than the one you have claimed to raise them up to.

To assume that we know what is good for an ape better than an ape itself is an act of spectacular arrogance, and no amount of dressing it up in noble colonial bullshit about civilising the natives will conceal that arrogance.

Furthermore, that said dressing-up can be done by people who frequently wring their hands over the ethical implications of the marginal possibility of sentient artificial intelligences getting upset about how they came to be made doesn’t go a long way toward rebutting the accusations of myopic technofetish, body-loathing and silicon-cultism that transhumanism’s more vocal detractors are fond of using.

H+ zero-day vulnerabilities, plus cetacean personhood

Couple of interesting nuggets here; first up is a piece from Richard Yonck at H+ Magazine on the risks inherent to the human body becoming an augmented and extended platform for technologies, which regular readers will recognise as a fugue on one of my favourite themes, Everything Can And Will Be Hacked. Better lock down your superuser privileges, folks…

In coming years, numerous devices and technologies will become available that make all manner of wireless communications possible in or on our bodies. The standards for Body Area Networks (BANs) are being established by the IEEE 802.15.6 task group. These types of devices will create low-power in-body and on-body nodes for a variety of medical and non-medical applications. For instance, medical uses might include vital signs monitoring, glucose monitors and insulin pumps, and prosthetic limbs. Non-medical applications could include life logging, gaming and social networking. Clearly, all of these have the potential for informational and personal security risks. While IEEE 802.15.6 establishes different levels of authentication and encryption for these types of devices, this alone is no guarantee of security. As we’ve seen repeatedly, unanticipated weaknesses in program logic can come to light years after equipment and software are in place. Methods for safely and securely updating these devices will be essential due to the critical nature of what they do. Obviously, a malfunctioning software update for something as critical as an implantable insulin pump could have devastating consequences.

Yonck then riffs on the biotech threat for a while; I’m personally less worried about the existential risk of rogue biohackers releasing lethal plagues, because the very technologies that make that possible are also making it much easier to defeat those sorts of pandemics. (I’m more worried about a nation-state releasing one by mistake, to be honest; there’s precedent, after all.)

Of more interest to me (for an assortment of reasons, not least of which is a novel-scale project that’s been percolating at the back of my brainmeat for some time) is his examination of the senses as equivalent to ‘ports’ in a computer system; those I/O channels are ripe for all sorts of hackery and exploits, and the arrival of augmented reality and brain-machine interfaces will provide incredibly tempting targets, be it for commerce or just for the lulz. Given it’s taken less than a week for the self-referential SEO hucksters and social media guru douchebags to infest the grouting between the circles of Google+, forewarned is surely forearmed… and early-adopterdom won’t be much of a defence. (As if it ever was.)

Meanwhile, a post at R U Sirius’ new zine ACCELER8OR (which, given its lack of by-line, I assume to be the work of The Man Himself) details the latest batch of research into advanced sentience in cetaceans. We’ve talked about dolphin personhood before, and while my objections to the enshrinement of non-human personhood persist (I think we’re wasting time by trying to get people to acknowledge the rights of higher animals when we’ve still not managed to get everyone to acknowledge the rights of their fellow humans regardless of race, creed or class) it’s still inspiring and fascinating to consider that, after years of looking into space for another sentient species to make contact with, there’s been one swimming around in the oceans all along.

Dovetailing with Yonck’s article above, this piece extrapolates onward to discuss the emancipation of sentient machines. (What if your AI-AR firewall system suddenly started demanding a five-day working week?)

A recent Forbes blog poses a key question on the issue of AI civil rights: if an AI can learn and understand its programming, and possibly even alter the algorithms that control its behavior and purpose, is it really conscious in the same way that humans are? If an AI can be programmed in such a fashion, is it really sentient in the same way that humans are?

Even putting aside the hard question of consciousness, should the hypothetical AIs of mid-century have the same rights as humans?  The ability to vote and own property? Get married? To each other? To humans? Such questions would make the current gay rights controversy look like an episode of “The Brady Bunch.”

Of course, this may all be a moot point given the existential risks faced by humanity (for example, nuclear annihilation) as elucidated by Oxford philosopher Nick Bostrom and others.  Or, our AIs actually do become sentient, self-reprogram themselves, and “20 minutes later,” the technological singularity occurs (as originally conceived by Vernor Vinge).

Give me liberty or give me death? Until an AI or dolphin can communicate this sentiment to us, we can’t prove if they can even conceptualize such concepts as “liberty” or “death.” Nor are dolphins about to take up arms anytime soon even if they wanted to — unless they somehow steal prosthetic hands in a “Day of the Dolphin”-like scenario and go rogue on humanity.

It would be mighty sad were things to come to that… but is anyone else thinking “that would make a brilliant movie”?

What non-human rights are really about

The issue of basic rights for the higher animals pops up with a certain regularity, especially in transhumanist circles; here’s George Dvorsky responding to some of the more usual objections:

The rights I’m talking about have to do with protections. Nonhuman animals, like humans, should be immune from undue confinement, abuse, experimentation, illicit trafficking, and the threat of unnatural death. And I’m inclined to leave it at that for now.

While these animals may not be as intelligent or knowledgeable as humans, their cognitive and emotional capacities are sophisticated enough to warrant special consideration. These are self-aware and self-reflexive animals. They are cognizant of other minds, exhibit deep emotional responses, and have profound social attachments. That’s not to be taken lightly.

At the same time I acknowledge that there has to be a realism applied to this issue. Nonhuman animals who qualify as persons cannot participate in society to the same degree that humans can. Thus, they should be considered and treated in the same manner that we do children and the developmentally disabled—which is that they still have rights! We would never experiment upon a 3-year old human child, nor would we force a mentally disabled person to perform in a circus. We believe this because we recognize that these individuals are endowed with (or have the potential for) the sufficient capacities required for personhood. Consequently, we protect them with laws.

For what it’s worth, I’m in agreement with Dvorsky on most of his points here, though I think the biggest roadblock to non-human rights is our incomplete provision of human rights. Until we live in a world where we genuinely treat all human beings – regardless of race, gender, physical or mental ability, attractiveness, intelligence or lack of privilege – as our equals (biological, economic and political), how can we ever hope to extend that parity to creatures whose existence we definitively can’t empathise with on the basis of experience? (Indeed, some of the more extreme animal rights advocates seem far more able to empathise with the suffering of animals than the emotions of their fellow humans, and as such have done their cause far more harm than good.)

I totally agree that we should be looking to protect non-human sentients from exploitation, but attempting to do so before we’ve flattened the human playing field is to put the cart before the horse and then wonder that the cart doesn’t respond to the whip. Look to the plank in one’s own eye, and all that.