Tag Archives: cognition

Uplift ethics and transhuman hubris

There’s a little splash of uplift-related news around the place, thanks to the topic-initiating power of a new documentary film which you may well have already seen mentioned elsewhere: Project Nim tells the story of Nim Chimpsky, the subject of an experiment intended to disprove Chomsky’s assertion that language is unique to humans.

Here’s an interview with the film’s director, James Marsh, at The Guardian:

“The nature-versus-nurture debate clearly was part of the intellectual climate of that time and remains an interesting question – how much we are born a certain way, as a species and as individuals. In Nim’s case, he has a chimpanzee’s nature and that nature is an incredibly forceful part of his life. What [the scientists] try to do is inhibit his nature and you see the results in the story.

“I was intrigued because I hadn’t seen that in a film before, the idea of telling an animal’s life from cradle to grave using the same techniques as you would use for a human biography.”

Marsh admits that conveying Nim’s experiences was tough. “The overlap between the species [human and chimpanzee] does involve emotions. But at the same time I was very wary of those from the get-go. I felt that Nim’s life had been blighted by people projecting on to him human qualities and trying to make him something that he wasn’t.”

Meanwhile, George Dvorsky links to a piece about a report from the Academy of Medical Sciences that calls for new rules to govern research into “humanising animals”, though research of a more invasive and biological sort than Project Nim’s:

Professor Thomas Baldwin, a member of the Academy of Medical Sciences working group that produced the report, said the possibility of humanised apes should be taken seriously.

“The fear is that if you start putting very large numbers of human brain cells into the brains of primates, suddenly you might transform the primate into something that has some of the capacities that we regard as distinctively human… speech, or other ways of being able to manipulate or relate to us,” he told a news briefing in London.

“These possibilities that are at the moment largely explored in fiction we need to start thinking about now.”

Prof Baldwin, professor of philosophy at the University of York, recommended applying the “Great Ape Test”. If modified monkeys began to acquire abilities similar to those of chimpanzees, it was time to “hold off”.

“If it’s heading in that direction, red lights start flashing,” said Prof Baldwin. “You really do not want to go down that road.”

Dvorsky, a dyed-in-the-wool transhumanist, disagrees:

I’m just as concerned as anyone about the potential for abuse, particularly when animals are used in scientific experiments. But setting that aside, and assuming that cognitive enhancement could be done safely on non-human primates, there’s no reason why we should fear this. In fact, I take virtually the opposite stance to this report. I feel that humanity is obligated to uplift non-human animals as we simultaneously work to uplift ourselves (i.e. transhumanism).

Reading this report, I can’t help but feel that human egocentricity is driving the discussion. I sincerely believe that animal welfare is not the real issue here, but rather, ensuring human dominance on the planet.

Here we run into another reason why I’m a fellow-traveller and chronicler of transhumanism and not a card-carrier, because Dvorsky’s logic seems completely inverted to me. Is it not far more human-egocentric to view ourselves as the evolutionary pinnacle that all animals would aspire to achieve, were they but able to aspire? To make that decision on their behalf, on the basis of our own inescapably human-centric system of value-judgements?

Ultimately, we have to ask ourselves, why wouldn’t we wish to endow our primate cousins with the same cognitive gifts that we have?

Because they are not us. We are related, certainly, this much is inescapable, but a chimpanzee is not a human being, and to insist that uplift is a moral duty is to enshrine the inferiority-to-us of the great apes, not to sanctify their uniqueness. This is the voice of assimilation, the voice of homogenisation, the voice of empire. It is the voice of colonialist arrogance, and a form of species fascism. If we have any moral duty toward our genetic cousins, it is to protect them from the ravages we have committed on the world they have always lived in balance with. Why raise them up to our hallowed state of consciousness if all they stand to inherit is a legacy of a broken planet and a political framework that legitimises the exploitation of those considered to carry a debt to society’s most powerful?

Because make no mistake, even were we able to endow chimpanzees with the same cognitive powers as ourselves, we would still find reasons not to enfranchise them fully. If you can look at the disparities in enfranchisement of different human races and classes and genders in this world that still persist to this day, despite the lip-service liberalism of the privileged Western world to the contrary, and not see that life for uplifted apes would be a condition of slavery to science for science’s own sake (at the very best): a lifetime of being a bug in a glass jar, a curiosity and a joke and an object of pity… well, you can evidently look at the world very differently to how I can. In my world, that’s high-order hubris.

Dvorsky has another post which discusses more recent attempts at “cultural uplift”, which seems to be a more modern and ethically grounded update of Project Nim; while certainly more palatable than directly biological interventions in animal cognition, I still feel there’s an arrogant flaw in assuming that human culture is superior to an animal’s naturally evolved culture (and that its adoption is hence obligatory). Am I engaging in a sort of Noble Savage argument here, claiming that ape inferiority should be preserved in order that I can continue feeling superior to it? I don’t believe I am. You can only throw the Noble Savagery claim at me if you claim that there is already no value-difference between human culture and ape culture, and that apes are deserving of the same rights as man… at which point you not only concede the point I’m trying to make, but you also concede that you have no moral or cultural high-ground from which to decide that ape culture is inferior.

Apes are special, because they are so similar to us in so many ways; on this I think we can all agree. But to uplift them would not be an act of protecting and awarding that specialness; it would be, consciously or otherwise, an act of erasure, an attempt to equalise the specialness differential and make them just the same as us.

And that is human egocentricity in action – the same egocentricity whose trackmarks can be seen on the skin of the planet that gave rise to it, and whose roots are in a deep-seated envy and resentment of the innocence that is the true core of the difference between us and the great apes. It is that innocence that uplifting would erase; do you think an ape that thought like a human wouldn’t resent our theft of that innocence? Or would you keep them ignorant of the state they existed in before uplift? Immediately, inevitably, you create the conditions whereby you are obliged to treat these newly-minted man-apes in a less free condition than the one you have claimed to raise them up to.

To assume that we know what is good for an ape better than an ape itself is an act of spectacular arrogance, and no amount of dressing it up in noble colonial bullshit about civilising the natives will conceal that arrogance.

Furthermore, the fact that said dressing-up can be done by people who frequently wring their hands over the ethical implications of the marginal possibility of sentient artificial intelligences getting upset about how they came to be made does little to rebut the accusations of myopic technofetishism, body-loathing and silicon-cultism that transhumanism’s more vocal detractors are fond of deploying.

Technology as brain peripherals

Via George Dvorsky, a philosophical push-back against that persistent “teh-intarwebz-be-makin-uz-stoopid” riff, as espoused by professional curmudgeon Nick Carr (among others)… and I’m awarding extra points to Professor Andy Clark at the New York Times not just for arguing that technological extension or enhancement of the mind is no different to repair or support of it, but for mentioning the lyrics to an old Pixies tune. Yes, I really am that easily swayed*.

There is no more reason, from the perspective of evolution or learning, to favor the use of a brain-only cognitive strategy than there is to favor the use of canny (but messy, complex, hard-to-understand) combinations of brain, body and world. Brains play a major role, of course. They are the locus of great plasticity and processing power, and will be the key to almost any form of cognitive success. But spare a thought for the many resources whose task-related bursts of activity take place elsewhere, not just in the physical motions of our hands and arms while reasoning, or in the muscles of the dancer or the sports star, but even outside the biological body — in the iPhones, BlackBerrys, laptops and organizers which transform and extend the reach of bare biological processing in so many ways. These blobs of less-celebrated activity may sometimes be best seen, myself and others have argued, as bio-external elements in an extended cognitive process: one that now criss-crosses the conventional boundaries of skin and skull.

One way to see this is to ask yourself how you would categorize the same work were it found to occur “in the head” as part of the neural processing of, say, an alien species. If you’d then have no hesitation in counting the activity as genuine (though non-conscious) cognitive activity, then perhaps it is only some kind of bio-envelope prejudice that stops you counting the same work, when reliably performed outside the head, as a genuine element in your own mental processing?

[…]

Many people I speak to are perfectly happy with the idea that an implanted piece of non-biological equipment, interfaced to the brain by some kind of directly wired connection, would count (assuming all went well) as providing material support for some of their own cognitive processing. Just as we embrace cochlear implants as genuine but non-biological elements in a sensory circuit, so we might embrace “silicon neurons” performing complex operations as elements in some future form of cognitive repair. But when the emphasis shifts from repair to extension, and from implants with wired interfacing to “explants” with wire-free communication, intuitions sometimes shift. That shift, I want to argue, is unjustified. If we can repair a cognitive function by the use of non-biological circuitry, then we can extend and alter cognitive functions that way too. And if a wired interface is acceptable, then, at least in principle, a wire-free interface (such as links your brain to your notepad, BlackBerry or iPhone) must be acceptable too. What counts is the flow and alteration of information, not the medium through which it moves.

Lots of useful ideas in there for anyone working on a new cyborg manifesto, I reckon… and some interesting implications for the standard suite of human rights, once you start counting outboard hardware as part of the mind. (E.g. depriving someone of their handheld device becomes similar to blindfolding or other forms of sensory deprivation.)

[ * Not really. Well, actually, I dunno; you can try and convince me. Y’know, if you like. Whatever. Ooooh, LOLcats! ]

Reasons not to worry about brain enhancement drugs

Professor Henry Greely reckons it’s high time (arf!) that we stopped trying to ban cognitive enhancement drugs and focused our attention on developing rules governing their use [via SentientDevelopments]. It’s a pragmatic approach; as Greely points out, the current grey legality of “revision drugs” like Ritalin isn’t doing anything to stop their use, and as the pharmaceutical industry introduces more cognition-boosting chemicals onto the market (albeit ostensibly as treatments for various maladies of the mindmeat), that situation is unlikely to reverse itself.

Of course, lots of people are scared of the idea of brain enhancement, and there are some good reasons for that. But there are also some bad (or at least illogical) reasons. Take it away, Mr Greely:

There are at least three unsound reasons for concern: cheating, solidarity, and naturalness.

Many people find the assertion that enhancement is cheating to be convincing. Sometimes it is: If rules or laws ban an enhancement, then using it is cheating. But that does not help in situations where there are no rules or the rules are still being determined. The problem with viewing enhancements as cheating is that enhancements, broadly defined, are ubiquitous. If taking a cognitive-enhancement drug before a college entrance exam is cheating, what about taking a prep course? Using a computer program for test preparation? Reading a book about taking the test? Drinking a cup of coffee the morning of the test? Getting a good night’s sleep before the test? To say that direct brain enhancement is inherently cheating is to require a standard of what the “right” competition is. What would be the generally accepted standard in our complex and only somewhat meritocratic society?

The idea of enhancement as cheating is also related to the idea that enhancement replaces effort. Yet the plausible cognitive enhancements would not eliminate the need to study; they would just make studying more effective. In any event, we do not reward effort, we reward success. People with naturally good memories have advantages over others in organic chemistry exams, but they did not work for that good memory.

Some argue that enhancement is unnatural and threatens to take us beyond our humanity. This argument, too, suffers from a major problem. All of our civilization is unnatural. A fair speaker could not fly across a continent, take a taxi to an air-conditioned auditorium, and give a microphone-assisted PowerPoint presentation decrying enhancement as unnatural without either a sense of humor or a good argument for why these enhancements are different. Because they change our physical bodies? So do medicine, good food, clothing, and a hundred other unnatural changes. Because they change our brains? So does education. What argument justifies drawing the line here and not there? A strong naturalness argument against direct brain enhancements, in particular, has not been—and I think cannot be—made. Humans have constantly been changing our world and ourselves, sometimes for better and sometimes for worse. A golden age of unenhanced naturalness is a myth, not an argument.

I’m guessing that most readers here are open to the idea of cognitive enhancement (by whatever method)… but even so, what’s the most compelling argument you’ve heard against it?

The multiphrenic world: Stowe Boyd strikes back on “supertasking”

… which is really a neologism for its own sake (a favourite gambit of Boyd’s, as far as I can tell). But let’s not let that distract from his radical (and lengthy) counterblast to a New York Times piece about “gadget addiction”, which chimes with Nick Carr’s Eeyore-ish handwringing over attention spans, as mentioned t’other day:

The fear mongers will tell us that the web, our wired devices, and remaining connected are bad for us. It will break down the nuclear family, lead us away from the church, and channel our motivations in strange and unsavory ways. They will say it’s like drugs, gambling, and overeating, that it’s destructive and immoral.

But the reality is that we are undergoing a huge societal change, one that is as fundamental as the printing press or harnessing fire. Yes, human cognition will change, just as becoming literate changed us. Yes, our sense of self and our relationships to others will change, just as it did in the Renaissance. Because we are moving into a multiphrenic world — where the self is becoming a network ‘of multiple socially constructed roles shaping and adapting to diverse contexts’ — it is no surprise that we are adapting by becoming multitaskers.

The presence of supertaskers does not mean that some are inherently capable of multitasking and others are not. Like all human cognition, this is going to be a bell-curve of capability.

As always, Boyd is bullish about the upsides; personally, I think there’s a balance to be found between the two viewpoints here, but – doubtless due to my own citizenship of Multiphrenia – I’m bucking the neophobes and leaning a long way toward the positives. And that’s speaking as someone who’s well aware that he’s not a great multitasker…

But while we’re talking about the adaptivity of the human mind, MindHacks would like to point out the hollowness of one of the more popular buzzwords on the subject, namely neuroplasticity [via Technoccult, who point out that Nick Carr uses the term a fair bit]:

It’s currently popular to solemnly declare that a particular experience must be taken seriously because it ‘rewires the brain’ despite the fact that everything we experience ‘rewires the brain’.

It’s like a reporter from a crime scene saying there was ‘movement’ during the incident. We have learnt nothing we didn’t already know.

Neuroplasticity is common in popular culture at this point in time because mentioning the brain makes a claim about human nature seem more scientific, even if it is irrelevant (a tendency called ‘neuroessentialism’).

Clearly this is rubbish and every time you hear anyone, scientist or journalist, refer to neuroplasticity, ask yourself what specifically they are talking about. If they don’t specify or can’t tell you, they are blowing hot air. In fact, if we banned the word, we would be no worse off.

That’s followed by a list of the phenomena that neuroplasticity might properly be referring to, most of which are changes in the physical structure of the brain rather than cognitive changes in the mind itself. Worth taking a look at.

Maybe it doesn’t matter that the internet is “making us stupid”

High-profile internet-naysayer and technology curmudgeon Nick Carr is cropping up all over the place; these things happen when one has a new book in the offing, y’know*. He’s the guy who claims that Google is making us stupid, that links embedded in HTML sap our ability to read and understand written content (cognitive penalties – a penalty that even the British can do properly, AMIRITE?), and much much more.

The conclusions of Carr’s new book, The Shallows – that, in essence, we’re acquiring a sort of attention deficit problem from being constantly immersed in a sea of bite-sized and interconnected info – have been given a few polite kickings, such as this one from Jonah Lehrer at the New York Times. I’ve not read The Shallows yet, though I plan to; nonetheless, from the quotes and reviews I’ve seen so far, it sounds to me like Carr is mapping the age-related degradation of his own mental faculties onto the world as a whole, and looking for something to blame.

I should add at this point that, although I disagree with a great number of Carr’s ideas, he’s a lucid thinker, and well worth reading. As Bruce Sterling points out, grumpy gadfly pundits like Carr are useful and necessary for a healthy scene, because the urge to prove them wrong drives further innovation, thinking, research and development. He’s at least as important and worth reading as the big-name webvangelists… who all naturally zapped back at Carr’s delinkification post with righteous wrath and snark. The joy of being a mere mortal is, surely, to watch from a safe point of vantage while the gods do battle… 😉

But back to the original point: there’s always a trade-off when we humans acquire new technologies or skills, and what’s missing from commentators decrying these apparent losses is any suggestion that we might be gaining something else – maybe something better – as part of the deal; technological symbiosis is not a zero-sum game, in other words. Peripherally illustrating the point, George Dvorsky points to some research that suggests that too good a memory is actually an evolutionary dead end, at least for foraging mammals:

These guys have created one of the first computer models to take into account a creature’s ability to remember the locations of past foraging successes and revisit them.

Their model shows that in a changing environment, revisiting old haunts on a regular basis is not the best strategy for a forager.

It turns out instead that a better approach strategy is to inject an element of randomness into a regular foraging pattern. This improves foraging efficiency by a factor of up to 7, say Boyer and Walsh.

Clearly, creatures of habit are not as successful as their opportunistic cousins.

That makes sense. If you rely on that same set of fruit trees for sustenance, then you are in trouble if these trees die or are stripped by rivals. So the constant search for new sources of food pays off, even if it consumes large amounts of resources. “The model forager typically spends half of its traveling time revisiting previous places in an orderly way, an activity which is reminiscent of the travel routes used by real animals,” say Boyer and Walsh.

They conclude that memory is useful because it allows foragers to find food without the effort of searching. “But excessive memory use prevents the forager from updating its knowledge in rapidly changing environments,” they say.
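
For the code-minded, here’s a minimal toy sketch of the trade-off described above – emphatically not Boyer and Walsh’s actual model, and every parameter here is invented for illustration. It pits a pure creature of habit against foragers that inject varying doses of randomness into their memory-driven revisits, in a world where food patches keep dying and respawning:

```python
import random

GRID = 50        # the world is a GRID x GRID lattice of cells
PATCHES = 40     # food patches alive at any moment
LIFETIME = 200   # steps before a patch dies and respawns elsewhere
STEPS = 10000    # length of each simulated run


def forage(p_explore, seed):
    """Food gathered by a forager that picks a random cell with
    probability p_explore, and otherwise revisits a remembered success."""
    rng = random.Random(seed)
    patches = {(rng.randrange(GRID), rng.randrange(GRID)): 0
               for _ in range(PATCHES)}
    memory = []  # locations of past successes, never forgotten
    food = 0

    for _ in range(STEPS):
        # Age the patches; expired ones respawn elsewhere. This churn
        # is what makes stale memories costly.
        for cell in list(patches):
            patches[cell] += 1
            if patches[cell] > LIFETIME:
                del patches[cell]
                patches[(rng.randrange(GRID), rng.randrange(GRID))] = 0

        if memory and rng.random() >= p_explore:
            pos = rng.choice(memory)  # habit: revisit a known site
        else:
            pos = (rng.randrange(GRID), rng.randrange(GRID))  # explore

        if pos in patches:  # found food here
            food += 1
            if pos not in memory:
                memory.append(pos)

    return food


for p in (0.0, 0.1, 0.5, 1.0):
    avg = sum(forage(p, seed) for seed in range(5)) / 5
    print(f"exploration rate {p:.1f}: {avg:.0f} units of food gathered")
```

Even in a toy like this, the pure creature of habit (exploration rate 0.0) locks onto its first find and keeps returning long after that patch has died, while a dash of randomness keeps the forager’s knowledge current – the paper’s point in miniature.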

This reminds me of the central idea behind Peter Watts’ Blindsight – the implication that intelligence itself, which we tend to think of as the inevitable high pinnacle of evolutionary success, is actually a hideously inefficient means to genetic survival, and that as such, we’re something of an evolutionary dead end ourselves. Which reminds me in turn of my mentioning evolutionary “arms races” the other day; perhaps, instead of being in an arms race against our own cultural and technological output as a species, we’re entering a sort of counterbalancing symbiosis with it. Should we start considering technology as a part of ourselves rather than a separate thing? Are we not merely a species of cyborgs, but a cyborg species?

[ * The irony here being that almost all the discussion and promotion of Carr’s work that does him any good occurs… guess where? Hint: not in brick’n’mortar bookstores. ]