Technology as brain peripherals

Paul Raven @ 15-12-2010

Via George Dvorsky, a philosophical push-back against that persistent “teh-intarwebz-be-makin-uz-stoopid” riff, as espoused by professional curmudgeon Nick Carr (among others)… and I’m awarding extra points to Professor Andy Clark at the New York Times not just for arguing that technological extension or enhancement of the mind is no different to repair or support of it, but for mentioning the lyrics to an old Pixies tune. Yes, I really am that easily swayed*.

There is no more reason, from the perspective of evolution or learning, to favor the use of a brain-only cognitive strategy than there is to favor the use of canny (but messy, complex, hard-to-understand) combinations of brain, body and world. Brains play a major role, of course. They are the locus of great plasticity and processing power, and will be the key to almost any form of cognitive success. But spare a thought for the many resources whose task-related bursts of activity take place elsewhere, not just in the physical motions of our hands and arms while reasoning, or in the muscles of the dancer or the sports star, but even outside the biological body — in the iPhones, BlackBerrys, laptops and organizers which transform and extend the reach of bare biological processing in so many ways. These blobs of less-celebrated activity may sometimes be best seen, myself and others have argued, as bio-external elements in an extended cognitive process: one that now criss-crosses the conventional boundaries of skin and skull.

One way to see this is to ask yourself how you would categorize the same work were it found to occur “in the head” as part of the neural processing of, say, an alien species. If you’d then have no hesitation in counting the activity as genuine (though non-conscious) cognitive activity, then perhaps it is only some kind of bio-envelope prejudice that stops you counting the same work, when reliably performed outside the head, as a genuine element in your own mental processing?

[…]

Many people I speak to are perfectly happy with the idea that an implanted piece of non-biological equipment, interfaced to the brain by some kind of directly wired connection, would count (assuming all went well) as providing material support for some of their own cognitive processing. Just as we embrace cochlear implants as genuine but non-biological elements in a sensory circuit, so we might embrace “silicon neurons” performing complex operations as elements in some future form of cognitive repair. But when the emphasis shifts from repair to extension, and from implants with wired interfacing to “explants” with wire-free communication, intuitions sometimes shift. That shift, I want to argue, is unjustified. If we can repair a cognitive function by the use of non-biological circuitry, then we can extend and alter cognitive functions that way too. And if a wired interface is acceptable, then, at least in principle, a wire-free interface (such as links your brain to your notepad, BlackBerry or iPhone) must be acceptable too. What counts is the flow and alteration of information, not the medium through which it moves.

Lots of useful ideas in there for anyone working on a new cyborg manifesto, I reckon… and some interesting implications for the standard suite of human rights, once you start counting outboard hardware as part of the mind. (E.g. depriving someone of their handheld device becomes similar to blindfolding or other forms of sensory deprivation.)

[ * Not really. Well, actually, I dunno; you can try and convince me. Y’know, if you like. Whatever. Ooooh, LOLcats! ]


We can forget it for you wholesale

Paul Raven @ 23-09-2010

Via Technovelgy comes news of progress in erasing memories using drugs, via a mechanism independent of the way said memories actually form. Admittedly, the state of the art so far appears to be making fruit flies forget that certain smells coincided with a shock administered to one of their legs, but hey, you gotta start somewhere…

Fans of memory-erasure should check out Marissa Lingen’s subtle yet highly affecting Futurismic story “Erasing The Map” from back in February 2009. Her fictional erasure is surgical, not chemical, but the moral questions care nothing for the methods used…


Friday philosophy: mind/body dualism

Paul Raven @ 27-08-2010

Thinking caps on, folks. Using a science fictional premise as a framing device, philosopher Daniel Dennett ponders the question “if your mind and body were separated, which one would be ‘you’?” [via MetaFilter]

“Yorick,” I said aloud to my brain, “you are my brain. The rest of my body, seated in this chair, I dub ‘Hamlet.’” So here we all are: Yorick’s my brain, Hamlet’s my body, and I am Dennett. Now, where am I? And when I think “where am I?”, where’s that thought tokened? Is it tokened in my brain, lounging about in the vat, or right here between my ears where it seems to be tokened? Or nowhere? Its temporal coordinates give me no trouble; must it not have spatial coordinates as well? I began making a list of the alternatives.

It’s a seventies-vintage essay, so the frame plot is a bit hokey, but the philosophical conundrum still packs a mean punch. Don’t read it if you’ve got anything complicated you’re meant to think about for the rest of the day. 🙂


Personality back-ups: immortality through avatars?

Paul Raven @ 11-06-2010

The possibility of digitising the human mind is one of those questions that will only be closed by its successful achievement, I think; there’ll always be an argument for its possibility, because the only way to disprove it would be to quantify how personality and mind actually work, and if we could quantify it, we could probably work out a way to digitise it, too. (That said, if someone can chop a hole in my logic train there, I’d be genuinely very grateful to them, because it’s a question that’s bugged me for years, and I haven’t been able to get beyond that point with my bootstrap philosophy chops.)

Philosophical digressions aside, low-grade not-quite-proof-of-concept stuff seems to be the current state of the industry. Via NextNature, New Scientist discusses a few companies trying to capture human personality in computer software:

Lifenaut’s avatar might appear to respond like a human, but how do you get it to resemble you? The only way is to teach it about yourself. This personality upload is a laborious process. The first stage involves rating some 480 statements such as “I like to please others” and “I sympathise with the homeless”, according to how accurately they reflect my feelings. Having done this, I am then asked to upload items such as diary entries, and photos and video tagged with place names, dates and keywords to help my avatar build up “memories”. I also spend hours in conversation with other Lifenaut avatars, which my avatar learns from. This supposedly provides “Linda” with my mannerisms – the way I greet people or respond to questions, say – as well as more about my views, likes and dislikes.

A more sophisticated series of personality questionnaires is being used by a related project called CyBeRev. The project’s users work their way through thousands of questions developed by the American sociologist William Sims Bainbridge as a means of archiving the mind. Unlike traditional personality questionnaires, part of the process involves trying to capture users’ values, beliefs, hopes and goals by asking them to imagine the world a century in the future. It isn’t a quick process: “If you spent an hour a day answering questions, it would take five years to complete them all,” says Lori Rhodes of the nonprofit Terasem Movement, which funds CyBeRev. “But the further you go, the more accurate a representation of yourself the mind file will become.”
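Lifenaut’s internals aren’t public, but the upload process described above — rated statements plus tagged “memories” and conversation logs — can be sketched as a toy data structure. Everything here, including the class name, fields and methods, is hypothetical illustration rather than any real Lifenaut or CyBeRev API:

```python
# Hypothetical sketch of a "mind file": the kind of record a
# personality-capture service might accumulate per user.
from dataclasses import dataclass, field

@dataclass
class MindFile:
    ratings: dict = field(default_factory=dict)      # statement -> agreement score (1-5)
    memories: list = field(default_factory=list)     # tagged diary/photo/video items
    transcripts: list = field(default_factory=list)  # conversations the avatar learns from

    def rate(self, statement: str, score: int) -> None:
        # e.g. one of the ~480 statements a user rates during setup
        self.ratings[statement] = score

    def add_memory(self, text: str, place: str, date: str, keywords: list) -> None:
        # diary entries and media are tagged with place, date and keywords
        self.memories.append({"text": text, "place": place,
                              "date": date, "keywords": keywords})

profile = MindFile()
profile.rate("I like to please others", 4)
profile.add_memory("Visited the coast", "Brighton", "2010-05-01", ["sea", "holiday"])
print(len(profile.ratings), len(profile.memories))
```

Even this toy version makes the article’s scale problem visible: the interesting part isn’t storing the answers, it’s that the answers number in the thousands and the “memories” in a lifetime’s worth.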

It’s an interesting article, so go take a look. This little bit got me thinking:

So is it possible to endow my digital double with a believable representation of my own personality? Carpenter admits that in order to become truly like you, a Lifenaut avatar would probably need a lifetime’s worth of conversations with you.

Is that a tacit admission that who we are, at a fundamental level, is a function of everything we’ve ever done and experienced? That to record a lifetime’s worth of experiences and influences would necessarily take a lifetime? Emotionally, I find myself responding to that idea as being self-evident… and it’s the intuitive nature of my response that tells me I should continue to question it.


No fate but what we make… or maybe not. Is free will an illusion?

Paul Raven @ 08-03-2010

Biology professor Anthony Cashmore at the University of Pennsylvania reckons that free will is illusory, and that believing in it is something akin to religious faith:

One of the basic premises of biology and biochemistry is that biological systems are nothing more than a bag of chemicals that obey chemical and physical laws. Generally, we have no problem with the “bag of chemicals” notion when it comes to bacteria, plants, and similar entities. So why is it so difficult to say the same about humans or other “higher level” species, when we’re all governed by the same laws?

As Cashmore explains, the human brain acts at both the conscious level as well as the unconscious. It’s our consciousness that makes us aware of our actions, giving us the sense that we control them, as well. But even without this awareness, our brains can still induce our bodies to act, and studies have indicated that consciousness is something that follows unconscious neural activity. Just because we are often aware of multiple paths to take, that doesn’t mean we actually get to choose one of them based on our own free will. As the ancient Greeks asked, by what mechanism would we be choosing? The physical world is made of causes and effects – “nothing comes from nothing” – but free will, by its very definition, has no physical cause.

All of a sudden, I’m reminded of Nick Bostrom’s simulation argument: perhaps the reason we can’t see a mechanism for free will is that we’re not actually real? Which is a heavy thought for a Monday morning… compare and contrast with Luc Reid’s summary of the neuroscience status quo, and it’s plain to see there’s a whole lot we just plain don’t understand. [image by Julia Manzerova]

Personally, I’m currently leaning somewhat toward the idea that our consciousnesses are enabled by quantum effects caused by entanglement with near-identical minds in universes closely similar to our own… but that probably has more to do with the fact that I finally finished reading Neal Stephenson’s Anathem last week than anything else.
