
Mo’ memoryhole backlash

That last week saw a whole bunch of Luddite handwringing over a science paper about the internet and its effect on memory is no surprise; what’s surprising (in a gratifying kind of way) is how quickly the counter-responses have appeared… even, in some cases, in the same venues where the original misinterpretations rolled out. This is some sort of progress, surely?

Well, maybe not, and those who don’t want to hear the truth will always find a way to ignore it, but even so. Here’s The Guardian‘s Martin Robbins taking the Daily Fail – and, to a lesser degree, another writer at The Guardian – to task for completely inverting what the report’s author actually said:

Professor Sparrow: “What we do know is that people are becoming more and more intelligent … it’s not the case that people are becoming dumber.”

I don’t get it, Daily Mail Reporter, why would you say that a study claims Google is making us stupid, when the scientist is saying the exact flaming opposite? Did you even watch the video before you embedded it in your article? Or read the study? Oh never mind.

Robbins shouldn’t feel too smug, though, because after pointing out the egregious editorialising of others, he swan-dives straight into the pit of ARGH GOOGLE THOUGH SERIOUSLY THEY’RE EVERYWHERE AND IF WE DON’T REGULATE THEM WE’LL ALL BE SPEAKING IN PERL AND SPENDING GOOGLEDOLLARS NEXT YEAR OMFG YUGUIZE. I guess we all have our confirmation biases, even those of us who know and recognise confirmation bias in others.

(And yes, that totally includes me, too. But my confirmation biases are clearly better than yours. D’uh.)

Elsewhere, Alex Soojung-Kim Pang digs further into the report itself:

… as Sparrow points out, her experiment focuses on transactive memory, not the Proustian, Rick-remembering-the-train-as-Ilsa’s-letter-slips-from-his-fingers, feeling-of-holding-your-first-child-for-the-first-time kind of memory: they were tested on trivia questions and sets of factual statements. I’m reminded of Geoff Nunberg’s point that while arguments about the future of literacy and writing talk as if all we read is Tolstoy and Aristotle, the vast majority of printed works have no obvious literary merit. We haven’t lamented the death of the automobile parts catalog or technical documentation, and we should think a little more deeply about memory before jumping to conclusions from this study.

The real question is not whether offloading memory to other people or to things makes us stupid; humans do that all the time, and it shouldn’t be surprising that we do it with computers. The issues, I think, are 1) whether we do this consciously, as a matter of choice rather than as an accident; and 2) what we seek to gain by doing so.

[…]

This magnetic pull of information toward functionality isn’t just confined to phone numbers. I never tried to remember the exact addresses of most businesses, nor did it seem worthwhile to put them in my address book; but now that I can map the location of a business in my iPhone’s map application, and get directions to it, I’m much more likely to put that information in my address book. The iPhone’s functionality has changed the value of this piece of information: because I can map it, it’s worth having in a way it was not in the past.

He also links to Edward Tenner’s two cents at The Atlantic:

I totally agree with James Gleick’s dissent from some cultural conservatives’ worries about the cheapening of knowledge and loss of serendipity from digitization of public domain works. To the contrary, I have found electronic projects have given me many new ideas. The cloud has enhanced, not reduced my respect for the printed originals […]

Technology is indeed our friend, but it can become a dangerous flatterer, creating an illusion of control. Professors and librarians have been disappointed by the actual search skills even of elite college students, as I discussed here. We need quite a bit in our wetware memory to help us decide what information is best to retrieve. I’ve called this the search conundrum.

The issue isn’t whether most information belongs online rather than in the head. We were storing externally even before Gutenberg. It’s whether we’re offloading the memory that we need to process the other memory we need.

And here’s some more analysis and forward linking from Mind Hacks, complete with a new term for my lexicon, transactive memory:

If you want a good write-up of the study you couldn’t do better than checking out the post on Not Exactly Rocket Science which captures the dry undies fact that although the online availability of the information reduced memory for content, it improved memory for its location.

Conversely, when participants knew that the information was not available online, memory for content improved. In other words, the brain is adjusting memory to make information retrieval more efficient depending on the context.

Memory management in general is known as metamemory, and the storage of pointers to other information sources (usually people), rather than the content itself, is known as transactive memory.

Think of working in a team where the knowledge is shared across members. Effectively, transactive memory is a form of social memory where each individual is adjusting how much they need to personally remember based on knowledge of other people’s expertise.

This new study, by a group of researchers led by the wonderfully named Betsy Sparrow, found that we treat online information in a similar way.

What this does not show is that information technology is somehow ‘damaging’ our memory, as the participants remembered the location of the information much better when they thought it would be digitally available.
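If that pointers-versus-content distinction still feels a bit abstract, here’s a toy sketch of it in Python. To be clear, this is entirely my own illustration rather than anything from the study, and every name and fact in it is invented for the purpose:

```python
# Toy model of transactive memory: storing *where* a fact lives
# rather than the fact itself. My own illustration, not the study's.

class TransactiveMemory:
    def __init__(self):
        self.content = {}    # facts held in our own heads
        self.pointers = {}   # facts we only know how to look up

    def learn(self, fact, detail=None, location=None):
        """Memorise the detail itself, or just a pointer to its location."""
        if location is not None:
            self.pointers[fact] = location   # "that's on Google / Dave knows it"
        else:
            self.content[fact] = detail      # commit the thing itself to memory

    def recall(self, fact, sources):
        """Answer from internal memory, or follow the pointer out to a source."""
        if fact in self.content:
            return self.content[fact]
        if fact in self.pointers:
            return sources[self.pointers[fact]].get(fact)
        return None


# One external store stands in for Google, a teammate, a filing cabinet...
web = {"capital of Burkina Faso": "Ouagadougou"}
me = TransactiveMemory()
me.learn("capital of Burkina Faso", location="web")        # where, not what
print(me.recall("capital of Burkina Faso", {"web": web}))  # -> Ouagadougou
```

Restated in those terms, the study’s finding is that when we trust the location to stay valid, we stop bothering to fill in the content – which looks like efficient memory management, not decay.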

I expect we’re all but done with this story now, but I’d be willing to bet it’s no more than six months before we see a similar one. Stay tuned!

[ In case you’re wondering why I’m only linking to the debunk material, by the way, it’s because the sensationalist misreportings are so ubiquitous that you’ve probably seen them already. Feel free to Google them up though… so long as you don’t mind Google FEEDING YOU LIES! Muah-hah-hah HERP DERP O NOEZ ]

Technology as brain peripherals

Via George Dvorsky, a philosophical push-back against that persistent “teh-intarwebz-be-makin-uz-stoopid” riff, as espoused by professional curmudgeon Nick Carr (among others)… and I’m awarding extra points to Professor Andy Clark at the New York Times not just for arguing that technological extension or enhancement of the mind is no different to repair or support of it, but for mentioning the lyrics to an old Pixies tune. Yes, I really am that easily swayed*.

There is no more reason, from the perspective of evolution or learning, to favor the use of a brain-only cognitive strategy than there is to favor the use of canny (but messy, complex, hard-to-understand) combinations of brain, body and world. Brains play a major role, of course. They are the locus of great plasticity and processing power, and will be the key to almost any form of cognitive success. But spare a thought for the many resources whose task-related bursts of activity take place elsewhere, not just in the physical motions of our hands and arms while reasoning, or in the muscles of the dancer or the sports star, but even outside the biological body — in the iPhones, BlackBerrys, laptops and organizers which transform and extend the reach of bare biological processing in so many ways. These blobs of less-celebrated activity may sometimes be best seen, myself and others have argued, as bio-external elements in an extended cognitive process: one that now criss-crosses the conventional boundaries of skin and skull.

One way to see this is to ask yourself how you would categorize the same work were it found to occur “in the head” as part of the neural processing of, say, an alien species. If you’d then have no hesitation in counting the activity as genuine (though non-conscious) cognitive activity, then perhaps it is only some kind of bio-envelope prejudice that stops you counting the same work, when reliably performed outside the head, as a genuine element in your own mental processing?

[…]

Many people I speak to are perfectly happy with the idea that an implanted piece of non-biological equipment, interfaced to the brain by some kind of directly wired connection, would count (assuming all went well) as providing material support for some of their own cognitive processing. Just as we embrace cochlear implants as genuine but non-biological elements in a sensory circuit, so we might embrace “silicon neurons” performing complex operations as elements in some future form of cognitive repair. But when the emphasis shifts from repair to extension, and from implants with wired interfacing to “explants” with wire-free communication, intuitions sometimes shift. That shift, I want to argue, is unjustified. If we can repair a cognitive function by the use of non-biological circuitry, then we can extend and alter cognitive functions that way too. And if a wired interface is acceptable, then, at least in principle, a wire-free interface (such as links your brain to your notepad, BlackBerry or iPhone) must be acceptable too. What counts is the flow and alteration of information, not the medium through which it moves.

Lots of useful ideas in there for anyone working on a new cyborg manifesto, I reckon… and some interesting implications for the standard suite of human rights, once you start counting outboard hardware as part of the mind. (E.g. depriving someone of their handheld device becomes similar to blindfolding or other forms of sensory deprivation.)

[ * Not really. Well, actually, I dunno; you can try and convince me. Y’know, if you like. Whatever. Ooooh, LOLcats! ]

Cortical coprocessors: an outboard OS for the brain

The last time I remember encountering the word “coprocessor” was when my father bought himself a 486DX system with all the bells and whistles, some time back in the nineties. Now it’s doing the rounds in this widely-linked Technology Review article about brain-function bolt-ons; it’s a fairly serious examination of the possibilities of augmenting our mind-meat with technology, and well worth a read. Here’s a snippet:

Given the ever-increasing number of brain readout and control technologies available, a generalized brain coprocessor architecture could be enabled by defining common interfaces governing how component technologies talk to one another, as well as an “operating system” that defines how the overall system works as a unified whole – analogous to the way personal computers govern the interaction of their component hard drives, memories, processors, and displays. Such a brain coprocessor platform could facilitate innovation by enabling neuroengineers to focus on neural prosthetics at an algorithmic level, much as a computer programmer can work on a computer at a conceptual level without having to plan the fate of every individual bit. In addition, if new technologies come along, e.g., a new kind of neural recording technology, they could be incorporated into a system, and in principle rapidly coupled to existing computation and perturbation methods, without requiring the heavy readaptation of those other components.
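Just to make that operating-system analogy concrete, here’s roughly the sort of “common interface” the article is gesturing at, sketched in Python – with the caveat that every class and method name here is my own invention, not anything from the article or from actual neuroengineering practice:

```python
# Sketch of a generalised brain-coprocessor architecture: swappable
# readout and perturbation components behind common interfaces, with
# a thin "OS" routing data between them. All names invented by me.

from abc import ABC, abstractmethod

class NeuralRecorder(ABC):
    """Common interface for any readout technology (EEG, implanted array...)."""
    @abstractmethod
    def read(self) -> list[float]:
        """Return the latest sample of neural activity."""

class NeuralStimulator(ABC):
    """Common interface for any perturbation technology (electrical, optogenetic...)."""
    @abstractmethod
    def write(self, command: list[float]) -> None:
        """Deliver a stimulation pattern."""

class Coprocessor:
    """The 'operating system': wires components into a unified whole, so a
    neuroengineer can work at the algorithmic level without caring which
    hardware sits underneath."""

    def __init__(self, recorder, stimulator, algorithm):
        self.recorder = recorder        # any NeuralRecorder
        self.stimulator = stimulator    # any NeuralStimulator
        self.algorithm = algorithm      # signal in, command out

    def step(self):
        signal = self.recorder.read()      # readout
        command = self.algorithm(signal)   # computation
        self.stimulator.write(command)     # perturbation


# A new technology slots in by implementing the interface -- nothing else changes.
class FakeEEG(NeuralRecorder):
    def read(self):
        return [0.1, 0.4, 0.2]

class FakeStimulator(NeuralStimulator):
    def write(self, command):
        print("stimulating with", command)

box = Coprocessor(FakeEEG(), FakeStimulator(), algorithm=lambda s: [x * 2 for x in s])
box.step()  # -> stimulating with [0.2, 0.8, 0.4]
```

The payoff is exactly the one the article names: a shiny new recording technology only has to implement the recorder interface, and the existing computation and perturbation pieces carry on untouched.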

Of course, the idea of a brain OS brings with it the inevitability of competing OSs in the marketplace… including a widely-used commercial product that needs patching once a week so that dodgy urban billboards can’t trojan your cerebellum and turn you into an unwitting evangelist for under-the-counter medicines and fake watches, an increasingly-popular slick-looking solution with a price-tag (and aspirational marketing) to match, and a plethora of forked open-source systems whose proponents can’t understand why their geeky obsession with being able to adjust the tiniest settings effectively excludes the wider audience they’d love to reach. Those “I’m a Mac / I’m a PC” ads will get a whole new lease of remixed and self-referential life…