Mo’ memoryhole backlash

Paul Raven @ 18-07-2011

That last week saw a whole bunch of Luddite handwringing over a science paper about the internet and its effect on memory is no surprise; what’s surprising (in a gratifying kind of way) is how quickly the counter-responses have appeared… even, in some cases, in the same venues where the original misinterpretations rolled out. This is some sort of progress, surely?

Well, maybe not, and those who don’t want to hear the truth will always find a way to ignore it, but even so. Here’s The Guardian’s Martin Robbins taking the Daily Fail – and, to a lesser degree, another writer at The Guardian – to task for completely inverting what the report’s author actually said:

Professor Sparrow: “What we do know is that people are becoming more and more intelligent … it’s not the case that people are becoming dumber.”

I don’t get it, Daily Mail Reporter, why would you say that a study claims Google is making us stupid, when the scientist is saying the exact flaming opposite? Did you even watch the video before you embedded it in your article? Or read the study? Oh never mind.

Robbins shouldn’t feel too smug, though, because after pointing out the egregious editorialising of others, he swan-dives straight into the pit of ARGH GOOGLE THOUGH SERIOUSLY THEY’RE EVERYWHERE AND IF WE DON’T REGULATE THEM WE’LL ALL BE SPEAKING IN PERL AND SPENDING GOOGLEDOLLARS NEXT YEAR OMFG YUGUIZE. I guess we all have our confirmation biases, even those of us who know and recognise confirmation bias in others.

(And yes, that totally includes me, too. But my confirmation biases are clearly better than yours. D’uh.)

Elsewhere, Alex Soojung-Kim Pang digs further into the report itself:

… as Sparrow points out, her experiment focuses on transactive memory, not the Proustian, Rick remembering the train as Elsa’s letter slips from his fingers, feeling of holding your first child for the first time, kind of memory: they were tested on trivia questions and sets of factual statements. I’m reminded of Geoff Nunberg’s point that while arguments about the future of literacy and writing talk as if all we read is Tolstoy and Aristotle, the vast majority of printed works have no obvious literary merit. We haven’t lamented the death of the automobile parts catalog or technical documentation, and we should think a little more deeply about memory before jumping to conclusions from this study.

The real question is not whether offloading memory to other people or to things makes us stupid; humans do that all the time, and it shouldn’t be surprising that we do it with computers. The issues, I think, are 1) whether we do this consciously, as a matter of choice rather than as an accident; and 2) what we seek to gain by doing so.

[…]

This magnetic pull of information toward functionality isn’t just confined to phone numbers. I never tried to remember the exact addresses of most businesses, nor did it seem worthwhile to put them in my address book; but now that I can map the location of a business in my iPhone’s map application, and get directions to it, I’m much more likely to put that information in my address book. The iPhone’s functionality has changed the value of this piece of information: because I can map it, it’s worth having in a way it was not in the past.

He also links to Edward Tenner’s two cents at The Atlantic:

I totally agree with James Gleick’s dissent from some cultural conservatives’ worries about the cheapening of knowledge and loss of serendipity from digitization of public domain works. To the contrary, I have found electronic projects have given me many new ideas. The cloud has enhanced, not reduced my respect for the printed originals […]

Technology is indeed our friend, but it can become a dangerous flatterer, creating an illusion of control. Professors and librarians have been disappointed by the actual search skills even of elite college students, as I discussed here. We need quite a bit in our wetware memory to help us decide what information is best to retrieve. I’ve called this the search conundrum.

The issue isn’t whether most information belongs online rather than in the head. We were storing externally even before Gutenberg. It’s whether we’re offloading the memory that we need to process the other memory we need.

And here’s some more analysis and forward linking from Mind Hacks, complete with a new term for my lexicon, transactive memory:

If you want a good write-up of the study you couldn’t do better than checking out the post on Not Exactly Rocket Science which captures the dry undies fact that although the online availability of the information reduced memory for content, it improved memory for its location.

Conversely, when participants knew that the information was not available online, memory for content improved. In other words, the brain is adjusting memory to make information retrieval more efficient depending on the context.

Memory management in general is known as metamemory and the storage of pointers to other information sources (usually people) rather than the content itself, is known as transactive memory.

Think of working in a team where the knowledge is shared across members. Effectively, transactive memory is a form of social memory where each individual is adjusting how much they need to personally remember based on knowledge of other people’s expertise.

This new study, by a group of researchers led by the wonderfully named Betsy Sparrow, found that we treat online information in a similar way.

What this does not show is that information technology is somehow ‘damaging’ our memory, as the participants remembered the location of the information much better when they thought it would be digitally available.
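If you find “transactive memory” easier to grasp in code than in prose, here’s a toy sketch of my own (nothing from the paper itself): the idea that, when a reliable source exists, we store a pointer to *where* the information lives rather than the information itself.

```python
# Toy model of transactive memory: remember pointers to sources,
# not content, when the source is expected to stay available.

class TransactiveMemory:
    def __init__(self):
        self.content = {}   # facts held in our own head
        self.pointers = {}  # facts we know someone/something else holds

    def learn(self, fact, source=None):
        """If a reliable external source exists, keep the pointer, not the fact."""
        if source is not None:
            self.pointers[fact] = source
        else:
            self.content[fact] = True

    def recall(self, fact):
        if fact in self.content:
            return ("remembered", fact)
        if fact in self.pointers:
            return ("look it up", self.pointers[fact])
        return ("forgotten", None)

mem = TransactiveMemory()
mem.learn("ostrich eye size", source="saved in FACTS folder")  # 'will be online'
mem.learn("capital of France")                                  # 'won't be available'

print(mem.recall("ostrich eye size"))   # -> ('look it up', 'saved in FACTS folder')
print(mem.recall("capital of France"))  # -> ('remembered', 'capital of France')
```

Which is pretty much what Sparrow’s participants did: when they believed a fact would be digitally available, they remembered where it lived rather than what it said.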

I expect we’re all but done with this story now, but I’d be willing to bet it’s no more than six months before we see a similar one. Stay tuned!

[ In case you’re wondering why I’m only linking to the debunk material, by the way, it’s because the sensationalist misreportings are so ubiquitous that you’ve probably seen them already. Feel free to Google them up though… so long as you don’t mind Google FEEDING YOU LIES! Muah-hah-hah HERP DERP O NOEZ ]


Drowning in data

Paul Raven @ 01-03-2010

Maybe we’ll have flooded our culture-lungs with angry YouTube comments and pharmaceutical spamblogs before the rising sea-levels get a chance to touch our toes… [via MetaFilter]

According to one estimate, mankind created 150 exabytes (billion gigabytes) of data in 2005. This year, it will create 1,200 exabytes. Merely keeping up with this flood, and storing the bits that might be useful, is difficult enough. Analysing it, to spot patterns and extract useful information, is harder still.

Actually, I don’t see this deluge of data as a bad thing, but I’m very interested in how we’re going to store, manage and curate it.
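For a sense of scale, here’s the back-of-envelope arithmetic on the figures quoted above – 150 exabytes in 2005 swelling to 1,200 in 2010:

```python
# Implied growth rate from the Economist's estimates quoted above.

data_2005 = 150    # exabytes
data_2010 = 1200   # exabytes
years = 5

growth_factor = data_2010 / data_2005       # 8x in five years
cagr = growth_factor ** (1 / years) - 1     # compound annual growth rate

print(f"{growth_factor:.0f}x overall, ~{cagr:.0%} per year")
# -> 8x overall, ~52% per year
```

An eightfold jump in five years works out to roughly 52% compound growth per year – which is why “merely keeping up” is the hard part.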


Here today, gone tomorrow: why the next decade’s web won’t feel familiar

Paul Raven @ 07-10-2009

People seem to be waking up to the impermanence of the web of late. TechDirt points us to a mainstream journalism article at the Globe & Mail, which springboards from the imminent nuking of GeoCities to worrying what will happen to all of your pictures uploaded to Facebook when it eventually (and inevitably) goes the same way. [image by jonas_therkildson]

Lately, there’s been so much discussion about the permanence of information – especially the embarrassing kind – that we have overlooked the fact that it can also disappear. At a time when we’re throwing all kinds of data and memories onto free websites, it’s a blunt reminder that the future can bring unwelcome surprises.

Ten years ago, you could have called GeoCities the garish, beating heart of the Web. It was one of the first sites that threw its doors open to users and invited them to populate its pages according to their own creativity. At a time when the Web was still daunting, it encouraged laypeople to set up their own homepages free of charge.

Kinda like the forerunner of MySpace, then, albeit (somewhat ironically) easier on the eyes and ears… and MySpace’s days are certainly (and mercifully) numbered, if the traffic figures are to be believed. But I digress…

And now, it’s curtains. GeoCities won’t disappear entirely. The Internet Archive – a non-profit foundation based in San Francisco dedicated to backing up the Web for posterity’s sake – is trying to salvage as much as it can before the deadline hits. At least one other independent group is trying to do the same. But this complicates things, because it puts GeoCities users’ data into the hands of an unaccountable third party.

Money-losing websites aren’t exactly novelties. Smaller sites flicker in and out of existence like those bugs that only have 18 hours to mate before they die. But it’s disconcerting to see a big site – one that, long ago, was one of the most popular on the Web – not just fade into obscurity, but come to its end game.

It brings to light some truths about data that are easily overlooked. Websites are like buildings: you can’t just abandon them indefinitely and expect them to keep working. For one thing, that electronic storage isn’t free. Storing files requires media that degrade and computers that fail and power that needs paying for.

The obvious answer here is to make sure you have local backups of anything stored “in the cloud” that you couldn’t bear to lose… but it’s only obvious to those with some degree of computer savvy, and (based on personal experience) everyone else is insufficiently bothered to worry about it ahead of time, no matter how patiently you try to explain the situation. If nothing else, there’ll always be good money for people who can write custom API scraping tools for defunct social networks… that business model will be the new equivalent of the photography studios that now make their income by scanning and retouching old snapshots from the pre-digital era.
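That “custom API scraping tool” is less exotic than it sounds – the core of it is just walking a paginated API and dumping everything to local files before the service goes dark. A minimal sketch, with a stub standing in for the (entirely hypothetical) dying site’s API so it runs self-contained:

```python
# Sketch of a backup tool for a paginated API. The endpoint shape here is
# hypothetical, not any real service's API; in practice fetch_page would
# wrap urllib/requests calls with auth, rate-limiting and retries.

import json

def backup_all(fetch_page, out_path):
    """fetch_page(page) -> list of records, or [] when the pages run out."""
    records, page = [], 1
    while True:
        batch = fetch_page(page)
        if not batch:
            break
        records.extend(batch)
        page += 1
    with open(out_path, "w") as f:
        json.dump(records, f, indent=2)
    return len(records)

# Stand-in for a real network call, so the sketch runs offline.
def fake_api(page):
    pages = {1: [{"id": 1, "photo": "cat.jpg"}], 2: [{"id": 2, "photo": "dog.jpg"}]}
    return pages.get(page, [])

n = backup_all(fake_api, "backup.json")
print(n)  # -> 2 records saved
```

The fiddly part in real life is never this loop; it’s authentication, rate limits, and the API being half-broken because nobody’s maintaining it any more.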

But other changes in the way we use the web are very much afoot, as pointed out by Clive Thompson at Wired. For the last decade, classic search has been the dominant internet tool, propelling Google to the top of the pyramid. But this is the age of Twitter, the temporal gateway into the “real-time web”; maybe the old surfing metaphor will finally make more sense when we’re all riding the Zeitgeist of trending topics:

For more than 10 years, Google has organized the Web by figuring out who has authority. The company measures which sites have the most links pointing to them—crucial votes of confidence—and checks to see whether a site grew to prominence slowly and organically, which tends to be a marker of quality. If a site amasses a zillion links overnight, it’s almost certainly spam.

But the real-time Web behaves in the opposite fashion. It’s all about “trending topics”—zOMG a plane crash!—which by their very nature generate a massive number of links and postings within minutes. And a search engine can’t spend days deciding what is the most crucial site or posting; people want to know immediately.

[…]

“It’s exactly what your friends are going to be talking about when you get to the bar tonight,” OneRiot executive Tobias Peggs says. “That’s what we’re finding.” Google settles arguments; real-time search starts them.
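The contrast Thompson describes can be boiled down to two opposed scoring functions. This is my own toy caricature – nothing like Google’s or OneRiot’s actual ranking – but it shows why the two approaches point in opposite directions:

```python
# Toy caricature of authority ranking vs real-time ranking.
# Not Google's or OneRiot's actual algorithms.

import math

def authority_score(total_links, age_days):
    """Links count, but a zillion links overnight looks like spam."""
    organic = min(1.0, age_days / 365)       # slow, organic growth = trust
    return math.log1p(total_links) * organic

def realtime_score(mentions_last_hour):
    """All that matters is what's being talked about right now."""
    return math.log1p(mentions_last_hour)

old_site = authority_score(total_links=50_000, age_days=3_000)
spam_site = authority_score(total_links=50_000, age_days=1)
print(old_site > spam_site)  # same links, wildly different trust
```

Same inputs, opposite verdicts: the very burst of links that classic search treats as a spam signal is exactly the signal a real-time engine is hunting for.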

Well, at least we’re not going to be short of things to argue about. If that ever happened, the web would probably close down due to lack of interest… 😉


Ultracapacitors: the game-changer for renewable energy sources?

Paul Raven @ 21-09-2009

Sf author Karl Schroeder points us to a development that may redraw the map for renewable energy use. EEStor, long suspected by some to be the sort of vapourware company that spends a few years making big promises before dissolving in a puff of evaporating venture capital, are believed to have applied for certification of their ultracapacitor technology.

There’s a Wikipedia page on ultracapacitors, which have existed for some time in smaller form, but here’s Schroeder’s summary:

… the ultimate in electricity-storage technology:  a device capable of running your car for hundreds of miles on one charge, and of recharging in under five minutes.  A device that is not a battery, and hence never wears out.  A technology that would make intermittent power generation sources such as windmills directly competitive with baseload generation sources such as coal.

Sounds great, doesn’t it? As pointed out by Schroeder, there’s a great deal of justifiable skepticism around the technology in general and the EEStor news in particular – snake-oil is still a thriving business in the information age, after all. But signs suggest we’ll find out the truth behind the speculation pretty soon… and if we dare to hope that this is the real thing, perhaps we’re about to see what Schroeder calls “a truly disruptive change […] nothing less than the first nail in the coffin of the fossil fuel age.”
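A quick bit of arithmetic shows why the skepticism is justified even if the device works. The pack size below is my own assumption – EEStor published no firm capacity figure – pegged at roughly a couple hundred miles of driving:

```python
# Rough implied charging power for "full recharge in under five minutes".
# The 50 kWh capacity is an assumed figure, not an EEStor specification.

pack_kwh = 50          # assumed battery-equivalent capacity
charge_minutes = 5

power_kw = pack_kwh / (charge_minutes / 60)
print(f"{power_kw:.0f} kW")  # -> 600 kW
```

600 kilowatts is a few hundred households’ worth of draw through one plug – so even a working ultracapacitor would need charging infrastructure that doesn’t yet exist.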


Moore’s Law gets a new lease of life

Paul Raven @ 22-02-2009

Good news for Kurzweilian Singularitarians and flop-junkies – Moore’s Law has been looking increasingly likely to derail as we approach the lowest practical limit for semiconductor miniaturization, but newly announced research means there’s life in the old dog yet:

Two US groups have announced transistors almost 1000 times smaller than those in use today, and a [nano-scale magnet-based] version of flash memory that could store all the books in the US Library of Congress in a square 4 inches (10 cm) across.

[…]

Using 3-nanometre magnets, an array could store 10 terabits (roughly 270 standard DVDs) per square inch, says Russell, who is now working to perfect magnets small enough to cram 100 terabits into a square inch.

“Currently, industry is working at half a terabit [per square inch],” he says. “They wanted to be at 10 terabits in a few years’ time – we have leapfrogged that target.”
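The “270 standard DVDs” figure quoted above is easy to sanity-check (taking a single-layer DVD at 4.7 GB):

```python
# Does 10 terabits per square inch really come to "roughly 270 DVDs"?

terabits = 10
bytes_total = terabits * 1e12 / 8   # 1.25 terabytes per square inch
dvd_bytes = 4.7e9                   # single-layer DVD capacity

dvds = bytes_total / dvd_bytes
print(round(dvds))  # -> 266
```

266 single-layer DVDs per square inch – so “roughly 270” holds up, and Russell’s 100-terabit target would push that to over 2,600.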

If this were Engadget, we could squee about how we’ll have laptops the size of wristwatches by the end of the decade, but that would be to miss an important point. The ever-falling cost and size of memory and processing power will certainly mean more gadgets, but those gadgets will bring social changes along with them – as Charlie Stross pointed out a while ago, if you can read and write data at the atomic scale then physical storage capacity becomes a complete non-issue, allowing you to record everything – literally everything. [image by Fox O’Rian]

When you can record everything, how do you go about managing and using what you’ve recorded?

