Tag Archives: Luddism

Stupid responses to wicked problems, part [x]

Seems lots of people can see the potential long-term problems with the plans of Foxconn (and doubtless many others) to replace human manufacturing labour with robots. Sadly, that doesn’t preclude them from coming up with the most myopic and reactionary response possible:

Despite my love of robots since childhood – as the high point of technology and for the technological challenges they present – we must remain vigilant about how they are helping us. If it turns out they are making our lives worse, I will be first in the luddite line with my sledgehammer.

Yes, Noel, yes! Because it’s the robots that are deciding the course of macroeconomics, isn’t it? Sneaky robots! Thank heavens for your vigilant sledgehammer; I shall sleep easier at night knowing you’re watching for that critical moment when a systemic drift manifests as an observable (if ill-defined) impact on our privileged Western lifestyles, ready and willing to destroy the tools of potential oppression, yet leaving the hands that would wield them unharmed!

Idiot. We cannot detach ourselves from our technologies; we are a cyborg species and always have been. Hairshirt back-to-basics primitivism is as unachievable and naive as Singularitarianism. Robots are tools, just like looms; why destroy a morally neutral tool when you could instead work on the systemic problems which make that tool into a vector of oppression?

Fight the fist, not the gauntlet.

Mo’ memoryhole backlash

That last week saw a whole bunch of Luddite handwringing over a science paper about the internet and its effect on memory is no surprise; what’s surprising (in a gratifying kind of way) is how quickly the counter-responses have appeared… even, in some cases, in the same venues where the original misinterpretations rolled out. This is some sort of progress, surely?

Well, maybe not, and those who don’t want to hear the truth will always find a way to ignore it, but even so. Here’s The Guardian‘s Martin Robbins taking the Daily Fail – and, to a lesser degree, another writer at The Guardian – to task for completely inverting what the report’s author actually said:

Professor Sparrow: “What we do know is that people are becoming more and more intelligent … it’s not the case that people are becoming dumber.”

I don’t get it, Daily Mail Reporter, why would you say that a study claims Google is making us stupid, when the scientist is saying the exact flaming opposite? Did you even watch the video before you embedded it in your article? Or read the study? Oh never mind.

Robbins shouldn’t feel too smug, though, because after pointing out the egregious editorialising of others, he swan-dives straight into the pit of ARGH GOOGLE THOUGH SERIOUSLY THEY’RE EVERYWHERE AND IF WE DON’T REGULATE THEM WE’LL ALL BE SPEAKING IN PERL AND SPENDING GOOGLEDOLLARS NEXT YEAR OMFG YUGUIZE. I guess we all have our confirmation biases, even those of us who know and recognise confirmation bias in others.

(And yes, that totally includes me, too. But my confirmation biases are clearly better than yours. D’uh.)

Elsewhere, Alex Soojung-Kim Pang digs further into the report itself:

… as Sparrow points out, her experiment focuses on transactive memory, not the Proustian, Rick remembering the train as Ilsa’s letter slips from his fingers, feeling of holding your first child for the first time, kind of memory: they were tested on trivia questions and sets of factual statements. I’m reminded of Geoff Nunberg’s point that while arguments about the future of literacy and writing talk as if all we read is Tolstoy and Aristotle, the vast majority of printed works have no obvious literary merit. We haven’t lamented the death of the automobile parts catalog or technical documentation, and we should think a little more deeply about memory before jumping to conclusions from this study.

The real question is not whether offloading memory to other people or to things makes us stupid; humans do that all the time, and it shouldn’t be surprising that we do it with computers. The issues, I think, are 1) whether we do this consciously, as a matter of choice rather than as an accident; and 2) what we seek to gain by doing so.

[…]

This magnetic pull of information toward functionality isn’t just confined to phone numbers. I never tried to remember the exact addresses of most businesses, nor did it seem worthwhile to put them in my address book; but now that I can map the location of a business in my iPhone’s map application, and get directions to it, I’m much more likely to put that information in my address book. The iPhone’s functionality has changed the value of this piece of information: because I can map it, it’s worth having in a way it was not in the past.

He also links to Edward Tenner’s two cents at The Atlantic:

I totally agree with James Gleick’s dissent from some cultural conservatives’ worries about the cheapening of knowledge and loss of serendipity from digitization of public domain works. To the contrary, I have found electronic projects have given me many new ideas. The cloud has enhanced, not reduced my respect for the printed originals […]

Technology is indeed our friend, but it can become a dangerous flatterer, creating an illusion of control. Professors and librarians have been disappointed by the actual search skills even of elite college students, as I discussed here. We need quite a bit in our wetware memory to help us decide what information is best to retrieve. I’ve called this the search conundrum.

The issue isn’t whether most information belongs online rather than in the head. We were storing externally even before Gutenberg. It’s whether we’re offloading the memory that we need to process the other memory we need.

And here’s some more analysis and forward linking from Mind Hacks, complete with a new term for my lexicon, transactive memory:

If you want a good write-up of the study you couldn’t do better than checking out the post on Not Exactly Rocket Science which captures the dry undies fact that although the online availability of the information reduced memory for content, it improved memory for its location.

Conversely, when participants knew that the information was not available online, memory for content improved. In other words, the brain is adjusting memory to make information retrieval more efficient depending on the context.

Memory management in general is known as metamemory and the storage of pointers to other information sources (usually people) rather than the content itself, is known as transactive memory.

Think of working in a team where the knowledge is shared across members. Effectively, transactive memory is a form of social memory where each individual is adjusting how much they need to personally remember based on knowledge of other people’s expertise.

This new study, by a group of researchers led by the wonderfully named Betsy Sparrow, found that we treat online information in a similar way.

What this does not show is that information technology is somehow ‘damaging’ our memory, as the participants remembered the location of the information much better when they thought it would be digitally available.

I expect we’re all but done with this story now, but I’d be willing to bet it’s no more than six months before we see a similar one. Stay tuned!

[ In case you’re wondering why I’m only linking to the debunk material, by the way, it’s because the sensationalist misreportings are so ubiquitous that you’ve probably seen them already. Feel free to Google them up though… so long as you don’t mind Google FEEDING YOU LIES! Muah-hah-hah HERP DERP O NOEZ ]

Internet memory holes and filter bubbles O NOEZ!!1

Ah, here we go again – another study that totes proves that the intermawubz be makin’ us dumb. Perfect timing for career curmudgeon Nick Carr, whose new book The Shallows – which is lurking in my To Be Read pile as we speak – continues his earnest handwringing riff over our inevitable tech-driven descent into Morlockhood:

Human beings, of course, have always had external, or “transactive,” information stores to supplement their biological memory. These stores can reside in the brains of other people we know (if your friend John is an expert on sports, then you know you can use John’s knowledge of sports facts to supplement your own memory) or in storage or media technologies such as maps and books and microfilm. But we’ve never had an “external memory” so capacious, so available and so easily searched as the web. If, as this study suggests, the way we form (or fail to form) memories is deeply influenced by the mere existence of external information stores, then we may be entering an era in history in which we will store fewer and fewer memories inside our own brains.

Do we actually store fewer and fewer memories, though? Or do we perhaps store the same amount as ever, while having an ever-growing external resource to draw upon, making the amount we can carry in the brainmeat look small by comparison to the total sphere of human knowledge, which is still growing at an arguably exponential rate? Or, to use web-native vernacular: citation needed. (If you can’t remember where you saw your supporting evidence, Nick, feel free to Google it; I won’t hold it against you.)

If a fact stored externally were the same as a memory of that fact stored in our mind, then the loss of internal memory wouldn’t much matter. But external storage and biological memory are not the same thing. When we form, or “consolidate,” a personal memory, we also form associations between that memory and other memories that are unique to ourselves and also indispensable to the development of deep, conceptual knowledge. The associations, moreover, continue to change with time, as we learn more and experience more. As Emerson understood, the essence of personal memory is not the discrete facts or experiences we store in our mind but “the cohesion” which ties all those facts and experiences together. What is the self but the unique pattern of that cohesion?

I submit that we form similar consolidations on a collective basis using the internet as a substrate: hyperlinks, aggregation blogs, tranches of bookmarks both personal and public. I further submit that this makes the internet no different to a dead-tree library except in its speed, depth and utility. This puts the internet at the end of a millennia-long chain of inventions that began with cave-paintings and written language, all of which doubtless provoked sad eyes and headshaking from those who didn’t have a chance to grow up around them. It’s not the internet Carr fears, it’s change.

I’m usually very keen on Ars Technica‘s reporting on science papers, but there’s a glaringly bad bit in the second paragraph of their piece on this one:

The potential to find almost any piece of information in seconds is beneficial, but is this ability actually negatively impacting our memory? The authors of a paper that is being released by Science Express describe four experiments testing this. Based on their results, people are recalling information less, and instead can remember where to find the information they have forgotten.

The authors pose one simple example that had me immediately agreeing with their conclusions. Test yourself: how many countries have flags with only one color? Regardless of your answer, was your first thought about actual flags, or was it to consider where you would find that information? Without realizing it (even though I knew the content of the paper), I found myself mentally planning on opening up my Web browser and heading for a search engine.

So a guy who writes articles for publication on the web, and presumably does much of his research using the internet too, is shocked to find his first response to a question he doesn’t immediately know the answer to is “hey, I wonder how I can Google this?” – is that really a surprise? As a former public library employee, my response would probably have been to wonder whereabouts to look in the stacks for the same information; reliance on what we might call “outboard” cultural memory storage is hardly a new thing. And unless you’re in the business of needing to be able to recall trivia without recourse to reference material – like a career pub-quiz participant, perhaps – I have yet to be convinced that this is a drastic new failure condition that threatens the downfall of civilisation.

Indeed, a MetaFilter commenter recalls a Richard Feynman anecdote, from the year he spent studying biology, that illustrates the point very effectively:

The next paper selected for me was by Adrian and Bronk. They demonstrated that nerve impulses were sharp, single-pulse phenomena. They had done experiments with cats in which they had measured voltages on nerves.

I began to read the paper. It kept talking about extensors and flexors, the gastrocnemius muscle, and so on. This and that muscle were named, but I hadn’t the foggiest idea of where they were located in relation to the nerves or to the cat. So I went to the librarian in the biology section and asked her if she could find me a map of the cat.

“A map of the cat, sir?” she asked, horrified. “You mean a zoological chart!” From then on there were rumors about some dumb biology graduate student who was looking for a “map of the cat.”

When it came time for me to give my talk on the subject, I started off by drawing an outline of the cat and began to name the various muscles.

The other students in the class interrupt me: “We know all that!”

“Oh,” I say, “you do? Then no wonder I can catch up with you so fast after you’ve had four years of biology.” They had wasted all their time memorizing stuff like that, when it could be looked up in fifteen minutes.

Reliance on the memorisation of facts in preference to the more useful skills of knowing how and where to find facts and how to synthesise facts into useful knowledge is a common criticism of the education system here in the UK, and in the US as well. Facts are useless in and of themselves; as such, we’d be better off reassessing the way we teach kids than angsting over the results of the current (broken) system. As Carr points out, the connections we make between facts are the true knowledge, but he discounts those connections as soon as they are made or stored in the cultural sphere rather than the individual mind. That’s a very hierarchical philosophy of knowledge… which might explain Carr’s instinctive flinching from the ad hoc and rhizomatic structure of knowledge as stored on the internet. Don’t panic, Nick; the libraries aren’t going to get rid of the reassuringly pyramidal cataloguing systems any time soon. (Though I wish more of them would allow folksonomy tagging on their catalogue interfaces; best of both approaches, you dig?)

Another of the more persistent Rejectionista riffs is on the rise again, courtesy of Eli Pariser’s new book, The Filter Bubble. You know the one: confirmation bias! The internet makes it way too easy to ignore dissenting viewpoints! OMG terrible and worsening partisan schism in mass culture! (I have to admit that I suspect this riff is a symptom of continued American soulsearching about the increasing polarisation of the political sphere; it’s a genuine and increasingly worrying problem, but it ain’t the fault of the intermatubes.)

There are numerous lionisings of and rebuttals to Pariser, if you care to Google them – amazingly enough, and very contrary to Pariser’s own thesis, both types of response appear in the same search for his name… even when searching using my Google account with its heavily customised results! But I’ll leave you with some chunks from Jesse Walker’s riposte at Reason, which I found via Roderick T Long:

Pariser’s picture is wrong, but a lot of his details are accurate. Facebook’s algorithms do determine which of your friends’ status updates show up in your news feed, and the site goes out of its way to make it difficult to alter or remove those filters. Google does track the things we search for and click on, and it does use that data to shape our subsequent search results. (Some of Pariser’s critics have pointed out that you can turn off Google’s filters fairly easily. This is true, and Pariser should have mentioned it, but in itself it doesn’t invalidate his point. Since his argument is that blinders are being imposed without most people’s knowledge, it doesn’t help much to say that you can avoid them if you know they’re there.)

It is certainly appropriate to look into how these new intermediaries influence our Internet experiences, and there are perfectly legitimate criticisms to be made of their workings. One reason I spend far less time on Facebook than I used to is because I’m tired of the site’s hamfisted efforts to guess what will interest me and to edit my news feed accordingly. Of course, that isn’t a case of personalization gone too far; it’s a case of a company that won’t let me personalize as I please.

[…]

Pariser contrasts the age of personalization with the days of the mass audience, when editors could ensure that the stories we needed to know were mixed in with the stories we really wanted to read. Set aside the issue (which Pariser acknowledges) of how good the editors’ judgment actually was; we’ll stipulate that newspapers and newscasters ran reports on worthy but unsexy subjects. Pariser doesn’t do the obvious next step, which is to look into how much people paid attention to those extra stories in the old days and how much they informally personalized their news intake by skipping subjects that didn’t interest them. Nor does he demonstrate what portion of the average Web surfer’s media diet such subjects constitute now. Nor does he look at how many significant stories that didn’t get play in the old days now have a foothold online. If you assume that a centralized authority (i.e., an editor) will do a better job of selecting the day’s most important stories than the messy, bottom-up process that is a social media feed, then you might conclude that those reports will receive less attention now than before. But barring concrete data, that’s all you have to go by: an assumption.

And in that paragraph I think we see the reason that Rejectionistas like Carr and Pariser get so many column-inches in mainstream media outlets in which to handwring: because the editors who give them the space still feel that filtering is something that they should be doing on behalf of their readers, who are surely too stupid to choose the right things.

Given current newsworthy events, I think that’s an attitude which – no matter how well-meaning – needs to be challenged more, not less; if the choice is between applying my own filters and allowing someone whose motivations are at best opaque and at worst Machiavellian and manipulative to do the filtering for me, well… you’ll be able to find me in my filter bubble.

Don’t worry, I’ll see you when you arrive; its walls are largely transparent. Believe it or not, some of us actually prefer it that way. 😉

Nominet issues web-Luddite smackdown report

Against the continual traffic-noise drone of hand-wringing hacks and marginal psychiatric hucksters banging on about how the intertubes are destroying [ literacy / civilisation / politeness / sanity / discourse / TheChildrenOhGodWon’tSomeoneThinkOfTheChildren ], here’s a counter in the form of a report from the Nominet Trust here in the UK [via New Scientist]. The main findings:

  • There is no neurological evidence that the internet is more effective at “rewiring” our brains than other environmental influences.
  • The internet is a “valuable learning resource and all forms of learning cause changes within the brain”.
  • Social networking sites, in themselves, are not a special source of risk to children, and are generally beneficial as they support existing friendships.
  • Playing action video games can improve some visual processing and motor response skills.
  • Computer-based activity provides mental stimulation and this can help slow rates of cognitive decline.

As the NS piece points out, Nominet aren’t exactly unbiased on this issue, being an organisation that advocates for, and works toward, increasing the availability of internet access to the less advantaged. But the onus is very much on the Rejectionistas to provide proof of these terrible debilitating effects that technology is supposed to be having on us… and it’s telling that they’ve largely failed to find any thus far.

Of course technology changes us, changes the ways we think and work and play – it always has done, in fact, and that’s what’s shaped us as a species. What I take issue with is the notion that change is de facto a bad thing, and to be feared as such.