Tag Archives: education

Internet memory holes and filter bubbles O NOEZ!!1

Ah, here we go again – another study that totes proves that the intermawubz be makin’ us dumb. Perfect timing for career curmudgeon Nick Carr, whose new book The Shallows – which is lurking in my To Be Read pile as we speak – continues his earnest handwringing riff over our inevitable tech-driven descent into Morlockhood.

Human beings, of course, have always had external, or “transactive,” information stores to supplement their biological memory. These stores can reside in the brains of other people we know (if your friend John is an expert on sports, then you know you can use John’s knowledge of sports facts to supplement your own memory) or in storage or media technologies such as maps and books and microfilm. But we’ve never had an “external memory” so capacious, so available and so easily searched as the web. If, as this study suggests, the way we form (or fail to form) memories is deeply influenced by the mere existence of external information stores, then we may be entering an era in history in which we will store fewer and fewer memories inside our own brains.

Do we actually store fewer and fewer memories, though? Or do we perhaps store the same amount as ever, while having an ever-growing external resource to draw upon, making the amount we can carry in the brainmeat look small by comparison to the total sphere of human knowledge, which is still growing at an arguably exponential rate? Or, to use web-native vernacular: citation needed. (If you can’t remember where you saw your supporting evidence, Nick, feel free to Google it; I won’t hold it against you.)

If a fact stored externally were the same as a memory of that fact stored in our mind, then the loss of internal memory wouldn’t much matter. But external storage and biological memory are not the same thing. When we form, or “consolidate,” a personal memory, we also form associations between that memory and other memories that are unique to ourselves and also indispensable to the development of deep, conceptual knowledge. The associations, moreover, continue to change with time, as we learn more and experience more. As Emerson understood, the essence of personal memory is not the discrete facts or experiences we store in our mind but “the cohesion” which ties all those facts and experiences together. What is the self but the unique pattern of that cohesion?

I submit that we form similar consolidations on a collective basis using the internet as a substrate; hyperlinks, aggregation blogs, tranches of bookmarks both personal and public. I further submit that this makes the internet no different to a dead-tree library except in its speed, depth and utility. This puts the internet at the end of a millennia-long chain of inventions that began with cave-paintings and written language, all of which doubtless provoked sad eyes and headshaking from those who didn’t have a chance to grow up around them. It’s not the internet Carr fears, it’s change.

I’m usually very keen on Ars Technica‘s reporting on science papers, but there’s a glaringly bad bit in the second paragraph of their piece on this one:

The potential to find almost any piece of information in seconds is beneficial, but is this ability actually negatively impacting our memory? The authors of a paper that is being released by Science Express describe four experiments testing this. Based on their results, people are recalling information less, and instead can remember where to find the information they have forgotten.

The authors pose one simple example that had me immediately agreeing with their conclusions. Test yourself: how many countries have flags with only one color? Regardless of your answer, was your first thought about actual flags, or was it to consider where you would find that information? Without realizing it (even though I knew the content of the paper), I found myself mentally planning on opening up my Web browser and heading for a search engine.

So a guy who writes articles for publication on the web, and presumably does much of his research using the internet too, is shocked to find his first response to a question he doesn’t immediately know the answer to is “hey, I wonder how I can Google this?” – is that really a surprise? As a former public library employee, my response would probably have been to wonder whereabouts to look in the stacks for the same information; reliance on what we might call “outboard” cultural memory storage is hardly a new thing. And unless you’re in the business of needing to be able to recall trivia without recourse to reference material – like a career pub-quiz participant, perhaps – I remain unconvinced that this is a drastic new failure condition that threatens the downfall of civilisation.

Indeed, a MetaFilter commenter recalls a Richard Feynman anecdote from the time he spent studying biology, which illustrates the point very effectively:

The next paper selected for me was by Adrian and Bronk. They demonstrated that nerve impulses were sharp, single-pulse phenomena. They had done experiments with cats in which they had measured voltages on nerves.

I began to read the paper. It kept talking about extensors and flexors, the gastrocnemius muscle, and so on. This and that muscle were named, but I hadn’t the foggiest idea of where they were located in relation to the nerves or to the cat. So I went to the librarian in the biology section and asked her if she could find me a map of the cat.

“A map of the cat, sir?” she asked, horrified. “You mean a zoological chart!” From then on there were rumors about some dumb biology graduate student who was looking for a “map of the cat.”

When it came time for me to give my talk on the subject, I started off by drawing an outline of the cat and began to name the various muscles.

The other students in the class interrupt me: “We know all that!”

“Oh,” I say, “you do? Then no wonder I can catch up with you so fast after you’ve had four years of biology.” They had wasted all their time memorizing stuff like that, when it could be looked up in fifteen minutes.

Reliance on the memorisation of facts in preference to the more useful skills of knowing how and where to find facts and how to synthesise facts into useful knowledge is a common criticism of the education system here in the UK, and in the US as well. Facts are useless in and of themselves; as such, we’d be better off reassessing the way we teach kids than angsting over the results of the current (broken) system. As Carr points out, the connections we make between facts are the true knowledge, but he discounts those connections as soon as they are made or stored in the cultural sphere rather than the individual mind. That’s a very hierarchical philosophy of knowledge… which might explain Carr’s instinctive flinching from the ad hoc and rhizomatic structure of knowledge as stored on the internet. Don’t panic, Nick; the libraries aren’t going to get rid of the reassuringly pyramidal cataloguing systems any time soon. (Though I wish more of them would allow folksonomy tagging on their catalogue interfaces; best of both approaches, you dig?)

Another of the more persistent Rejectionista riffs is on the rise again, courtesy of Eli Pariser’s new book, The Filter Bubble. You know the one: confirmation bias! The internet makes it way too easy to ignore dissenting viewpoints! OMG terrible and worsening partisan schism in mass culture! (I have to admit that I suspect this riff is a symptom of continued American soulsearching about the increasing polarity of the political sphere; it’s a genuine and increasingly worrying problem, but it ain’t the fault of the intermatubes.)

There are numerous lionisings of and rebuttals to Pariser, if you care to Google them – amazingly enough, and very contrary to Pariser’s own thesis, both types of response appear in the same search for his name… even when searching using my Google account with its heavily customised results! But I’ll leave you with some chunks from Jesse Walker’s riposte at Reason, which I found via Roderick T Long:

Pariser’s picture is wrong, but a lot of his details are accurate. Facebook’s algorithms do determine which of your friends’ status updates show up in your news feed, and the site goes out of its way to make it difficult to alter or remove those filters. Google does track the things we search for and click on, and it does use that data to shape our subsequent search results. (Some of Pariser’s critics have pointed out that you can turn off Google’s filters fairly easily. This is true, and Pariser should have mentioned it, but in itself it doesn’t invalidate his point. Since his argument is that blinders are being imposed without most people’s knowledge, it doesn’t help much to say that you can avoid them if you know they’re there.)

It is certainly appropriate to look into how these new intermediaries influence our Internet experiences, and there are perfectly legitimate criticisms to be made of their workings. One reason I spend far less time on Facebook than I used to is because I’m tired of the site’s hamfisted efforts to guess what will interest me and to edit my news feed accordingly. Of course, that isn’t a case of personalization gone too far; it’s a case of a company that won’t let me personalize as I please.

[…]

Pariser contrasts the age of personalization with the days of the mass audience, when editors could ensure that the stories we needed to know were mixed in with the stories we really wanted to read. Set aside the issue (which Pariser acknowledges) of how good the editors’ judgment actually was; we’ll stipulate that newspapers and newscasters ran reports on worthy but unsexy subjects. Pariser doesn’t do the obvious next step, which is to look into how much people paid attention to those extra stories in the old days and how much they informally personalized their news intake by skipping subjects that didn’t interest them. Nor does he demonstrate what portion of the average Web surfer’s media diet such subjects constitute now. Nor does he look at how many significant stories that didn’t get play in the old days now have a foothold online. If you assume that a centralized authority (i.e., an editor) will do a better job of selecting the day’s most important stories than the messy, bottom-up process that is a social media feed, then you might conclude that those reports will receive less attention now than before. But barring concrete data, that’s all you have to go by: an assumption.

And in that paragraph I think we see the reason that Rejectionistas like Carr and Pariser get so many column-inches in mainstream media outlets in which to handwring: because the editors who give them the space still feel that filtering is something that they should be doing on behalf of their readers, who are surely too stupid to choose the right things.

Given current newsworthy events, I think that’s an attitude which – no matter how well-meaning – needs to be challenged more, not less; if the choice is between applying my own filters or allowing someone whose motivations are at best opaque and at worst Machiavellian and manipulative to do the filtering for me, well… you’ll be able to find me in my filter bubble.

Don’t worry, I’ll see you when you arrive; its walls are largely transparent. Believe it or not, some of us actually prefer it that way. 😉

More Luddite FUD about kids and computers

I was thinking it had been a while since we had one of these. Via FuturePundit, O NOEZ TEH TECHNOLOGIES BE MAKIN KIDS SUCK AT TEH REEDIN:

“Our study shows that the entry of computers into the home has contributed to changing children’s habits in such a manner that their reading does not develop to the same extent as previously. By comparing countries over time we can see a negative correlation between change in reading achievement and change in spare time computer habits which indicates that reading ability falls as leisure use of computers increases”, says Monica Rosén.

OK, I’ll see your study and raise you with this one:

The e-Learning Foundation says that children without access to a computer in the evening are being increasingly disadvantaged in the classroom. Research suggests that 1.2 million teenagers log on to revision pages every week and those using online resources were on average likely to attain a grade higher in exams.

The charity cites BBC research in which more than 100 students used the BBC Bitesize revision materials before their GCSE examination. The children were found to have achieved a grade lift compared to those who did not use the online revision guides. The BBC study says: “This is compared to factors such as teacher influence, which was found to produce no significant difference.”

Which is right? I have no idea. The point is that if you send social scientists looking for evidence to support a pretty nebulous and hard-to-quantify phenomenon, they’ll probably rustle some up. Seek and you shall find… or, I dunno, spend that research money on looking into ways that we can use technology more effectively? How’s about it, huh?

Computers and the internet are here to stay. The way kids learn and interact with the world has changed hugely in the last 100 years, and will keep changing, as it always has since the day some smart hunter/gatherer created the first baby sling. If all you’re gonna do is sit on your porch and kvetch about the good old days, you might as well let the kids get some enjoyment out of running around on the lawn.

Wikiversity

What would further education look like if it was run more like Wikipedia? That’s the question posed by a chap called David J Staley at the Educause conference in Anaheim, California last week, and he thinks it’s a pretty good idea [via SlashDot]:

First, it wouldn’t have formal admissions, said Mr. Staley, director of the Harvey Goldberg Center for Excellence in Teaching at Ohio State University. People could enter and exit as they wished. It would consist of voluntary and self-organizing associations of teachers and students “not unlike the original idea for the university, in the Middle Ages,” he said. Its curriculum would be intellectually fluid.

[ Those of you who’ve read Zen & The Art Of Motorcycle Maintenance may be reminded of Phaedrus’ University… ]

And instead of tenure, it would have professors “whose longevity would be determined by the community,” Mr. Staley said, and who would move back and forth between the “real world” and the university.

Universities “seem to be becoming more top-down and hierarchical at a time when more and more organizations are looking more like networks,” said Mr. Staley…

Not everyone agrees with Staley, of course:

“… he clearly understands Wikipedia about as well as he understands universities. That is, not very well. Wikipedia is peculiar. Its brilliance is in its peculiarity. It’s also more static, intellectually conservative, and elite-governed than most people believe.”

Valid points, but I think the problem is due to Staley using a specific institution as a placeholder for a more general set of ideas and methods; yes, Wikipedia is flawed (just like any human institution), but its underlying principles are symptomatic of a phase change in the way we look at organisation, which is what I suspect Staley was getting at.

We’ve discussed further education’s increasing unsuitability-for-purpose before, and much of that unsuitability comes from the rigidity of its hierarchical approach to both organisation and the categorisation of knowledge; a more open, flexible and fluid system might not produce the same numbers of people equipped with expensive pieces of vellum, but I suspect it would produce a lot more people with knowledge that was actually useful to them in the chaos of the contemporary economy. That said, until you manage to convince employers to hire people on the basis of their actual skillsets instead of their paper qualifications, you’re going to struggle to convince academia to abandon the business-like model that it currently operates under.

Interestingly, this chimes with a UK-based project I’ve been invited to get involved with, which I will discuss further when it’s more fully developed…

Wired suggests supplementary skill-sets for the 21st Century

Dovetailing rather neatly with Kevin Kelly’s piece on technological literacy last week, Wired has an oddly-formatted but provocative piece that they’ve entitled 7 Essential Skills That You Didn’t Learn In College.

The seven skills are all fairly pertinent, and very Futurismic as well… though I’m not sure how “essential” the remix culture aspect is. I’m inclined to think – perhaps uncharitably – that anyone aspiring to be an artist or creator who hasn’t already grasped those basic truths by observing the world around them is never going to get it, no matter how clearly you spell it out for them. Am I being unfair?

It’s a decent enough list, but not exhaustive by any means – what would you add to it? Or, equally, what would you remove?

Kevin Kelly on technological literacy

Via BoingBoing, here’s a New York Times piece by Kevin Kelly, where he discusses what he learned about technology and education while homeschooling his son for a year:

… as technology floods the rest of our lives, one of the chief habits a student needs to acquire is technological literacy — and we made sure it was part of our curriculum. By technological literacy, I mean the latest in a series of proficiencies children should accumulate in school. Students begin with mastering the alphabet and numbers, then transition into critical thinking, logic and absorption of the scientific method. Technological literacy is something different: proficiency with the larger system of our invented world. It is close to an intuitive sense of how you add up, or parse, the manufactured realm. We don’t need expertise with every invention; that is not only impossible, it’s not very useful. Rather, we need to be literate in the complexities of technology in general, as if it were a second nature.

He goes on to add some more specific aphorism-style lessons – koans for a digital world, almost:

  • Before you can master a device, program or invention, it will be superseded; you will always be a beginner. Get good at it.
  • The proper response to a stupid technology is to make a better one, just as the proper response to a stupid idea is not to outlaw it but to replace it with a better idea.
  • Nobody has any idea of what a new invention will really be good for. The crucial question is, what happens when everyone has one?
  • The older the technology, the more likely it will continue to be useful.
  • Find the minimum amount of technology that will maximize your options.

Some very sf-nal thinking in there… no surprise coming from Kelly, but even so, it reiterates something of Walter Russell Mead’s praise of the genre as the source of a useful way of looking at the world.

It’s also pleasing to see Kelly’s focus on trying to instil an appreciation of (and desire for) learning in his son. I’m far from the first person to observe that the UK education system has long favoured the retention of facts over independent analytical and critical thinking as educational goals, and I’ve seen plenty of reports that suggest the US system has a similar problem. Kelly’s aphorisms underline the point: if you make kids memorise facts, their education is obsolete as soon as it’s finished. Learning how to learn is the most important lesson of them all, and the one that seems hardest for schools and universities to deliver.