Tag Archives: learning

Kevin Kelly on technological literacy

Via BoingBoing, here’s a New York Times piece by Kevin Kelly, where he discusses what he learned about technology and education while homeschooling his son for a year:

… as technology floods the rest of our lives, one of the chief habits a student needs to acquire is technological literacy — and we made sure it was part of our curriculum. By technological literacy, I mean the latest in a series of proficiencies children should accumulate in school. Students begin with mastering the alphabet and numbers, then transition into critical thinking, logic and absorption of the scientific method. Technological literacy is something different: proficiency with the larger system of our invented world. It is close to an intuitive sense of how you add up, or parse, the manufactured realm. We don’t need expertise with every invention; that is not only impossible, it’s not very useful. Rather, we need to be literate in the complexities of technology in general, as if it were a second nature.

He goes on to add some more specific aphorism-style lessons – koans for a digital world, almost:

  • Before you can master a device, program or invention, it will be superseded; you will always be a beginner. Get good at it.
  • The proper response to a stupid technology is to make a better one, just as the proper response to a stupid idea is not to outlaw it but to replace it with a better idea.
  • Nobody has any idea of what a new invention will really be good for. The crucial question is, what happens when everyone has one?
  • The older the technology, the more likely it will continue to be useful.
  • Find the minimum amount of technology that will maximize your options.

Some very sf-nal thinking in there… no surprise coming from Kelly, but even so, it reiterates something of Walter Russell Mead’s praise of the genre as the source of a useful way of looking at the world.

It’s also pleasing to see Kelly’s focus on trying to instil an appreciation of (and desire for) learning in his son. I’m far from the first person to observe that the UK education system has long favoured the retention of facts over independent analytical and critical thinking as educational goals, and I’ve seen plenty of reports that suggest the US system has a similar problem. Kelly’s aphorisms underline the point: if you make kids memorise facts, their education is obsolete as soon as it’s finished. Learning how to learn is the most important lesson of them all, and the one that seems hardest for schools and universities to deliver.

The multiphrenic world: Stowe Boyd strikes back on “supertasking”

… which is really a neologism for its own sake (a favourite gambit of Boyd’s, as far as I can tell). But let’s not let that distract us from his radical (and lengthy) counterblast to a New York Times piece about “gadget addiction”, which chimes with Nick Carr’s Eeyore-ish handwringing over attention spans, as mentioned t’other day:

The fear mongers will tell us that the web, our wired devices, and remaining connected are bad for us. It will break down the nuclear family, lead us away from the church, and channel our motivations in strange and unsavory ways. They will say it’s like drugs, gambling, and overeating, that it’s destructive and immoral.

But the reality is that we are undergoing a huge societal change, one that is as fundamental as the printing press or harnessing fire. Yes, human cognition will change, just as becoming literate changed us. Yes, our sense of self and our relationships to others will change, just as it did in the Renaissance. Because we are moving into a multiphrenic world — where the self is becoming a network ‘of multiple socially constructed roles shaping and adapting to diverse contexts’ — it is no surprise that we are adapting by becoming multitaskers.

The presence of supertaskers does not mean that some are inherently capable of multitasking and others are not. Like all human cognition, this is going to be a bell-curve of capability.

As always, Boyd is bullish about the upsides; personally, I think there’s a balance to be found between the two viewpoints here, but – doubtless due to my own citizenship of Multiphrenia – I’m bucking the neophobics and leaning a long way toward the positives. And that’s speaking as someone who’s well aware that he’s not a great multitasker…

But while we’re talking about the adaptivity of the human mind, MindHacks would like to point out the hollowness of one of the more popular buzzwords of the subject, namely neuroplasticity [via Technoccult, who point out that Nick Carr uses the term a fair bit]:

It’s currently popular to solemnly declare that a particular experience must be taken seriously because it ‘rewires the brain’ despite the fact that everything we experience ‘rewires the brain’.

It’s like a reporter from a crime scene saying there was ‘movement’ during the incident. We have learnt nothing we didn’t already know.

Neuroplasticity is common in popular culture at this point in time because mentioning the brain makes a claim about human nature seem more scientific, even if it is irrelevant (a tendency called ‘neuroessentialism’).

Clearly this is rubbish and every time you hear anyone, scientist or journalist, refer to neuroplasticity, ask yourself what specifically they are talking about. If they don’t specify or can’t tell you, they are blowing hot air. In fact, if we banned the word, we would be no worse off.

That’s followed by a list of the phenomena that neuroplasticity might properly be referring to, most of which are changes in the physical structure of the brain rather than cognitive changes in the mind itself. Worth taking a look at.

The Grand Unified Theory of Artificial Intelligence

Artificial intelligence research has long harboured two basic (and opposed) approaches – the earlier method of trying to discover the “rules of thought”, and the more modern probabilistic approach to machine learning. Now some smart guy from MIT called Noah Goodman reckons he has reconciled the two approaches to artificial learning in his new model of thought [via SlashDot]:

As a research tool, Goodman has developed a computer programming language called Church — after the great American logician Alonzo Church — that, like the early AI languages, includes rules of inference. But those rules are probabilistic. Told that the cassowary is a bird, a program written in Church might conclude that cassowaries can probably fly. But if the program was then told that cassowaries can weigh almost 200 pounds, it might revise its initial probability estimate, concluding that, actually, cassowaries probably can’t fly.

“With probabilistic reasoning, you get all that structure for free,” Goodman says. A Church program that has never encountered a flightless bird might, initially, set the probability that any bird can fly at 99.99 percent. But as it learns more about cassowaries — and penguins, and caged and broken-winged robins — it revises its probabilities accordingly. Ultimately, the probabilities represent all the conceptual distinctions that early AI researchers would have had to code by hand. But the system learns those distinctions itself, over time — much the way humans learn new concepts and revise old ones.
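The cassowary example is ordinary Bayesian belief revision, which can be sketched in a few lines of plain Python. (The prior and likelihood numbers below are invented for illustration; this is not Church code, just Bayes’ rule applied by hand.)

```python
# Toy illustration of probabilistic belief revision, in the spirit of the
# cassowary example above. All numbers are invented for illustration.

def update(prior, likelihood_if_true, likelihood_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = prior * likelihood_if_true
    return numerator / (numerator + (1 - prior) * likelihood_if_false)

# Strong prior that any given bird can fly, as in the article.
p_flies = 0.9999

# New evidence: the bird weighs almost 200 pounds. Such weight is far more
# likely among flightless birds than among fliers (invented likelihoods).
p_flies = update(p_flies, likelihood_if_true=1e-5, likelihood_if_false=0.5)

print(round(p_flies, 3))  # ~0.167: cassowaries probably can't fly after all
```

One striking piece of evidence is enough to overturn a near-certain prior, which is the “revision” behaviour the article describes.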

It’ll be interesting to watch the transhumanist and Singularitarian responses to this one, even if all they do is debunk Goodman’s approach entirely.

Software that learns to recognise faces and voices like a child

A computer scientist at the University of Pennsylvania has decided to mimic the way children learn to recognise faces and voices in order to speed up the artificial learning curve of intelligent systems:

Using novel learning algorithms that combine audio, video, and text streams, Taskar and his research team are teaching computers to recognize faces and voices in videos. Their system recognizes when someone in the video or audio mentions a name, whether he or she is talking about himself or herself, or whether he or she is talking about someone in the third person. It then maps that correspondence between names and faces and names and voices.

“An intelligent system needs to understand more than just visual input, and more than just language input or audio or speech. It needs to integrate everything in order to really make any progress,” Taskar says.

The information Taskar’s team feeds into the system is free training data harvested from the Internet. Attempts to teach computers visual recognition in the pre-Internet age were hampered in large part by a lack of training content. Today, Taskar says, the Internet provides a “massive digitization of knowledge.” People post videos, comments, blogs, music, and critiques about their favorite things and interests.
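At its simplest, the mapping between names and faces described above is a co-occurrence problem: if a name keeps turning up in the script whenever a particular face track appears in the video, they probably belong together. The sketch below shows only that counting intuition, with invented scene data; the real system learns jointly from audio, video and text.

```python
# Toy sketch of name-to-face mapping by co-occurrence counting. Each "scene"
# pairs the names mentioned in the script with the anonymous face tracks
# detected in the matching clip (hypothetical data, not the real pipeline).

from collections import defaultdict
from itertools import product

scenes = [
    ({"Kate", "Anna Lucia"}, {"face_1", "face_2"}),
    ({"Kate"}, {"face_1"}),
    ({"Anna Lucia"}, {"face_2"}),
]

# Count how often each (name, face) pair appears in the same scene.
counts = defaultdict(int)
for names, faces in scenes:
    for name, face in product(names, faces):
        counts[(name, face)] += 1

# Label each face with the name it co-occurs with most often.
all_names = {n for ns, _ in scenes for n in ns}
all_faces = {f for _, fs in scenes for f in fs}
labels = {face: max(all_names, key=lambda n: counts[(n, face)])
          for face in all_faces}

print(labels)  # face_1 -> Kate, face_2 -> Anna Lucia
```

The solo scenes disambiguate the crowded one, which is roughly why screenplays help even though, as Taskar notes below, they never tell you directly who is who.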

Hah! And they said YouTube would never do any real good! Taskar’s computer seems destined for a life of increasing frustration with irresolvable plot lines, though, as they’re training it by showing it episodes of Lost:

As Taskar’s team feeds more data about Lost into the computer—such as video clips, scripts, or blogs—the system improves at identifying people in the video. If, for example, a clip contains footage of characters Kate and Anna Lucia, after being taught, the computer will recognize their faces.

“The algorithm is learning this from what people say, or from screenplays as well,” Taskar adds. “The screenplay doesn’t tell you who is who, but it tells you there’s a scene with [two characters] talking to each other.”

Taskar says the information the research has produced can be helpful in many ways, particularly in searching videos for content. Currently, if a father is searching for a photo of his daughter playing with the family dog in his gigabytes of photos and videos on his hard drive, unless the photo is tagged “daughter playing with dog,” chances are he isn’t going to be able to find it.

Well, that’s your consumer-level pitch, sure, but the system will be too large and ungainly (and expensive) for Joe Average for a long time. Taskar should probably talk to the UK government… that panoply of CCTV cameras keeps growing, and it costs big money to hire people to watch their output. And what could possibly go wrong with putting an automated recognition system in charge of crime prevention? [image by bixentro]

Old dogs and new tricks: web use good for the elderly brain

Younger readers (or those with spousal units prone to nagging about excessive time spent in front of a computer) may wish to arm themselves with the news that internet use appears to restore and strengthen brain function, particularly in the elderly. In other words, surfing the web is keeping your brain young and fit. [via The End Of Cyberspace; image by mhofstrand]

For the research, 24 neurologically normal adults, aged 55 to 78, were asked to surf the Internet while hooked up to an MRI machine. Before the study began, half the participants had used the Internet daily, and the other half had little experience with it.

After an initial MRI scan, the participants were instructed to do Internet searches for an hour on each of seven days in the next two weeks. They then returned to the clinic for more brain scans.

“At baseline, those with prior Internet experience showed a much greater extent of brain activation,” Small said.

After at-home practice, however, those who had just been introduced to the Internet were catching up to those who were old hands, the study found.

Of course, this doesn’t take into account the theory that the structure of the web means we only ever get exposed to ideas that we already find agreeable, but I remain unconvinced of that notion, anyway. A brief glance at history shows that people were always perfectly capable of ignoring information that they found unpalatable, long before the internet (or even the printing press) existed…

But while you’re advising grandma to fire up Firefox, be sure to remind her that it’s an aid to learning, not a replacement for it. Recent research suggests we learn much more quickly if we risk answering incorrectly before looking up the correct response… so try guessing before you Google it, in other words.