Comfortable in the world: ereaders vs. tablets

Paul Raven @ 17-01-2011

Tom Armitage at Berg compares the seductive gloss of the multipurpose iPad with the more homely functionality of the Kindle; an interesting (and user-centric) argument against technological convergence?

The iPad bursts into life, its backlight on, the blinking “slide to unlock” label hinting at the direction of the motion it wants you to make. That rich, vibrant screen craves attention.

The Kindle blinks – as if it’s remembering where it was – and then displays a screen that’s usually composed of text. The content of the screen changes, but the quality of it doesn’t. There’s no sudden change in brightness or contrast, no backlight. If you hadn’t witnessed the change, you might not think there was anything to pay attention to there.

[…]

Attention-seeking is something we often do when we’re uncomfortable, though – when we need to remind the world we’re still there. And the strongest feeling I get from my recently-acquired Kindle is that it’s comfortable in the world.

That matte, paper-like e-ink screen feels familiar, calm – as opposed to the glowing screens of so many devices that have no natural equivalents. The iPad seems natural enough when it’s off – it has a pleasant glass and metal aesthetic. But hit that home button and that glow reveals its alien insides.

Perhaps the Kindle’s comfort is down to its single-use nature. After all, it knows it already has your attention – when you come to it, you pick it up with the act of reading already in mind.

Provocative stuff… but in the interests of journalistic balance (yeah, right), here’s Jonah Lehrer anguishing over the observation that ereaders may be too easy to read:

I worry that this same impulse – making content easier and easier to see – could actually backfire with books. We will trade away understanding for perception. The words will shimmer on the screen, but the sentences will be quickly forgotten. Let me explain. Stanislas Dehaene, a neuroscientist at the College de France in Paris, has helped illuminate the neural anatomy of reading. It turns out that the literate brain contains two distinct pathways for making sense of words, which are activated in different contexts. One pathway is known as the ventral route, and it’s direct and efficient, accounting for the vast majority of our reading. The process goes like this: We see a group of letters, convert those letters into a word, and then directly grasp the word’s semantic meaning.

[…]

But the ventral route is not the only way to read. The second reading pathway – it’s known as the dorsal stream – is turned on whenever we’re forced to pay conscious attention to a sentence, perhaps because of an obscure word, or an awkward subclause, or bad handwriting. (In his experiments, Dehaene activates this pathway in a variety of ways, such as rotating the letters or filling the prose with errant punctuation.) Although scientists had previously assumed that the dorsal route ceased to be active once we became literate, Dehaene’s research demonstrates that even fluent adults are still occasionally forced to consciously make sense of texts. We’re suddenly conscious of the words on the page; the automatic act has lost its automaticity.

This suggests that the act of reading observes a gradient of awareness. Familiar sentences printed in Helvetica and rendered on lucid e-ink screens are read quickly and effortlessly. Meanwhile, unusual sentences with complex clauses and smudged ink tend to require more conscious effort, which leads to more activation in the dorsal pathway. All the extra work – the slight cognitive frisson of having to decipher the words – wakes us up.

Someone email Nick Carr; I think we’ve found his next padawan. 😉
