Tag Archives: neuroscience

Not actually “mental time travel” at all, is it?

But it makes for an attention-grabbing skiffy-tastic headline, AMIRITEZ? The actual story here is rather less OMFG: researchers at the University of Pennsylvania have obtained the first neurobiological evidence in support of the theory of episodic memory.

“Theories of episodic memory suggest that when I remember an event, I retrieve its earlier context and make it part of my present context,” Kahana said.  “When I remember my grandmother, for example, I pull back all sorts of associations of a different time and place in my life; I’m also remembering living in Detroit and her Hungarian cooking. It’s like mental time travel. I jump back in time to the past, but I’m still grounded in the present.”

Jumping back in time to perceptions of the past while still grounded in the present? Strikes me that rewatching old home movies is at least as good a metaphor as time travel, but I’ll grant you that a lot fewer people would have reported it if it had been pitched that way.

Neuroscience is still a fairly new scientific frontier, and while the last decade has seen the arrival of amazing new tools (and enhancements of existing ones), I believe it’s fair to say that these methods are still pretty crude, and the interpretations of results somewhat speculative. But even so, it’s interesting to see these early phases of our attempts to measure something as inherently intangible as the mind:

The memory experiment consisted of patients memorizing lists of 15 unrelated words. After seeing a list of the words in sequence, the subjects were distracted by doing simple arithmetic problems. They were then asked to recall as many words as they could in any order. Their implanted electrodes measured their brain activity at each step, and each subject read and recalled dozens of lists to ensure reliable data.

“By examining the patterns of brain activity recorded from the implanted electrodes,” Manning said, “we can measure when the brain’s activity is similar to a previously recorded pattern. When a patient recalls a word, their brain activity is similar to when they studied the same word. In addition, the patterns at recall contained traces of other words that were studied prior to the recalled word.”

“What seems to be happening is that when patients recall a word, they bring back not only the thoughts associated with the word itself but also remnants of thoughts associated with other words they studied nearby in time,” he said.

The findings provide a brain-based explanation of a memory phenomenon that people experience every day.

“This is why two friends you met at different points in your life can become linked in your memory,” Kahana said. “Along your autobiographical timeline, contextual associations will exist at every time scale, from experiences that take place over the course of years to experiences that take place over the course of minutes, like studying words on a list.”
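
For the code-minded: the analysis at the heart of that experiment boils down to pattern similarity over time. Here’s a minimal toy sketch in Python – my own invention for illustration, not the Penn team’s actual pipeline; the array shapes, the cosine metric and every variable name are assumptions – showing how you’d test whether recall-time activity resembles not just the studied word but its temporal neighbours on the list:

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two activity-pattern vectors."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def neighbour_similarity(study_patterns, recall_pattern, recalled_idx, max_lag=3):
    """Compare the pattern recorded at recall against the patterns recorded
    while studying the recalled word and its list neighbours."""
    sims = {}
    for lag in range(-max_lag, max_lag + 1):
        j = recalled_idx + lag
        if 0 <= j < len(study_patterns):
            sims[lag] = cosine_sim(study_patterns[j], recall_pattern)
    return sims

# Toy stand-in data: 15 studied words, 64-dimensional activity patterns.
rng = np.random.default_rng(0)
study = rng.normal(size=(15, 64))
# Fake a recall of word 7 whose pattern also carries echoes of words 6 and 8 –
# the "traces of other words studied nearby in time" described above.
recall = study[7] + 0.5 * (study[6] + study[8]) + rng.normal(size=64)
print(neighbour_similarity(study, recall, recalled_idx=7))
```

If contextual reinstatement is happening, similarity at lags ±1 comes out reliably higher than at distant lags – which is exactly the temporal-context effect Manning and Kahana describe above.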

Proustian neuroscience

In defiance of the title, I’ll keep this brief. (Yeah, I know, I know; some days I even crack myself up.)

A bit of advice I’ve heard a lot in relation to creative writing – more so with poetry than fiction, but far from exclusively so – is to deploy “the telling detail” to create verisimilitude. You know the way a writer drops one or two closely observed details into a scene, and they somehow make it all the more real, easy to visualise? (Like the Mastercard sticker on the shard of glass that pins someone to the back wall of a shop, f’rinstance, which I read in a story a few days back and just can’t get out of my head.)

Well, it turns out that may be tapping into a way that our brains store interrelated information. Like the way sometimes you forget a major facet of some event you experienced – say, the important speech given by someone at a conference – but you can remember some irrelevant little detail, like the way their blouse clashed with their PowerPoint slides? There’s a neuroscientific mechanism for that. (Maybe.) [via BigThink]

In response to external stimuli, dendritic spines in the cerebral cortex undergo structural remodeling, getting larger in response to repeated activity within the brain. This remodeling is thought to underlie learning and memory.

The MIT researchers found that a memory of a seemingly irrelevant detail — the kind of detail that would normally be relegated to a short-term memory — may accompany a long-term memory if two synapses on a single dendritic arbor are stimulated within an hour and a half of each other.

“A synapse that received a weak stimulation, the kind that would normally accompany a short-term memory, will express a correlate of a long-term memory if two synapses on a single dendritic branch were involved in a similar time frame,” Govindarajan said.

This occurs because the weakly stimulated synapse can steal or hitchhike on a set of proteins synthesised at or near the strongly stimulated synapse. These proteins are necessary for the enlargement of a dendritic spine that allows the establishment of a long-term memory.

“Not all irrelevant information is recalled, because some of it did not stimulate the synapses of the dendritic branch that happens to contain the strongly stimulated synapse,” Israely said.
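
The rule being described is simple enough to caricature in a few lines. Here’s a toy sketch – mine, not the MIT group’s model; only the ninety-minute window and the same-branch requirement come from the quotes above, and every name in it is invented for illustration:

```python
from dataclasses import dataclass

CAPTURE_WINDOW_MIN = 90  # the "hour and a half" window quoted above

@dataclass
class Stimulation:
    branch: str      # which dendritic branch the synapse sits on
    strong: bool     # strong stimulation triggers local protein synthesis
    time_min: float  # when the stimulation happened, in minutes

def becomes_long_term(stim, all_stims):
    """A weakly stimulated synapse is consolidated only if it can hitchhike
    on proteins from a strong stimulation on the SAME branch within the window."""
    if stim.strong:
        return True  # strong input consolidates on its own
    return any(
        other.strong
        and other.branch == stim.branch
        and abs(other.time_min - stim.time_min) <= CAPTURE_WINDOW_MIN
        for other in all_stims
    )

# The conference example from above: the speech (strong) and the clashing
# blouse (weak) arrive on the same branch minutes apart, so both stick;
# an unrelated weak input on a different branch fades.
events = [
    Stimulation("branch_a", strong=True, time_min=0),    # the important speech
    Stimulation("branch_a", strong=False, time_min=20),  # the clashing blouse
    Stimulation("branch_b", strong=False, time_min=25),  # some other trivia
]
for e in events:
    print(e.branch, "strong" if e.strong else "weak", "->", becomes_long_term(e, events))
```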

A real neural network

And today’s award for Endearingly Punning Post Headline of the Day goes to my good buddy m1k3y, who has graced grinding.be with a piece titled “Scientists train mouse nerves to grow through series of tubes”. The source for it is this Science News post, which explains how some clever folk have managed to encourage mouse neurons to grow their way along microscopic tubes of semiconductor material, making a crude self-assembling network. But don’t panic: there’s been no firing up of cyber-rodent self-awareness. Yet.

When the team seeded areas outside the tubes with mouse nerve cells the cells went exploring, sending their threadlike projections into the tubes and even following the curves of helical tunnels, the researchers report in an upcoming ACS Nano.

“They seem to like the tubes,” says biomedical engineer Justin Williams, who led the research. The approach offers a way to create elaborate networks with precise geometries, says Williams. “Neurons left to their own devices will kind of glom on to one another or connect randomly to other cells, neither of which is a good model for how neurons work.”

At this stage, the researchers have established that nerve cells are game for exploring the tiny tubes, which seem to be biologically friendly, and that the cell extensions will follow the network to link up physically. But it isn’t clear if the nerves are talking to each other, sending signals the way they do in the body. Future work aims to get voltage sensors and other devices into the tubes so researchers can eavesdrop on the cells. The confining space of the little tunnels should be a good environment for listening in, perhaps allowing researchers to study how nerve cells respond to potential drugs or to compare the behavior of healthy neurons with malfunctioning ones such as those found in people with multiple sclerosis or Parkinson’s.

No radical melding of meat and machine, then, but I suppose the coexistence of living cells and semiconductors has to be a step in that direction…

Comfortable in the world: ereaders vs. tablets

Tom Armitage at Berg compares the seductive gloss of the multipurpose iPad with the more homely functionality of the Kindle; an interesting (and user-centric) argument against technological convergence?

The iPad bursts into life, its backlight on, the blinking “slide to unlock” label hinting at the direction of the motion it wants you to make. That rich, vibrant screen craves attention.

The Kindle blinks – as if it’s remembering where it was – and then displays a screen that’s usually composed of text. The content of the screen changes, but the quality of it doesn’t. There’s no sudden change in brightness or contrast, no backlight. If you hadn’t witnessed the change, you might not think there was anything to pay attention to there.

[…]

Attention-seeking is something we often do when we’re uncomfortable, though – when we need to remind the world we’re still there. And the strongest feeling I get from my recently-acquired Kindle is that it’s comfortable in the world.

That matte, paper-like e-ink screen feels familiar, calm – as opposed to the glowing screens of so many devices that have no natural equivalents. The iPad seems natural enough when it’s off – it has a pleasant glass and metal aesthetic. But hit that home button and that glow reveals its alien insides.

Perhaps the Kindle’s comfort is down to its single-use nature. After all, it knows it already has your attention – when you come to it, you pick it up with the act of reading already in mind.

Provocative stuff… but in the interests of journalistic balance (yeah, right), here’s Jonah Lehrer anguishing over the observation that ereaders may be too easy to read:

I worry that this same impulse – making content easier and easier to see – could actually backfire with books. We will trade away understanding for perception. The words will shimmer on the screen, but the sentences will be quickly forgotten. Let me explain. Stanislas Dehaene, a neuroscientist at the College de France in Paris, has helped illuminate the neural anatomy of reading. It turns out that the literate brain contains two distinct pathways for making sense of words, which are activated in different contexts. One pathway is known as the ventral route, and it’s direct and efficient, accounting for the vast majority of our reading. The process goes like this: We see a group of letters, convert those letters into a word, and then directly grasp the word’s semantic meaning.

[…]

But the ventral route is not the only way to read. The second reading pathway – it’s known as the dorsal stream – is turned on whenever we’re forced to pay conscious attention to a sentence, perhaps because of an obscure word, or an awkward subclause, or bad handwriting. (In his experiments, Dehaene activates this pathway in a variety of ways, such as rotating the letters or filling the prose with errant punctuation.) Although scientists had previously assumed that the dorsal route ceased to be active once we became literate, Dehaene’s research demonstrates that even fluent adults are still forced to occasionally make sense of texts. We’re suddenly conscious of the words on the page; the automatic act has lost its automaticity.

This suggests that the act of reading observes a gradient of awareness. Familiar sentences printed in Helvetica and rendered on lucid e-ink screens are read quickly and effortlessly. Meanwhile, unusual sentences with complex clauses and smudged ink tend to require more conscious effort, which leads to more activation in the dorsal pathway. All the extra work – the slight cognitive frisson of having to decipher the words – wakes us up.

Someone email Nick Carr; I think we’ve found his next padawan. 😉

The real cognitive dissonance

“You keep using that phrase; I do not think it means what you think it means.”

I’ll raise my hand to a mea culpa on this one; cognitive dissonance is a concept whose discovery and explication I owe to none other than William Gibson, and I doubt I’m alone in that among the readership of Futurismic.

Thing is, like a lot of complex psychological concepts, the vernacular conception of cogDiss doesn’t quite match up with the original idea. Take it away, Ars Technica:

…within psychology, [cognitive dissonance] describes a somewhat distinct process, where people are forced to reject an item they actually like. Given this bit of awkwardness, people are prone to dealing with it in a fairly simple manner: they conclude that they never really liked the item that much in the first place. This finding, which implies that behavior can drive belief instead of the other way around, has remained controversial, but researchers are now claiming to have identified the neural activity that drives cognitive dissonance.

[…]

As expected, the authors are able to demonstrate cognitive dissonance in action: once an individual has chosen against an item, their ratings of it plunge. This effect was much, much smaller when a computer made a choice for an individual, although the later personal choice offered these subjects restored a bit of its impact. So, the researchers have confirmed both the previous work on cognitive dissonance and that of its critics: some fraction of the effect seems to be driven by people actually having stronger preferences than they state, but not all of it.
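
For concreteness, the free-choice design Ars is describing goes rate, choose, re-rate. Here’s a toy caricature of it – entirely my own, with made-up numbers; the one empirical anchor is the quoted finding that the rating drop shrinks sharply when a computer makes the choice instead:

```python
def post_choice_rating(pre_rating, rejected, self_chosen, dissonance=0.8):
    """Caricature of choice-induced preference change: an item the subject
    rejected gets re-rated lower, and the drop is much smaller when a
    computer made the choice on their behalf."""
    if not rejected:
        return pre_rating
    return pre_rating - (dissonance if self_chosen else 0.2 * dissonance)

# Two similarly rated items; "item_b" gets rejected in the choice phase.
pre = {"item_a": 5.1, "item_b": 5.0}

for self_chosen in (True, False):
    post = post_choice_rating(pre["item_b"], rejected=True, self_chosen=self_chosen)
    who = "subject chose" if self_chosen else "computer chose"
    print(f"{who}: {pre['item_b']:.1f} -> {post:.1f}")
```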

This is – like most neuroscience at this point – simply the first step on a long road of discovery, and things will doubtless turn out to be yet more complex. But in case you’re wondering why this research matters…

… the study pretty clearly shows that behavior isn’t driven simply by what we believe; our actions can feed back and alter our beliefs. Which, really, shouldn’t have surprised anyone, given the degree of post-hoc rationalization that most people engage in. However, as the authors note, this fact seemed to have escaped those who developed the economic systems that assume that people are rational actors.

I believe the word is “zing”.