
Looking back on Cyborg Month

When Tim Maly invited me to contribute to the 50 Posts About Cyborgs project, I had a nagging suspicion that I’d have a run-in with impostor syndrome… and I was right. The nearly complete run of posts (49 of them linked from the Tumblr above as I type this) contains some of the smartest and most brain-expanding material I’ve read in a long, long time, from some incredibly erudite writers and thinkers. If you have any interest whatsoever in the post-modern human condition in a technology-saturated world, in where we came from as a species and where we’re going, or in what being (post?)human actually means, then there’ll be something there for you to enjoy – so go read.

And many thanks to Tim for inviting me to take part; I’m one proud impostor. 🙂

Chuck Darwin, steampunk terraformer to Her Majesty Queen Victoria

Everyone’s been linking this one (though I saw it first at Chez Ken MacLeod), but it’s too good a story not to mention: the rich ecosystem of Ascension Island is not natural, but the result of a collaboration between Charles Darwin, the Botanical Gardens of Kew and the Royal Navy.

Ascension was an arid island, buffeted by dry trade winds from southern Africa. Devoid of trees at the time of Darwin and Hooker’s visits, the little rain that did fall quickly evaporated away.

Egged on by Darwin, in 1847 Hooker advised the Royal Navy to set in motion an elaborate plan. With the help of Kew Gardens – where Hooker’s father was director – shipments of trees were to be sent to Ascension.

The idea was breathtakingly simple. Trees would capture more rain, reduce evaporation and create rich, loamy soils. The “cinder” would become a garden.

So, beginning in 1850 and continuing year after year, ships started to come. Each deposited a motley assortment of plants from botanical gardens in Europe, South Africa and Argentina.

Soon, on the highest peak at 859m (2,817ft), great changes were afoot. By the late 1870s, eucalyptus, Norfolk Island pine, bamboo, and banana had all run riot.

And here’s your science fictional end-of-story conceptual slingshot bit:

In effect, what Darwin, Hooker and the Royal Navy achieved was the world’s first experiment in “terra-forming”. They created a self-sustaining and self-reproducing ecosystem in order to make Ascension Island more habitable.

Wilkinson thinks that the principles that emerge from that experiment could be used to transform future colonies on Mars. In other words, rather than trying to improve an environment by force, the best approach might be to work with life to help it “find its own way”.

Watch closely for Mars-themed short stories over the next twelve months; I’ve got five bucks here that says a lot of them will feature a capital city or main base called Darwin. 🙂

Which came first: the humans or the tools?

It’s very nearly the fiftieth anniversary* of a word well-used here at Futurismic: cyborg. So what better time for an anthropologist/archaeologist to advance his theory that Homo sapiens sapiens is in fact the first cyborg species, evolved more in response to the facilitations of its own technology than to the environment it inhabits? [via ScienceNotFiction]. Take it away, Timothy Taylor:

Darwin is one of my heroes, but I believe he was wrong in seeing human evolution as a result of the same processes that account for other evolution in the biological world – especially when it comes to the size of our cranium.

Darwin had to put large cranial size down to sexual selection, arguing that women found brainy men sexy. But biomechanical factors make this untenable. I call this the smart biped paradox: once you are an upright ape, all natural selection pressures should be in favour of retaining a small cranium. That’s because walking upright means having a narrower pelvis, capping babies’ head size, and a shorter digestive tract, making it harder to support big, energy-hungry brains. Clearly our big brains did evolve, but I think Darwin had the wrong mechanism. I believe it was technology. We were never fully biological entities. We are and always have been artificial apes.

[…]

Technology allows us to accumulate biological deficits: we lost our sharp fingernails because we had cutting tools, we lost our heavy jaw musculature thanks to stone tools. These changes reduced our basic aggression, increased manual dexterity and made males and females more similar. Biological deficits continue today. For example, modern human eyesight is on average worse than that of humans 10,000 years ago.

Unlike other animals, we don’t adapt to environments – we adapt environments to us. We just passed a point where more people on the planet live in cities than not. We are extended through our technology. We now know that Neanderthals were symbolic thinkers, probably made art, had exquisite tools and bigger brains. Does that mean they were smarter?

Evidence shows that over the last 30,000 years there has been an overall decrease in brain size and the trend seems to be continuing. That’s because we can outsource our intelligence. I don’t need to remember as much as a Neanderthal because I have a computer. I don’t need such a dangerous and expensive-to-maintain biology any more. I would argue that humans are going to continue to get less biologically intelligent.

Interesting… and could be taken as vindication of the hand-wringing of Nick Carr et al over how teh intarwubz be makin uz dumb.

But change is neither good nor bad; it just is. Should we lament this outsourcing of our intelligence (I’d prefer the word outboarding, myself, but it’s not so trendy and probably makes people think of motorboats)? Is biological intelligence necessarily more desirable (or even “right” or “good”) than our cybernetic symbiosis? Taylor, thankfully, is not advocating a return to hairshirt primitivism in response to his theory… but I’d bet good money that a whole bunch of folk will do exactly that.

[ * There’s a reason I’m aware of this anniversary, and it’s not that I’m obsessed with the etymological history of neologisms**. You’ll find out how and why I possess that nugget of knowledge in the near future. ]

[ ** Actually, I am obsessed with the etymology of neologisms. It’s like butterfly collecting for the altermodern age. ]

Maybe it doesn’t matter that the internet is “making us stupid”

High-profile internet naysayer and technology curmudgeon Nick Carr is cropping up all over the place; these things happen when one has a new book in the offing, y’know*. He’s the guy who claims that Google is making us stupid, that links embedded in HTML sap our ability to read and understand written content (cognitive penalties – a penalty that even the British can do properly, AMIRITE?), and much much more.

The conclusions of Carr’s new book, The Shallows – that, in essence, we’re acquiring a sort of attention deficit problem from being constantly immersed in a sea of bite-sized and interconnected info – have been given a few polite kickings, such as this one from Jonah Lehrer at the New York Times. I’ve not read The Shallows yet, though I plan to; nonetheless, from the quotes and reviews I’ve seen so far, it sounds to me like Carr is mapping the age-related degradation of his own mental faculties onto the world as a whole, and looking for something to blame.

I should add at this point that, although I disagree with a great number of Carr’s ideas, he’s a lucid thinker, and well worth reading. As Bruce Sterling points out, grumpy gadfly pundits like Carr are useful and necessary for a healthy scene, because the urge to prove them wrong drives further innovation, thinking, research and development. He’s at least as important and worth reading as the big-name webvangelists… who all naturally zapped back at Carr’s delinkification post with righteous wrath and snark. The joy of being a mere mortal is, surely, to watch from a safe point of vantage while the gods do battle… 😉

But back to the original point: there’s always a trade-off when we humans acquire new technologies or skills, and what’s missing from commentators decrying these apparent losses is any suggestion that we might be gaining something else – maybe something better – as part of the deal; technological symbiosis is not a zero-sum game, in other words. Peripherally illustrating the point, George Dvorsky points to some research that suggests that too good a memory is actually an evolutionary dead end, at least for foraging mammals:

These guys have created one of the first computer models to take into account a creature’s ability to remember the locations of past foraging successes and revisit them.

Their model shows that in a changing environment, revisiting old haunts on a regular basis is not the best strategy for a forager.

It turns out instead that a better strategy is to inject an element of randomness into a regular foraging pattern. This improves foraging efficiency by a factor of up to 7, say Boyer and Walsh.

Clearly, creatures of habit are not as successful as their opportunistic cousins.

That makes sense. If you rely on that same set of fruit trees for sustenance, then you are in trouble if these trees die or are stripped by rivals. So the constant search for new sources of food pays off, even if it consumes large amounts of resources. “The model forager typically spends half of its traveling time revisiting previous places in an orderly way, an activity which is reminiscent of the travel routes used by real animals,” say Boyer and Walsh.

They conclude that memory is useful because it allows foragers to find food without the effort of searching. “But excessive memory use prevents the forager from updating its knowledge in rapidly changing environments,” they say.
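The trade-off Boyer and Walsh describe is simple enough to play with yourself. Here’s a quick toy sketch of the principle (to be clear: my own back-of-envelope model, not their actual simulation – the site counts, change rate and exploration probability are all made up for illustration). A “creature of habit” only ever revisits its best-remembered food site; an “opportunist” injects a dash of randomness, and in a changing environment the opportunist eats better:

```python
import random

def forage(p_explore, steps=2000, n_sites=50, change_rate=0.02, seed=1):
    """Toy forager: each step, either revisit the best-remembered site
    or pick a site at random. Site yields occasionally change (the
    'changing environment'), so pure memory goes stale."""
    rng = random.Random(seed)
    yields = [rng.random() for _ in range(n_sites)]  # current food value per site
    memory = {}                                      # site -> yield last seen there
    food = 0.0
    for _ in range(steps):
        if memory and rng.random() >= p_explore:
            site = max(memory, key=memory.get)       # habit: exploit best-remembered site
        else:
            site = rng.randrange(n_sites)            # opportunism: explore at random
        food += yields[site]
        memory[site] = yields[site]                  # refresh memory with what we just saw
        for s in range(n_sites):                     # the environment drifts under us
            if rng.random() < change_rate:
                yields[s] = rng.random()
    return food

creature_of_habit = forage(p_explore=0.0)  # pure memory, no randomness
opportunist = forage(p_explore=0.3)        # a dash of randomness injected
print(round(creature_of_habit), round(opportunist))
```

The habit-bound forager locks onto one remembered site whose value drifts back towards the average, while the opportunist keeps stumbling across (and then exploiting) fresh high-value sites – a crude illustration of why excessive memory use stops paying off when the world keeps changing.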

This reminds me of the central idea behind Peter Watts’ Blindsight – the implication that intelligence itself, which we tend to think of as the inevitable high pinnacle of evolutionary success, is actually a hideously inefficient means to genetic survival, and that as such, we’re something of an evolutionary dead end ourselves. Which reminds me in turn of me mentioning evolutionary “arms races” the other day; perhaps, instead of being in an arms race against our own cultural and technological output as a species, we’re entering a sort of counterbalancing symbiosis with it. Should we start considering technology as a part of ourselves rather than a separate thing? Are we not merely a species of cyborgs, but a cyborg species?

[ * The irony here being that almost all the discussion and promotion of Carr’s work that does him any good occurs… guess where? Hint: not in brick’n’mortar bookstores. ]

The evolution of addiction and the fetishisation of smoking

Not-entirely-surprising news from the world of evolutionary psychiatry: human use of psychoactive compounds found in plants and animals is thousands of years old, and evolutionary selection may actually have favoured those of our ancestors who were wired to get a kick from certain substances:

According to Randolph Nesse, evolutionary psychiatrist at the University of Michigan, at some time in humanity’s distant past, individuals whose brains had a heightened response to emotion-linked neurotransmitters (such as dopamine and serotonin) were better suited to survival.

This meant that as the generations passed, heightened response became the norm. […]

Archaeologists have found evidence of kola nut (caffeine), tobacco (nicotine), khat (an amphetamine-like plant), betel nut, and coca, at various sites dating back at least 13,000 years, indicating that humans have, in fact, been drug users for a very long time. Across the globe, people in non-Western cultures are very familiar with these and other mind-altering substances.

“It’s widely believed that human drug use is a new and pathological phenomenon,” says Roger Sullivan, an anthropologist at California State University at Sacramento. “But psychoactive plant toxins were a mundane occurrence in the environments of hominid evolution, and our ancestors may have been exploiting plant drugs for very long periods of time.”

Sullivan and Edward Hagen of Humboldt University in Berlin believe that compulsively seeking these items in the past might have been adaptive during times when nutrients were hard to find.

Human beings: getting baked to deal with hard times since 11,000 BC. Goes some way to explaining why drug legislation – a very very recent phenomenon indeed – has done so little to stop folk wanting to get loaded… and promises a whole new generation of slogans from psychoactive evangelists.

Speaking of legislation, control and addictive substances, here’s a research project of staggering pointlessness: how many videos of people smoking cigarettes in a fetishistic context are easily viewable by teenagers on YouTube?

“The high frequency of smoking fetish videos concerns me,” says Hye-Jin Paek, associate professor of advertising, public relations, and retailing.

(With that sort of background, one assumes she’s eminently qualified to know how well associative imagery can push psychological buttons… )

Paek conducted the study of “smoking fetish” videos—videos that combine smoking and sexuality. “The fact that we can see the videos and analyze their content means that teenagers can see them too.”

[…]

The majority of smoking fetish videos studied explicitly portrayed smoking behaviors, such as lighting up, inhaling, exhaling, and holding the tobacco product. More than half were rated PG-13 or R.

More than 21 percent of the videos contained at least one of the five fetish elements defined in the paper, including gloves, high heels, boots, stockings, and leather or latex clothes.

More than a fifth? O NOES! Well then, we’d better censor all that stuff pretty sharpish, hadn’t we – after all, wrapping up a behaviour one wants to discourage in veiled mystique, puritanical panics and age restrictions has always worked so well before… if we airbrush out everything we don’t like in the world, eventually everyone will be just as self-satisfied as we are!

[ Pre-emptive: I’m not suggesting that teenagers or anyone else smoking cigarettes is a “good” thing. What I’m suggesting is that worrying about videos of people smoking on YouTube as a strong cause of teenage smoking is laughably foolish. ]