Whoa; capitalism is like The Matrix, dude

The latest book in the wave of economics-for-the-layman texts, piggybacking on the global sense of “WTF just happened?” in the wake of the subprime collapse and its ripples, is 23 Things They Don’t Tell You About Capitalism from Cambridge economist Dr Ha-Joon Chang, who apparently manages to pair a currently popular theme (“free markets are bad”) with a less-popular counterpoint (“the welfare state should be expanded”) [via TheBigThink].

“It is like The Matrix. There is a reality where things could and should be better,” he said. “In order to wake people up to that alternative reality, you need to show them that it isn’t impossible. I’m not necessarily saying that I have a solution, but we have to recognise that some of the things we accept as inevitable aren’t.”

But while Dr Chang may not have the answer, he is sure of the problem – arguing that free-market capitalism has left the global economy more unstable, and people with less job security and greater feelings of insecurity, than ever before. His conviction that, post-recession, we should be rebuilding our country in a “moral” way – by acknowledging the social consequences of economic choices such as benefit cuts and job losses – will strike a chord with many.

“Another myth that needs to be busted is the idea that we can discuss economics without any moral implications,” he said. “What kind of economy we build changes us, so what we do in terms of monetary policy determines who we are.”

Kudos to any pundit honest enough to admit that they don’t have a silver bullet in the breech. I’m in close agreement with Chang’s thoughts about the morality of economic processes, though I take some issue with his rejection of free markets (a rejection which is, to be fair, hardly new to Chang). I’d agree that what are usually described as “free markets” are indeed broken (there’s too much evidence to ignore), but I remain to be convinced that those markets are truly “free” in any way that Adam Smith himself would have recognised. I’m no economics boffin, of course, and as such I’m not going to state with certainty that truly free markets would be the solution to all our economic woes… but I think it’s fair to say that regulation is never going to prevent disasters and abuses in a system wherein certain groups and individuals are given (or simply invent for themselves) ways of avoiding or circumventing that regulation.

Like Chang, I don’t have a solution, but I suspect our best route forward is through the territory of transparency. Another thing that would help would be encouraging economic actors to be less trusting, but how that could be achieved is quite beyond me; the duplicitous and deceitful tactics of lending institutions prey on what appears to be a hardwired psychological blindspot whereby we privilege short-term advantage over long-term consequences. For example, would the global collapse still have happened if all the people who simply couldn’t afford the mortgages they were signed up to had looked rationally at their situation and never taken them on? Which is easier: to prevent institutions flogging dodgy deals, or to prevent people from signing a contract they don’t fully understand?

Easier said than done, of course: the rational actor is possibly the greatest myth of economic theory. But could the rational actor be nurtured? I think that perhaps you wouldn’t need to educate everyone in the intricacies of economic theory to achieve this; simply encouraging a pathological cynicism toward the deal that looks too good to be true might be enough (which recent events seem to have gone some small way toward accomplishing), and in a networked peer-to-peer society, more knowledgeable and trustworthy individuals would develop a reputation for reliable advice on complex financial issues. I’d certainly place more trust in a succession of recommendations and reviews from ordinary people than in a diploma certificate and an expensive office…

… and it looks like my anarcho-utopianism is showing again*. I have no idea whether it would be possible to rationalise the economic thinking of everyday people (though I suspect that, if it were to occur, it would most likely occur as an emergent phenomenon in small local groups at first, possibly piggybacking on local currency movements and/or cooperative communities)… but I doubt it’s any more impossible than building a system of laws that’s big enough to encapsulate the world economy, yet devoid of the regulatory loopholes and protectionism that tend to push us into these periodic catastrophes.

Shorter version: the grass is so much greener on that side of the fence, but I have no idea how we should climb it.

[ * It’s awkward and frustrating, sometimes, being cynical enough to poke holes in one’s own underlying optimism about people. People call me a pessimist, but that’s not the case: if anything, I’m a pragmatic optimist. And so much for nomenclature. ]

Why isn’t there a gender-neutral pronoun?

Actually, there are dozens of gender-neutral pronouns, and that’s true even if you limit your search to the science fiction canon. But calls for a gender-neutral pronoun are much older than you might have thought, as the Oxford University Press blog explains, and we still haven’t managed to adopt one [via TheBigThink]:

Such discussions in the 1880s and 90s did nothing to shake up the pronoun paradigm, and nothing came of subsequent proposals for heer, hie, ha, hesh, thir, she (together with shis and shim), himorher, se, heesh, hse, kin, ve, ta, tey, fm, z, ze, shem, se, j/e, jee, ey, ho, po, ae, et, heshe, hann, herm, ala, de, ghach, han, he, mef, ws, and ze [a list with dates and sources for many of these pronouns can be found here].

Flash forward to 1978, when The Times (of London) prints a letter in response to yet another call for a new “unisex” pronoun set, advocating le, lim, ler, and lers. (And another correspondent tersely suggests it.)

Despite this wealth of coinage, there is still no widely accepted gender-neutral pronoun. In part, that’s because pronoun systems are slow to change, and when change comes, it is typically natural rather than engineered.

For those of us who work with words, of course, there are canonical rulesets to which we are supposed to adhere. But it’s the ruleset of grammar that long forbade the use of the singular they:

… despite the almost universal condemnation of the coordinate he or she by supporters of gender-neutral pronouns, the rule books now opt for he or she and not an invented word to replace the generic he. Students who once were taught that the masculine pronoun must always be used in cases of mixed or doubtful gender are now taught instead to use coordinate forms, not for gender balance or grammatical precision, but simply because that’s the new rule. Those writers who question the rule, who realize that multiple he-or-she’s just don’t make for readable prose, won’t seek out a new gender-neutral pronoun. Instead they’ll recast some sentences as plural, and for the rest they’ll just take their chances with singular they. After all, if you, which is also gender neutral, can serve both for singular and plural, why can’t they do the same? In any case, after more than 100 attempts to coin a gender-neutral pronoun over the course of more than 150 years, thon and its competitors will remain what they always have been, the words that failed.

Regular readers may have noticed that I tend to use the singular they wherever possible – indeed, I’ve been called out on it in the comments here once or twice, so that grammatical rule dies hard. I really can’t remember when I started doing it, either; I’m not sure whether I was taught that way at school (though I doubt it, given the conservatism of my education).

All this, I suppose, makes gender-neutral pronouns a case study in the seemingly universal human urge to create multiple new rules in order to fix a problem that could be obviated by dropping or loosening a single old rule…

Which came first: the humans or the tools?

It’s very nearly the fiftieth anniversary* of a word well-used here at Futurismic: cyborg. So what better time for an anthropologist/archaeologist to advance his theory that Homo sapiens sapiens is in fact the first cyborg species, evolved more in response to the facilitations of its own technology than to the environment it inhabits? [via ScienceNotFiction]. Take it away, Timothy Taylor:

Darwin is one of my heroes, but I believe he was wrong in seeing human evolution as a result of the same processes that account for other evolution in the biological world – especially when it comes to the size of our cranium.

Darwin had to put large cranial size down to sexual selection, arguing that women found brainy men sexy. But biomechanical factors make this untenable. I call this the smart biped paradox: once you are an upright ape, all natural selection pressures should be in favour of retaining a small cranium. That’s because walking upright means having a narrower pelvis, capping babies’ head size, and a shorter digestive tract, making it harder to support big, energy-hungry brains. Clearly our big brains did evolve, but I think Darwin had the wrong mechanism. I believe it was technology. We were never fully biological entities. We are and always have been artificial apes.

[…]

Technology allows us to accumulate biological deficits: we lost our sharp fingernails because we had cutting tools, we lost our heavy jaw musculature thanks to stone tools. These changes reduced our basic aggression, increased manual dexterity and made males and females more similar. Biological deficits continue today. For example, modern human eyesight is on average worse than that of humans 10,000 years ago.

Unlike other animals, we don’t adapt to environments – we adapt environments to us. We just passed a point where more people on the planet live in cities than not. We are extended through our technology. We now know that Neanderthals were symbolic thinkers, probably made art, had exquisite tools and bigger brains. Does that mean they were smarter?

Evidence shows that over the last 30,000 years there has been an overall decrease in brain size and the trend seems to be continuing. That’s because we can outsource our intelligence. I don’t need to remember as much as a Neanderthal because I have a computer. I don’t need such a dangerous and expensive-to-maintain biology any more. I would argue that humans are going to continue to get less biologically intelligent.

Interesting… and it could be taken as a vindication of the hand-wringing of Nick Carr et al over how teh intarwubz be makin uz dumb.

But change is neither good nor bad; it just is. Should we lament this outsourcing of our intelligence (I’d prefer the word outboarding, myself, but it’s not so trendy and probably makes people think of motorboats)? Is biological intelligence necessarily more desirable (or even “right” or “good”) than our cybernetic symbiosis? Taylor, thankfully, is not advocating a return to hairshirt primitivism in response to his theory… but I’d bet good money that a whole bunch of folk will.

[ * There’s a reason I’m aware of this anniversary, and it’s not that I’m obsessed with the etymological history of neologisms**. You’ll find out how and why I possess that nugget of knowledge in the near future. ]

[ ** Actually, I am obsessed with the etymology of neologisms. It’s like butterfly collecting for the altermodern age. ]

Brazilian farming methods could feed a hungry planet

There are few things I enjoy more during my daily feed-reader trawl than a headline with two potential meanings… and here’s a classic case from The Big Think: “Brazilian Model Could Feed The World“. Wow – has he/she started a gene-mod crops business with his/her superstar income? Or perhaps he/she is just very very large, and thus could be sliced up and distributed to the world’s most needy?

As you’ve probably guessed from my own headline, it’s nothing at all to do with a monstrous fifty-foot Brazilian catwalk star (which is slightly disappointing for the B-movie fans in the audience, I guess). As the target article at The Economist explains, the model in question is Brazil’s agricultural policies:

Even more striking than the fact of its success has been the manner of it. Brazil has followed more or less the opposite of the agro-pessimists’ prescription. For them, sustainability is the greatest virtue and is best achieved by encouraging small farms and organic practices. They frown on monocultures and chemical fertilisers. They like agricultural research but loathe genetically modified (GM) plants. They think it is more important for food to be sold on local than on international markets. Brazil’s farms are sustainable, too, thanks to abundant land and water. But they are many times the size even of American ones. Farmers buy inputs and sell crops on a scale that makes sense only if there are world markets for them. And they depend critically on new technology. As the briefing explains, Brazil’s progress has been underpinned by the state agricultural-research company and pushed forward by GM crops. Brazil represents a clear alternative to the growing belief that, in farming, small and organic are beautiful.

That alternative commands respect for three reasons. First, it is magnificently productive. It is not too much to talk about a miracle, and one that has been achieved without the huge state subsidies that prop up farmers in Europe and America. Second, the Brazilian way of farming is more likely to do good in the poorest countries of Africa and Asia. Brazil’s climate is tropical, like theirs. Its success was built partly on improving grasses from Africa and cattle from India. Of course there are myriad reasons why its way of farming will not translate easily, notably that its success was achieved at a time when the climate was relatively stable whereas now uncertainty looms. Still, the basic ingredients of Brazil’s success—agricultural research, capital-intensive large farms, openness to trade and to new farming techniques—should work elsewhere.

Nothing new about people giving the big-ups to sustainable farming, of course… but to see it lauded in a venue like The Economist (alongside an admission that there’s a food crisis on the way, and that the Demographic Formerly Known As The First World is in the firing line too) is a new one, at least to me. Are we seeing a shift in attitude in business and government – a recognition that the long game is the only one in town, if you want there still to be a town when the game is over?

Presenting the fact and fiction of tomorrow since 2001