Wired suggests supplementary skill-sets for the 21st Century

Dovetailing rather neatly with Kevin Kelly’s piece on technological literacy last week, Wired has an oddly-formatted but provocative piece that they’ve entitled 7 Essential Skills That You Didn’t Learn In College. Those seven skills are:

All fairly pertinent, and very Futurismic as well… though I’m not sure how “essential” the remix culture aspect is. I’m inclined to think – perhaps uncharitably – that anyone aspiring to be an artist or creator who hasn’t already grasped those basic truths by observing the world around them is never going to get it, no matter how clearly you spell it out for them. Am I being unfair?

It’s a decent enough list, but not exhaustive by any means – what would you add to it? Or, equally, what would you remove?

The (intellectual) hero’s journey: science/Tarot mash-up

Way to push a bunch of my geek-buttons at once: mapping Joseph Campbell’s ur-story of the hero’s journey onto the scientific method and philosophy, via the rich and deep symbolic interface of the Tarot deck [via Metafilter; more card images on the Science Tarot Facebook page].

Not sure how practical it would be (for science or for cartomancy – though as a po-mo method of storying science I think it’s a rather brilliant idea), nor whether some of the scientists portrayed would be particularly keen on the idea (can’t see ol’ Carl Sagan giving the thumbs-up to a grafting of occultism onto the tree of science, really). But for someone like myself, who spent a great deal of time immersed in the symbolic structures of occultism and science at the same time, it’s a rather charming synthesis – not to mention one of those “why didn’t I think of that?” moments.

In prosthetics, one size does not fit all

Here’s an interesting piece at Wired UK; a group of prosthetic limb specialists were doing some final user-satisfaction research, and discovered that the all-singing-all-dancing does-it-all-for-you system they’d made wasn’t what the users really wanted, and for very different reasons:

In addition to weakening physical control, MS often impairs attention and memory, and the complexity of the arm’s controls overwhelmed them. At that time, the arm’s sensors and AI were much more limited, and users were mostly frustrated by its complicated controls.

For these patients, according to Behal, something that might seem as simple as scratching their heads was a prolonged struggle. They needed something that took the guesswork of movement, rotation, and force out of the equation.

The quadriplegics at Orlando Health were the opposite. They were cognitively high-functioning, and some had experience with computers or video games. All had ample experience using assistive technology. Regardless of the extent of their disability or whether they were using a touchscreen, mouse, joystick, or voice controls, they preferred using the arm on manual. The more experience they had with tech, the happier they were.

Anyone with commercial tech experience, even if only in retail, will be aware that one size (or in this case, one functionality) very rarely fits all; interesting to see that revelation filtering into medical tech research. The more canny crews will start working closely with potential user groups earlier in the development cycle.

They’re being philosophical about it, though:

“We stay engaged when our capabilities are matched by our challenges and our opportunities,” Bricout said. If that balance tilts too far in one direction, we get anxious; if it tilts in the other, we get bored. Match them, and we’re at our happiest, most creative, and most productive.

Behal and Bricout hadn’t anticipated, for example, that users operating the arm using the manual mode would begin to show increased physical functionality.

“There’s rehabilitation potential here,” Bricout said. Thinking through multiple steps to coordinate and improve physical actions “activated latent physical and cognitive resources… It makes you rethink what rehabilitation itself might mean.”

There’s your silver lining, huh? But it’s still a bit depressing to see this as the closer:

“You have to listen to users,” Behal said. “If they don’t like using the technology, they won’t. Then it doesn’t matter how well it does its job.”

Why has it taken so long for that lesson to reach the technological wings of the academy? Still, better late than never, I guess…

Is transhumanism the most dangerous idea in the world? (Hint: probably not.)

Kyle Munkittrick is making waves over at the Discover Magazine Science Not Fiction blog; he decided to air the transhumanist movement’s ideas in a post entitled “The Most Dangerous Idea In The World”.

Given that Discover is a fairly mainstream (if geeky) publication, there was a fair bit of fervent push-back in the comments thread, so Munkittrick collected together the five most common riffs for rebuttal, creating one of the most lucid and reasonable “don’t panic” posts about transhumanism in a mainstream publication that I think I’ve ever seen. His bounce-back against accusations of [transhumanism=eugenics=evil] is particularly good, and broadly applicable:

Eugenics, like any technology, is neutral. “Eu” is actually the Greek root for “good.” The problem is that over history a lot of nasty people felt that they should be able to force their definition of “good” on others. Though Hitler is a common example, there was a eugenics program in the US for quite some time that coercively sterilized those deemed unworthy to reproduce, due to race, economic status, and mental condition. Both programs are considered “negative eugenics” in that they prevent unwanted individuals from reproducing. Positive eugenics is different in two key ways. The first is that it is entirely voluntary. Whether parents want to merely screen for potential diseases, fine-tune every detail of their child’s traits, or leave the whole thing to chance is their prerogative. The second difference is that there is no “ideal”–the process is open ended. Instead of eugenics having a state-decreed goal like blond hair and blue eyes, every parent would decide what is best for their child. As most people want healthy, intelligent, happy children, those traits are what would define the “good” of positive eugenics.

It’s interesting to watch transhumanism entering mainstream consciousness; there was that widely-linked “Open Letter to Christian Leaders on Biotechnology and The Future Of Man” doing the rounds a week or so ago, and it’s a topic that keeps cropping up in non-geek media channels with increasing regularity, probably because it pushes every future-shock techno-fear button on the switchboard.

It’s also going to be interesting to watch how transhumanism reacts to increased scrutiny, because it’s a long way from being a monoculture. The last few years have seen the more serious and level-headed advocates (I’m thinking of folk like George Dvorsky and Mike Anissimov, who are the two I’ve been reading for the longest) working hard to present a coherent, rational and non-incendiary platform for debate… but just as with any subculture, there are some real oddballs in the architecture, and it’s the cranks who tend to shout loudest and attract attention, often negative. Interesting times ahead…

Bonus: Michael Anissimov points to Eliezer Yudkowsky’s “5 minute introduction” to the concept of the Technological Singularity, which is also pretty plainly-put. Of course, the Technological Singularity shouldn’t be conflated with transhumanism, but it’s a closely related idea, and is sometimes treated as an ideology rather than a theory by those more vocal and marginal elements to whom I referred earlier… so it behoves the wise to understand both as best they can. 🙂

Cortical coprocessors: an outboard OS for the brain

The last time I remember encountering the word “coprocessor” was when my father bought himself a 486DX system with all the bells and whistles, some time back in the nineties. Now it’s doing the rounds in this widely-linked Technology Review article about brain-function bolt-ons; it’s a fairly serious examination of the possibilities of augmenting our mind-meat with technology, and well worth a read. Here’s a snippet:

Given the ever-increasing number of brain readout and control technologies available, a generalized brain coprocessor architecture could be enabled by defining common interfaces governing how component technologies talk to one another, as well as an “operating system” that defines how the overall system works as a unified whole–analogous to the way personal computers govern the interaction of their component hard drives, memories, processors, and displays. Such a brain coprocessor platform could facilitate innovation by enabling neuroengineers to focus on neural prosthetics at an algorithmic level, much as a computer programmer can work on a computer at a conceptual level without having to plan the fate of every individual bit. In addition, if new technologies come along, e.g., a new kind of neural recording technology, they could be incorporated into a system, and in principle rapidly coupled to existing computation and perturbation methods, without requiring the heavy readaptation of those other components.
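That “common interfaces plus operating system” idea is the same plug-in pattern ordinary software uses to decouple components. Purely as an illustrative sketch – none of these class or function names come from the article, and real neural interfaces are vastly more complex – here is how a new recording technology could be dropped into such a platform without touching the algorithms already running on it:

```python
from abc import ABC, abstractmethod

class RecordingDevice(ABC):
    """Hypothetical common interface: any readout technology plugs in here."""
    @abstractmethod
    def name(self) -> str: ...
    @abstractmethod
    def read(self) -> list:
        """Return one sample window of signal values."""

class CoprocessorOS:
    """Toy 'operating system': routes every registered recorder's output
    to every registered algorithm, so neither side knows the other's internals."""
    def __init__(self):
        self.recorders = []
        self.algorithms = []

    def register_recorder(self, recorder: RecordingDevice):
        self.recorders.append(recorder)

    def register_algorithm(self, fn):
        self.algorithms.append(fn)

    def tick(self) -> dict:
        """One scheduling cycle: run all algorithms over all recorder outputs."""
        results = {}
        for recorder in self.recorders:
            signal = recorder.read()
            for fn in self.algorithms:
                results[(recorder.name(), fn.__name__)] = fn(signal)
        return results

# A stand-in for "a new kind of neural recording technology":
class FakeEEG(RecordingDevice):
    def name(self) -> str:
        return "fake-eeg"
    def read(self) -> list:
        return [0.1, 0.4, 0.2]  # canned sample window

# A stand-in processing algorithm, written against the interface only:
def mean_amplitude(signal):
    return sum(signal) / len(signal)

os_ = CoprocessorOS()
os_.register_recorder(FakeEEG())
os_.register_algorithm(mean_amplitude)
print(os_.tick())
```

Swapping in a different `RecordingDevice` subclass requires no change to `mean_amplitude` or to the scheduler – which is the “rapid coupling without heavy readaptation” the excerpt is gesturing at.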

Of course, the idea of a brain OS brings with it the inevitability of competing OSs in the marketplace… including a widely-used commercial product that needs patching once a week so that dodgy urban billboards can’t trojan your cerebellum and turn you into an unwitting evangelist for under-the-counter medicines and fake watches; an increasingly-popular slick-looking solution with a price-tag (and aspirational marketing) to match; and a plethora of forked open-source systems whose proponents can’t understand why their geeky obsession with being able to adjust the tiniest settings effectively excludes the wider audience they’d love to reach. Those “I’m a Mac / I’m a PC” ads will get a whole new lease of remixed and self-referential life…

Presenting the fact and fiction of tomorrow since 2001