Tag Archives: cyborg

Replacement arms: mechanical or biological?

Prosthetic limbs are still in their infancy, but there’s a lot of progress being made: Johns Hopkins Applied Physics Laboratory is working with Darpa (who else?), and has a research grant to try out their mind-controlled modular prosthetic arm on five test subjects over the next couple of years [via SlashDot]:

Phase III testing – human subjects testing – will be used to tweak the system, both improving neural control over the limb and optimizing the algorithms which generate sensory feedback. The Modular Prosthetic Limb (MPL) is the product of years of prototype design – it includes 22 degrees of motion, allows independent control of all five fingers, and weighs the same as a natural human arm (about nine pounds). Patients will control the MPL with a surgically implanted microarray which records action potentials directly from the motor cortex.

Researchers plan to install the first system into a quadriplegic patient; while amputees can be outfitted with traditional prostheses, the MPL will be the first artificial limb that can sidestep spinal cord injury by plugging directly into the brain.

Great news, then, but it’s still a crude kludge compared to the original. Building a new biological limb from the ground up is way beyond our biotech capabilities as they stand… but our own bodies do a pretty good job of it when we’re developing in the womb, and young children can sometimes regrow fully functional fingertips lost to accidents. So why can’t we make like salamanders and just sprout replacement limbs? It’s a vexing question, and extremely clever people are working hard to find the answer. (You’ll have to go read the whole article, because it’s too full of proper science for one or two pulled paragraphs to do it justice.)
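The quoted piece doesn’t say which decoding algorithm the MPL’s microarray actually feeds, but for a flavour of how action potentials from motor cortex get turned into a movement command, here’s a toy sketch of the classic population-vector approach (Georgopoulos and colleagues): each neuron “votes” for its preferred movement direction in proportion to its firing rate, and the votes are summed. Every number here is invented for illustration – this is a back-of-envelope sketch, not the APL team’s actual method:

```python
import math
import random

def population_vector(rates, preferred_dirs, baseline, gain):
    """Toy population-vector decoder: weight each neuron's preferred
    direction by its normalized firing rate, sum the weighted vectors,
    and read off the angle of the resultant as the decoded direction."""
    x = y = 0.0
    for rate, theta in zip(rates, preferred_dirs):
        w = (rate - baseline) / gain       # deviation from baseline firing
        x += w * math.cos(theta)
        y += w * math.sin(theta)
    return math.atan2(y, x)

# Simulate 100 cosine-tuned neurons (plus noise) responding to an
# intended reach at 1.0 radians, then decode the intended direction.
rng = random.Random(0)
intended = 1.0
baseline, gain = 20.0, 10.0
prefs = [rng.uniform(-math.pi, math.pi) for _ in range(100)]
rates = [baseline + gain * math.cos(intended - p) + rng.gauss(0, 2.0)
         for p in prefs]
decoded = population_vector(rates, prefs, baseline, gain)
print(round(decoded, 2))
```

Even with noisy rates, a hundred neurons voting together recover the intended direction to within a few hundredths of a radian – which is the whole appeal of reading a population rather than any single cell.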

Bacterial biker jackets and after-market parts for people

This year seems like it’ll be the one where the mainstream starts talking about custom-made replacement organs as something more than science fiction. A few weeks back we heard about the rat who got a new set of lab-grown lungs; this week, Wired is running a photo-essay on bioprinting that’s a must-see for anyone who wants to be able to write a plausible description of the working environment of a contemporary Frankenstein.

Bioreactor - image credited to Dave Bullock/Wired.com

Meanwhile [via BoingBoing] Ecouterre reports on UK-based designer Suzanne Lee, who’s been using bacteria to grow an entire range of clothing from a rather mundane starting point – sweetened green tea. The end results are made entirely of cellulose, though they look (to me at least) like the skin of something that still slinks through radiation-soaked cities long after the posthumans abandoned Earth for the new terrain at the top of the gravity well…

Bio-couture jacket by Suzanne Lee

Organic ain’t yer only option, though, no sir. 3D printing means one-off custom designs of mechanical prosthetic limbs can be made for amputees or other folk with different levels of physical ability… and not just for us longpigs, either, as Oscar the cyborg cat ably demonstrates. 3D printing is still an unevenly distributed piece of the future, of course, but it’s spreading fast; Ponoko have just set up their first 3D print hub here in the UK, and if they can afford to do that in the current economic climate, the business model must have something going for it, right?

It’s interesting to see the organic and inorganic racing along in parallel like this; it doesn’t take a genius to see the possibilities of the two streams converging somewhere down the line, though I’d guess that’s a good few decades off from the present day. What’s interesting to me about these phenomena is the way they seem to be an end-game expression of the desire for individuality and customisation; at the moment, price will keep all but those with a serious need for these products out of the market, but as prices fall, everything will become bespoke, unique, a one-off. Which is kind of ironic if you think about it: through the total ubiquity of mechanised manufacture, we’re actually putting an end to mass production.

Copyright and the Eyeborg

Well, so much for my ability to see potential conflicts arising from new technologies; when we were talking about Rob “Eyeborg” Spence the other day, it never even occurred to me that live streaming video from a human-implanted camera would open a massive can of copyright worms.

… what happens when he goes to the movies? Or, what if he goes to a sporting event with an exclusive broadcast right?

Quite. Obviously it’s little more than a hypothetical issue at the moment, but wind forward a decade to a point where AR spex and similar hardware are as ubiquitous as smartphones are now, and you’ve got a real legal minefield around infringement techniques which will be difficult to police… just like we have right now, in other words, only more so.

At least we know the lawyers won’t be going hungry.

Canuck filmmaker considers streaming live video from his bionic eye

Well, this sidesteps the clunky implementations of lifelogging that we’ve seen so far. Rob Spence lost the vision in his right eye in a shooting accident, and decided to replace it with a small camera unit, making it onto Time Magazine‘s best inventions list for 2009 (even though they’ve only had the thing working properly for a short time).

Now Spence’s eye has a wi-fi transmitter that can stream its video output to a computer; from there, it’s a short step to making Spence’s field of vision a free-to-view live feed available to anyone with an internet connection [via SlashDot]. There are some minor technical issues to iron out first, though:

The prototype in the video provides low-res images, but an authentic experience of literally seeing through someone else’s perspective. The image is somewhat jerky and overhung by huge eyelashes; a blink throws everything out of whack for a half-second.

[…]

The Eyeborg prototype in the video, the third, can only work for an hour and a half on a fully charged battery. Its transmitter is quite weak, so Spence has to hold a receiving antenna to his cheek to get a clear signal. He muses that he should build a Seven of Nine-style eyepiece to house it. He’s experimenting with a new prototype that has a stronger transmitter, other frequencies and a booster on the receiver.

It surely won’t be all that long before equivalent hardware could be slipped into a fully-functional biological eye… possibly without the knowledge or permission of the eye’s owner. Which suggests that the tin-foil bonnet brigade will upgrade their fears of surveillance through compromised cell phones to a fear of covertly-implanted audio and video capture devices… hey, it could happen, man*.

[ * Though this assumes, as do most such paranoid conspiracy theories, a level of competence, clandestine secrecy and forward planning of which most nation-state governments seem utterly incapable. I wouldn’t credit the UK government with the ability to successfully tap a barrel of beer, let alone my eyesight… and if they did somehow pull it off, they’d only go and leave the footage on the back seat of a bus. ]

Maybe it doesn’t matter that the internet is “making us stupid”

High-profile internet naysayer and technology curmudgeon Nick Carr is cropping up all over the place; these things happen when one has a new book in the offing, y’know*. He’s the guy who claims that Google is making us stupid, that links embedded in HTML sap our ability to read and understand written content (cognitive penalties – a penalty that even the British can do properly, AMIRITE?), and much much more.

The conclusions of Carr’s new book, The Shallows – that, in essence, we’re acquiring a sort of attention deficit problem from being constantly immersed in a sea of bite-sized and interconnected info – have been given a few polite kickings, such as this one from Jonah Lehrer at the New York Times. I’ve not read The Shallows yet, though I plan to; nonetheless, from the quotes and reviews I’ve seen so far, it sounds to me like Carr is mapping the age-related degradation of his own mental faculties onto the world as a whole, and looking for something to blame.

I should add at this point that, although I disagree with a great number of Carr’s ideas, he’s a lucid thinker, and well worth reading. As Bruce Sterling points out, grumpy gadfly pundits like Carr are useful and necessary for a healthy scene, because the urge to prove them wrong drives further innovation, thinking, research and development. He’s at least as important and worth reading as the big-name webvangelists… who all naturally zapped back at Carr’s delinkification post with righteous wrath and snark. The joy of being a mere mortal is, surely, to watch from a safe point of vantage while the gods do battle… 😉

But back to the original point: there’s always a trade-off when we humans acquire new technologies or skills, and what’s missing from the commentary decrying these apparent losses is any suggestion that we might be gaining something else – maybe something better – as part of the deal; technological symbiosis is not a zero-sum game, in other words. Peripherally illustrating the point, George Dvorsky points to some research suggesting that too good a memory is actually an evolutionary dead end, at least for foraging mammals:

These guys have created one of the first computer models to take into account a creature’s ability to remember the locations of past foraging successes and revisit them.

Their model shows that in a changing environment, revisiting old haunts on a regular basis is not the best strategy for a forager.

It turns out instead that a better strategy is to inject an element of randomness into a regular foraging pattern. This improves foraging efficiency by a factor of up to 7, say Boyer and Walsh.

Clearly, creatures of habit are not as successful as their opportunistic cousins.

That makes sense. If you rely on the same set of fruit trees for sustenance, then you are in trouble if those trees die or are stripped by rivals. So the constant search for new sources of food pays off, even if it consumes large amounts of resources. “The model forager typically spends half of its traveling time revisiting previous places in an orderly way, an activity which is reminiscent of the travel routes used by real animals,” say Boyer and Walsh.

They conclude that memory is useful because it allows foragers to find food without the effort of searching. “But excessive memory use prevents the forager from updating its knowledge in rapidly changing environments,” they say.
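The quoted result is easy to reproduce in miniature. Here’s a toy simulation of my own construction (not Boyer and Walsh’s actual model, which puts the forager in continuous space): food sits at one site among many and relocates periodically, and a forager that injects randomness into its memory-driven revisits out-forages both the pure creature of habit and the purely random wanderer:

```python
import random

def forage(p_random, steps=5000, n_sites=20, relocate_every=100, seed=42):
    """Toy forager: one of n_sites holds food, and the food relocates
    every `relocate_every` steps (a changing environment). The forager
    remembers the last site where it found food; with probability
    p_random it explores a random site instead of revisiting it."""
    rng = random.Random(seed)
    food_site = rng.randrange(n_sites)
    remembered = None
    found = 0
    for t in range(steps):
        if t > 0 and t % relocate_every == 0:
            food_site = rng.randrange(n_sites)   # environment changes
        if remembered is None or rng.random() < p_random:
            visit = rng.randrange(n_sites)       # random exploration
        else:
            visit = remembered                   # revisit the old haunt
        if visit == food_site:
            found += 1
            remembered = visit
    return found

creature_of_habit = forage(p_random=0.0)   # always revisits once it remembers
mixed_strategy    = forage(p_random=0.5)   # memory plus injected randomness
pure_wanderer     = forage(p_random=1.0)   # never uses memory at all
print(creature_of_habit, mixed_strategy, pure_wanderer)
```

With these numbers the mixed strategy reliably beats both extremes: habit alone goes stale every time the food moves, while pure randomness throws away the free meals that memory provides between relocations.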

This reminds me of the central idea behind Peter Watts’ Blindsight – the implication that intelligence itself, which we tend to think of as the inevitable high pinnacle of evolutionary success, is actually a hideously inefficient means to genetic survival, and that as such, we’re something of an evolutionary dead end ourselves. Which reminds me in turn of my mention of evolutionary “arms races” the other day; perhaps, instead of being in an arms race against our own cultural and technological output as a species, we’re entering a sort of counterbalancing symbiosis with it. Should we start considering technology as a part of ourselves rather than a separate thing? Are we not merely a species of cyborgs, but a cyborg species?

[ * The irony here being that almost all the discussion and promotion of Carr’s work that does him any good occurs… guess where? Hint: not in brick’n’mortar bookstores. ]