
H+ zero-day vulnerabilities, plus cetacean personhood

Couple of interesting nuggets here; first up is a piece from Richard Yonck at H+ Magazine on the risks inherent to the human body becoming an augmented and extended platform for technologies, which regular readers will recognise as a fugue on one of my favourite themes, Everything Can And Will Be Hacked. Better lock down your superuser privileges, folks…

In coming years, numerous devices and technologies will become available that make all manner of wireless communications possible in or on our bodies. The standards for Body Area Networks (BANs) are being established by the IEEE 802.15.6 task group. These types of devices will create low-power in-body and on-body nodes for a variety of medical and non-medical applications. For instance, medical uses might include vital signs monitoring, glucose monitors and insulin pumps, and prosthetic limbs. Non-medical applications could include life logging, gaming and social networking. Clearly, all of these have the potential for informational and personal security risks. While IEEE 802.15.6 establishes different levels of authentication and encryption for these types of devices, this alone is no guarantee of security. As we’ve seen repeatedly, unanticipated weaknesses in program logic can come to light years after equipment and software are in place. Methods for safely and securely updating these devices will be essential due to the critical nature of what they do. Obviously, a malfunctioning software update for something as critical as an implantable insulin pump could have devastating consequences.
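To make that update problem concrete, here's a minimal sketch in Python of the sort of check an implanted device would want to run before accepting new firmware: authenticate first, fail closed if the check doesn't pass. Everything here (the key, the function names, the use of a symmetric HMAC) is illustrative rather than anything drawn from the 802.15.6 spec; a real device would more plausibly verify an asymmetric vendor signature, ideally in a secure element.

```python
# Minimal sketch: authenticating a firmware update for a hypothetical
# body-area-network device before flashing it. Symmetric HMAC keeps the
# example short; real hardware would likely verify a vendor signature.
import hmac
import hashlib

DEVICE_KEY = b"provisioned-at-manufacture"  # hypothetical shared secret

def verify_update(firmware_image: bytes, vendor_tag: bytes) -> bool:
    """Accept the image only if the vendor's HMAC tag matches."""
    expected = hmac.new(DEVICE_KEY, firmware_image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, vendor_tag)

def apply_update(firmware_image: bytes, vendor_tag: bytes) -> None:
    if not verify_update(firmware_image, vendor_tag):
        # Fail closed: keep running the current firmware rather than
        # flashing anything unauthenticated onto an insulin pump.
        raise ValueError("update rejected: authentication failed")
    # flash_to_device(firmware_image)  # device-specific, omitted here
```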

Yonck then riffs on the biotech threat for a while; I’m personally less worried about the existential risk of rogue biohackers releasing lethal plagues, because the very technologies that make that possible are also making it much easier to defeat those sorts of pandemics. (I’m more worried about a nation-state releasing one by mistake, to be honest; there’s precedent, after all.)

Of more interest to me (for an assortment of reasons, not least of which is a novel-scale project that’s been percolating at the back of my brainmeat for some time) is his examination of the senses as equivalent to ‘ports’ in a computer system; those I/O channels are ripe for all sorts of hackery and exploits, and the arrival of augmented reality and brain-machine interfaces will provide incredibly tempting targets, be it for commerce or just for the lulz. Given it’s taken less than a week for the self-referential SEO hucksters and social media guru douchebags to infest the grouting between the circles of Google+, forewarned is surely forearmed… and early-adopterdom won’t be much of a defence. (As if it ever was.)

Meanwhile, a post at R U Sirius’ new zine ACCELER8OR (which, given its lack of by-line, I assume to be the work of The Man Himself) details the latest batch of research into advanced sentience in cetaceans. We’ve talked about dolphin personhood before, and while my objections to the enshrinement of non-human personhood persist (I think we’re wasting time by trying to get people to acknowledge the rights of higher animals when we’ve still not managed to get everyone to acknowledge the rights of their fellow humans regardless of race, creed or class) it’s still inspiring and fascinating to consider that, after years of looking into space for another sentient species to make contact with, there’s been one swimming around in the oceans all along.

Dovetailing with Yonck’s article above, this piece extrapolates onward to discuss the emancipation of sentient machines. (What if your AI-AR firewall system suddenly started demanding a five-day working week?)

A recent Forbes blog poses a key question on the issue of AI civil rights: if an AI can learn and understand its programming, and possibly even alter the algorithms that control its behavior and purpose, is it really conscious in the same way that humans are? If an AI can be programmed in such a fashion, is it really sentient in the same way that humans are?

Even putting aside the hard question of consciousness, should the hypothetical AIs of mid-century have the same rights as humans?  The ability to vote and own property? Get married? To each other? To humans? Such questions would make the current gay rights controversy look like an episode of “The Brady Bunch.”

Of course, this may all be a moot point given the existential risks faced by humanity (for example, nuclear annihilation) as elucidated by Oxford philosopher Nick Bostrom and others. Or, our AIs actually do become sentient, self-reprogram themselves, and “20 minutes later,” the technological singularity occurs (as originally conceived by Vernor Vinge).

Give me liberty or give me death? Until an AI or dolphin can communicate this sentiment to us, we can’t prove if they can even conceptualize such concepts as “liberty” or “death.” Nor are dolphins about to take up arms anytime soon even if they wanted to — unless they somehow steal prosthetic hands in a “Day of the Dolphin”-like scenario and go rogue on humanity.

It would be mighty sad were things to come to that… but is anyone else thinking “that would make a brilliant movie”?

Garage ribofunk going mainstream

Interesting to see it’s taken less than a year for coverage of DIY molecular biology to graduate from the comparative fringedom of H+ Magazine to a mainstream science publication like Nature [via SlashDot]. Notable lack of scare-stories and hand-wringing involved, too… though I suspect we’ll have this meme picked up by the tabloids before the end of the year; that’s a nice juicy OMG-terror-security-panic!!1 story just waiting to shift units to the easily frightened, right there.

What’s impressive is the level of sophistication involved, which (as others have pointed out) mimics the enthusiastic adoption of home computing by the cutting edge of geek enthusiasts back in the day:

Many traditional scientists are circumspect. “I think there’s been a lot of overhyped and enthusiastic writing about this,” says Christopher Kelty, an anthropologist at the University of California, Los Angeles, who has followed the field. “Things are very much at the beginning stages.” Critics of DIY biology are also dubious about whether there is an extensive market for garage molecular biology. No one needs a PCR machine at home, and the accoutrements to biological research are expensive, even if their prices fall daily. Then again, the same was said about personal computers, says George Church, a geneticist at Harvard Medical School in Boston, Massachusetts. As a schoolboy, he says, he saw his first computer and fell in love. “Everybody looked at me like, ‘Why on earth would you even want to have one of those?'”

[…]

No one knows how many of those 2,000 are serious practitioners — Bobe jokes that 30% are spammers and the other 70% are law-enforcement officials keeping tabs on the community. But many DIY communities are coalescing: not only in Cambridge, but also in New York, San Francisco, London, Paris and the Netherlands. Some of these aim to develop community lab spaces with equipment that users could share for a monthly fee. And several are already affiliated with local ‘hacker spaces’, which provide such services to electronics enthusiasts. For example, the New York DIYbio group meets every week at the work-space of an electronics-hacker collective called NYC Resistor, which now has a few pieces of basic molecular biology equipment, including a PCR machine.

Of course, there are real risks that come with the growth of a movement like this, but there’s also a whole lot of potential, which I think outweighs the risks if they’re managed sensibly (i.e. by oversight, transparency and strong networked communities, rather than by blanket bans and heavy-handed restrictions that would drive the movement underground, as well as potentially into a position of political radicalism). Viewed in parallel with the surge of interest in 3D printing and electromechanical hacktivism (both of which are spreading very fast, alongside the hacker spaces that house them), things don’t look entirely unlike some unpublished proto-prequel to Bruce Sterling’s Schismatrix. Who will you be: Mechanist or Shaper?

Replacement arms: mechanical or biological?

Prosthetic limbs are still in their infancy, but there’s a lot of progress being made: Johns Hopkins Applied Physics Laboratory is working with DARPA (who else?), and has a research grant for trying out their mind-controlled modular prosthetic arm on five test subjects over the next couple of years [via SlashDot]:

Phase III testing – human subjects testing – will be used to tweak the system, both improving neural control over the limb and optimizing the algorithms which generate sensory feedback. The Modular Prosthetic Limb (MPL) is the product of years of prototype design – it includes 22 degrees of motion, allows independent control of all five fingers, and weighs the same as a natural human arm (about nine pounds). Patients will control the MPL with a surgically implanted microarray which records action potentials directly from the motor cortex.

Researchers plan to install the first system into a quadriplegic patient; while amputees can be outfitted with traditional prostheses, the MPL will be the first artificial limb that can sidestep spinal cord injury by plugging directly into the brain.
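To give a flavour of the decoding step buried in that description (spike counts in, joint commands out), here’s a toy linear decoder in Python. The channel count, joint list and weights are all invented for illustration; real BCI decoders are calibrated per patient and are considerably more sophisticated than a single matrix multiply.

```python
# Toy sketch of neural decoding: map one time-bin of motor cortex firing
# rates onto per-joint velocity commands with a linear decoder.
import numpy as np

N_CHANNELS = 96        # hypothetical electrode count on the microarray
JOINTS = ["shoulder", "elbow", "wrist", "thumb", "index"]

rng = np.random.default_rng(0)
decoder_weights = rng.normal(size=(len(JOINTS), N_CHANNELS))  # stand-in for a fitted model

def decode(firing_rates: np.ndarray) -> dict:
    """Turn one bin of spike counts into velocity commands for each joint."""
    velocities = decoder_weights @ firing_rates
    return dict(zip(JOINTS, velocities))

# One 50 ms bin of simulated spike counts from the array:
print(decode(rng.poisson(lam=5, size=N_CHANNELS)))
```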

Great news, then, but it’s still a crude kludge compared to the original. Building a new biological limb from the ground up is way beyond our biotech capabilities as they stand… but our own bodies do a pretty good job of it when we’re developing in the womb, and young children can sometimes regrow fully functional fingertips lost to accidents. So why can’t we make like salamanders and just sprout replacement limbs? It’s a vexing question, and extremely clever people are working hard to figure out the answer. (You’ll have to go read the whole article, because it’s too full of proper science for one or two pulled paragraphs to do it justice.)

Blue-sky bioengineering on the DARPA drawing-board

If you’re looking for the sort of bat-shit Faustian gambles that form the backbone of much military science fiction, following the news from the Pentagon’s science and tech division is like supergluing your lips to a firehose… and Wired’s DangerRoom blog is one of the better consumer-level sources to start with (if you don’t mind a bit of snark on the side).

Here’s DangerRoom‘s Katie Drummond on DARPA’s latest wheeze: immortal synthetic organisms with a built-in molecular kill-switch. SRSLY.

As part of its budget for the next year, Darpa is investing $6 million into a project called BioDesign, with the goal of eliminating “the randomness of natural evolutionary advancement.” The plan would assemble the latest bio-tech knowledge to come up with living, breathing creatures that are genetically engineered to “produce the intended biological effect.” Darpa wants the organisms to be fortified with molecules that bolster cell resistance to death, so that the lab-monsters can “ultimately be programmed to live indefinitely.”

Of course, Darpa’s got to prevent the super-species from being swayed to do enemy work — so they’ll encode loyalty right into DNA, by developing genetically programmed locks to create “tamper proof” cells. Plus, the synthetic organism will be traceable, using some kind of DNA manipulation, “similar to a serial number on a handgun.” And if that doesn’t work, don’t worry. In case Darpa’s plan somehow goes horribly awry, they’re also tossing in a last-resort, genetically-coded kill switch:

“Develop strategies to create a synthetic organism ‘self-destruct’ option to be implemented upon nefarious removal of organism.”

The project comes as Darpa also plans to throw $20 million into a new synthetic biology program, and $7.5 million into “increasing by several decades the speed with which we sequence, analyze and functionally edit cellular genomes.”
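Just to restate the control logic being proposed there (and only the logic; nothing here pretends to say how you’d actually build it into a genome), a toy Python sketch: an organism with a traceable ID that self-destructs the moment its sanctioned environmental cue disappears. The “serial number” and “expected signal” are stand-ins of my own invention.

```python
# Purely illustrative: the kill-switch decision logic restated as code.
# The serial number and expected signal stand in for engineered genetic
# markers; none of this reflects real synthetic biology.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SyntheticOrganism:
    serial_number: str        # the "serial number on a handgun" idea
    expected_signal: str      # stand-in for an engineered environmental cue
    alive: bool = True

    def sense_environment(self, observed_signal: Optional[str]) -> None:
        """Fire the kill switch if the sanctioned environment's cue is gone."""
        if observed_signal != self.expected_signal:
            self.alive = False  # "self-destruct upon nefarious removal"

bug = SyntheticOrganism(serial_number="BIODESIGN-0001", expected_signal="lab-marker")
bug.sense_environment("lab-marker")   # still in the lab: nothing happens
bug.sense_environment(None)           # removed from the lab: kill switch fires
print(bug.serial_number, "alive:", bug.alive)
```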

That post goes on to quote a professor of biology, who’s keen to point out that DARPA’s view of evolution as a random string of events is going to prove a major stumbling block to any attempts to “improve” the process. As to what sort of genuine advantage over extant military technologies these synthetic organisms would have, the pertinent questions are absent, as are those dealing with the moral and ethical issues surrounding military meddling with fundamental biological processes, and the unexpected ways in which they might go wrong. And to hark back to an earlier post from today: would killing a bioengineered military organism be a legitimate act of war?

Also absent (but somewhat implicit, depending on your personal politics) are any observations that the world’s biggest military budget shows no sign of helping the US gain the upper hand against a nebulous and underfunded enemy armed predominantly with a fifty-year-old machine gun design and explosives expertise that’s a short step up from the Anarchist’s Cookbook… I’m all for wild ideas and blue-sky thinking, but I’m not sure they’re much use as a military panacea any more. The days of peace through superior firepower are long gone, and the more complex you make your weapons, the more likely they are to blow up in your face.