
H+ trailer: a post-McLuhanist reading

So, this has been doing the rounds since its release at SDCC (which – going by what I’ve seen of it from blogs, Twitter and elsewhere – is less a convention and more some sort of fundamental rupture of reality that lets a million weird facets of pop culture manifest in the material world for a weekend); my first spot of it was at SF Signal, so they get the hat-tip. It’s the trailer for a forthcoming web-native series called H+.

And here’s the blurb for those of you who can’t or won’t watch videos:

H+: The Digital Series takes viewers on a journey into an apocalyptic future where technology has begun to spiral out of control…a future where 33% of the world’s population has retired its cell phones and laptops in favor of a stunning new device – an implanted computer system called H+.

This tiny tool allows the user’s own mind and nervous system to be connected to the Internet 24 hours a day. But something else is coming… something dark and vicious… and within seconds, billions of people will be dead… opening the door to radical changes in the political and social landscape of the planet — prompting survivors to make sense of what went wrong.

Hmmm. So, what can we take from this? First off, “H+” or human augmentation as a cultural meme is strong enough on the geek fringes that someone thinks it’s a marketable theme for popular drama; this in itself is a very interesting development from the perspective of someone who chronicles and observes the transhumanist movement(s), because it’s a sign that traditionally science fictional or cyberpunkish ideas are being presented as both plausible and imminent*. Meme’s gonna go mainstream, yo.

Secondly, and less surprisingly, the underlying premise appears to be The Hubris Of Technology Will All But Annihilate Our Species, with a side-serving/undercurrent of Moral Panic. Handwringing over the potentially corrosive-to-civilisation properties of social media is common currency (as regular readers will be only too aware already), which means the soil is well-tilled for the seed of Bryan Singer’s series; it’s a contemporary twist on the age-old apocalypse riff, and that never gets old. Too early to tell whether the Hairshirt Back-To-The-Earth philosophy is going to be used as a solution paradigm, but I’d be willing to put money on it making a significant showing. This is disappointing, but inevitable; as Kyle Munkittrick points out in his brief overview of the new Captain America movie, comics and Hollywood default to portraying human augmentation as either an accident born of scientific hubris or the tainted product of a Frankensteinian corporation:

In what seems like every other superhero origin story, powers are acquired through scientific hubris. Be it the unintended consequences of splitting the atom, tinkering with genetics, or trying to access some heretofore unknown dimension, comic book heroes invariably arise by accident.

[…]

Normally, those who seek superpowers are unworthy because they believe they deserve to be better than others, thus, the experiments go wrong.

Yeah, that’s about right. And the choice of series title is very fortuitous; the avalanche of early responses drawing analogies to Google+ has probably already started on the basis of that trailer alone, which is going to annoy me just as much as Googlephobia does. I’ve been rereading Marshall McLuhan lately (in part so I could write a piece for his 100th birthday at Wired UK), and was struck by how calmly and persistently he insisted that making moral judgements of technologies was futile; indeed, he took the position that by spending less effort on judging our technologies, we might clear the moral fog that exists around our actual lives. In McLuhan’s thought, media are extensions of ourselves into time and space; it seems to me that the biggest problem they cause isn’t a moral degradation of humanity, but the provision of a convenient proxy to blame our human problems on: it woz the intertubes wot dun it.

There is an inevitability to the technological moral panic as popular narrative, though, and that’s underlined by its admirable persistence over time – as TechDirt‘s Mike Masnick reminds us, such panics are at least as old as Gutenberg’s printing press (and we’re still here, as yet untoppled by our technological revolutions). Masnick also links to a WSJ blog piece that bounces off the research of one Genevieve Bell, director of Intel Corporation’s Interaction and Experience Research, who reiterates the persistence of the technological moral panic over time, and points out that it tends to locate itself in the bodies of women and children:

There was, she says, an initial pushback about electrifying homes in the U.S.: “If you electrify homes you will make women and children vulnerable. Predators will be able to tell if they are home because the light will be on, and you will be able to see them. So electricity is going to make women vulnerable. Oh and children will be visible too and it will be predators, who seem to be lurking everywhere, who will attack.

“There was some wonderful stuff about [railway trains] too in the U.S., that women’s bodies were not designed to go at 50 miles an hour. Our uteruses would fly out of our bodies as they were accelerated to that speed.”

She has a sort of work-in-progress theory to work out which technologies will trigger panic, and which will not.

  • It has to change your relationship to time.
  • It has to change your relationship to space.
  • It has to change your relationship to other people.

And, says Ms. Bell, it has to hit all three, or at least have the potential to hit them.

Interesting stuff, including a riff on comedy as a feedback loop in culture that enables us to negotiate and police the boundaries of what is acceptable with a new technology or medium. But as Bell points out, the march of technological change won’t wait for us to catch up with it; this state of technological angst has persisted for centuries, and will likely persist for as long as we remain a technologised species. Which means the doomsayers (and doomsayer media like H+) ain’t going anywhere… but going on past form, I’m going to assume we’ll find a way to ride it out and roll with the punches.

And just in case you were expecting a more standard blogger response to a web series trailer: yeah, I’ll probably watch H+, at least for long enough to see if it’s a good story well told; it looks like it might well be, regardless of the source of the narrative hook.

What about you?

[ * Which isn’t to say that the plot device in H+ will necessarily be scientifically plausible as it gets presented. Indeed, I rather suspect there’ll be some Unified Quantum Handwave Theory and/or Unobtainium involved… but the portrayal of social media as an internalised technology in the human body within a contemporary fictional milieu? That’s something I’ve not seen anywhere other than text media (books, stories, comics) thus far. ]

H+ zero-day vulnerabilities, plus cetacean personhood

Couple of interesting nuggets here; first up is a piece from Richard Yonck at H+ Magazine on the risks inherent to the human body becoming an augmented and extended platform for technologies, which regular readers will recognise as a fugue on one of my favourite themes, Everything Can And Will Be Hacked. Better lock down your superuser privileges, folks…

In coming years, numerous devices and technologies will become available that make all manner of wireless communications possible in or on our bodies. The standards for Body Area Networks (BANs) are being established by the IEEE 802.15.6 task group. These types of devices will create low-power in-body and on-body nodes for a variety of medical and non-medical applications. For instance, medical uses might include vital signs monitoring, glucose monitors and insulin pumps, and prosthetic limbs. Non-medical applications could include life logging, gaming and social networking. Clearly, all of these have the potential for informational and personal security risks. While IEEE 802.15.6 establishes different levels of authentication and encryption for these types of devices, this alone is no guarantee of security. As we’ve seen repeatedly, unanticipated weaknesses in program logic can come to light years after equipment and software are in place. Methods for safely and securely updating these devices will be essential due to the critical nature of what they do. Obviously, a malfunctioning software update for something as critical as an implantable insulin pump could have devastating consequences.
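To make Yonck’s point about “safely and securely updating these devices” a bit more concrete, here’s a rough sketch of the sort of check an implanted gadget would want to run before swallowing new firmware. Every name in it is hypothetical and it borrows nothing from the actual 802.15.6 spec – think of it as the minimum viable paranoia, not a real design:

    # Minimal, purely illustrative sketch (all names hypothetical) of an
    # authenticated firmware update check for an implanted device.
    # A real Body Area Network device would more likely use asymmetric
    # signatures and a hardware root of trust rather than a provisioned
    # shared secret, but the shape of the check is the same: never apply
    # an update whose authenticity and integrity you cannot verify.
    import hmac
    import hashlib

    # Secret provisioned into the device at manufacture time (hypothetical).
    DEVICE_SHARED_SECRET = b"replace-with-per-device-provisioned-secret"

    def verify_update(firmware_image: bytes, vendor_tag: bytes) -> bool:
        """Return True only if the vendor's authentication tag matches the image."""
        expected = hmac.new(DEVICE_SHARED_SECRET, firmware_image, hashlib.sha256).digest()
        # Constant-time comparison avoids leaking how many bytes matched.
        return hmac.compare_digest(expected, vendor_tag)

    def apply_update(firmware_image: bytes, vendor_tag: bytes) -> None:
        if not verify_update(firmware_image, vendor_tag):
            # Fail closed: keep running the current, known-good firmware.
            raise ValueError("update rejected: authentication tag does not match")
        # ...hand the verified image over to the device's bootloader here...

The details would obviously differ wildly between a glucose monitor and a neural interface; the point is simply that “fail closed” has to be the default posture when the platform being patched is your own body.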

Yonck then riffs on the biotech threat for a while; I’m personally less worried about the existential risk of rogue biohackers releasing lethal plagues, because the very technologies that make that possible are also making it much easier to defeat those sorts of pandemics. (I’m more worried about a nation-state releasing one by mistake, to be honest; there’s precedent, after all.)

Of more interest to me (for an assortment of reasons, not least of which is a novel-scale project that’s been percolating at the back of my brainmeat for some time) is his examination of the senses as equivalent to ‘ports’ in a computer system; those I/O channels are ripe for all sorts of hackery and exploits, and the arrival of augmented reality and brain-machine interfaces will provide incredibly tempting targets, be it for commerce or just for the lulz. Given it’s taken less than a week for the self-referential SEO hucksters and social media ‘gurus’ (read: douchebags) to infest the grouting between the circles of Google+, forewarned is surely forearmed… and early-adopterdom won’t be much of a defence. (As if it ever was.)

Meanwhile, a post at R U Sirius’ new zine ACCELER8OR (which, given its lack of by-line, I assume to be the work of The Man Himself) details the latest batch of research into advanced sentience in cetaceans. We’ve talked about dolphin personhood before, and while my objections to the enshrinement of non-human personhood persist (I think we’re wasting time by trying to get people to acknowledge the rights of higher animals when we’ve still not managed to get everyone to acknowledge the rights of their fellow humans regardless of race, creed or class), it’s still inspiring and fascinating to consider that, after years of looking into space for another sentient species to make contact with, there’s been one swimming around in the oceans all along.

Dovetailing with Yonck’s article above, this piece extrapolates onward to discuss the emancipation of sentient machines. (What if your AI-AR firewall system suddenly started demanding a five-day working week?)

A recent Forbes blog poses a key question on the issue of AI civil rights: if an AI can learn and understand its programming, and possibly even alter the algorithms that control its behavior and purpose, is it really conscious in the same way that humans are? If an AI can be programmed in such a fashion, is it really sentient in the same way that humans are?

Even putting aside the hard question of consciousness, should the hypothetical AIs of mid-century have the same rights as humans?  The ability to vote and own property? Get married? To each other? To humans? Such questions would make the current gay rights controversy look like an episode of “The Brady Bunch.”

Of course, this may all be a moot point given the existential risks faced by humanity (for example, nuclear annihilation) as elucidated by Oxford philosopher Nick Bostrom and others. Or, our AIs actually do become sentient, self-reprogram themselves, and “20 minutes later,” the technological singularity occurs (as originally conceived by Vernor Vinge).

Give me liberty or give me death? Until an AI or dolphin can communicate this sentiment to us, we can’t prove if they can even conceptualize such concepts as “liberty” or “death.” Nor are dolphins about to take up arms anytime soon even if they wanted to — unless they somehow steal prosthetic hands in a “Day of the Dolphin”-like scenario and go rogue on humanity.

It would be mighty sad were things to come to that… but is anyone else thinking “that would make a brilliant movie”?