‘One of the problems facing video game writing is a systemic failure to place games in their correct historical context’
What this generally means is that writers fail to open their reviews with a lengthy diatribe on the history of this or that genre. While I think that there is definitely a place for that type of opening and am quite partial to it myself, I think that the real problem of context is far more local and far less high-minded. The true problem of context is that how you experience a particular video game is likely to be determined by the games you played immediately before. For example, if you move from playing one version of Civilization to the next, then the thing most likely to stand out is the developers’ latest fine-tuning of the game’s basic formula. Conversely, if you pick up Civilization V after Europa Universalis III, you will most likely be struck by the weakness of the AI and the lack of control you have over your own economy. Aesthetic reactions, like all reactions, are highly contextual. This much was evident in the reaction to Eidos Montreal’s recent reboot of the Deus Ex franchise, Deus Ex: Human Revolution.
So, this has been doing the rounds since its release at SDCC (which – given what I’ve seen of it from blogs, Twitter and elsewhere – is less a convention and more some sort of fundamental rupture of reality that lets a million weird facets of pop culture manifest in the material world for a weekend); I first spotted it at SF Signal, so they get the hat-tip. It’s the trailer for a forthcoming web-native series called H+…
And here’s the blurb for those of you who can’t or won’t watch videos:
H+: The Digital Series takes viewers on a journey into an apocalyptic future where technology has begun to spiral out of control…a future where 33% of the world’s population has retired its cell phones and laptops in favor of a stunning new device – an implanted computer system called H+.
This tiny tool allows the user’s own mind and nervous system to be connected to the Internet 24 hours a day. But something else is coming… something dark and vicious… and within seconds, billions of people will be dead… opening the door to radical changes in the political and social landscape of the planet — prompting survivors to make sense of what went wrong.
Hmmm. So, what can we take from this? First off, “H+” or human augmentation as a cultural meme is strong enough on the geek fringes that someone thinks it’s a marketable theme for popular drama; this in itself is a very interesting development from the perspective of someone who chronicles and observes the transhumanist movement(s), because it’s a sign that traditionally science fictional or cyberpunkish ideas are being presented as both plausible and imminent*. Meme’s gonna go mainstream, yo.
Secondly, and less surprisingly, the underlying premise appears to be The Hubris Of Technology Will All But Annihilate Our Species, with a sideserving/undercurrent of Moral Panic. Handwringing over the potentially corrosive-to-civilisation properties of social media is common currency (as regular readers will be only too aware already), which means the soil is well-tilled for the seed of Singer’s series; it’s a contemporary twist on the age-old apocalypse riff, and that never gets old. It’s too early to tell whether the Hairshirt Back-To-The-Earth philosophy is going to be used as the solution paradigm, but I’d be willing to put money on it making a significant showing. This is disappointing, but inevitable; as Kyle Munkittrick points out in his brief overview of the new Captain America movie, comics and Hollywood default to the portrayal of human augmentation as either an accident born of scientific hubris or the tainted product of a Frankensteinian corporation:
In what seems like every other superhero origin story, powers are acquired through scientific hubris. Be it the unintended consequences of splitting the atom, tinkering with genetics, or trying to access some heretofore unknown dimension, comic book heroes invariably arise by accident.
Normally, those who seek superpowers are unworthy because they believe they deserve to be better than others, thus, the experiments go wrong.
Yeah, that’s about right. And the choice of series title is very fortuitous; the avalanche of early responses drawing analogies to Google+ has probably already started on the basis of that trailer alone, which is going to annoy me just as much as Googlephobia does. I’ve been rereading Marshall McLuhan lately (in part so I could write a piece for his 100th birthday at Wired UK), and was struck by how calmly and persistently he insisted that making moral judgements of technologies was futile; indeed, he took the position that by spending less effort on judging our technologies, we might clear the moral fog that exists around our actual lives. In McLuhan’s thought, media are extensions of ourselves into time and space; it seems to me that the biggest problem they cause isn’t a moral degradation of humanity, but the provision of a convenient proxy to blame our human problems on: it woz the intertubes wot dun it.
There was, she says, an initial pushback about electrifying homes in the U.S.: “If you electrify homes you will make women and children vulnerable. Predators will be able to tell if they are home because the light will be on, and you will be able to see them. So electricity is going to make women vulnerable. Oh and children will be visible too and it will be predators, who seem to be lurking everywhere, who will attack.
“There was some wonderful stuff about [railway trains] too in the U.S., that women’s bodies were not designed to go at 50 miles an hour. Our uteruses would fly out of our bodies as they were accelerated to that speed.”
She has a sort of work-in-progress theory for predicting which technologies will trigger panic, and which will not.
It has to change your relationship to time.
It has to change your relationship to space.
It has to change your relationship to other people.
And, says Ms. Bell, it has to hit all three, or at least have the potential to hit them.
Interesting stuff, including a riff on comedy as a feedback loop in culture that enables us to control and mitigate the boundaries of what is acceptable with a new technology or medium. But as Bell points out, the march of technological change won’t wait for us to catch up with it; this state of technological angst has persisted for centuries, and will likely persist for as long as we remain a technologised species. Which means the doomsayers (and doomsayer media like H+) ain’t going anywhere… but going on past form, I’m going to assume we’ll find a way to ride it out and roll with the punches.
And just in case you were expecting a more standard blogger response to a television series trailer: yeah, I’ll probably watch H+, at least for long enough to see if it’s a good story well told; it looks like it might well be, regardless of the source of the narrative hook.
What about you?
[ * Which isn’t to say that the plot device in H+ will necessarily be scientifically plausible as it gets presented. Indeed, I rather suspect there’ll be some Unified Quantum Handwave Theory and/or Unobtainium involved… but the portrayal of social media as an internalised technology in the human body within a contemporary fictional milieu? That’s something I’ve not seen anywhere other than text media (books, stories, comics) thus far. ]
It’s been half a year since I had to stop buying fiction to publish here, and it still nags at me every time I come to check the site for comments or write a new post. I’m very conscious that Futurismic filled a rather unique niche in the sf ecosystem; strictly near-future, almost Mundane science fiction stories still seem pretty rare elsewhere, and I was proud to be giving a place to interesting writers, new or old.
Still, I have hope that a change in my employment patterns over the next six months will allow me enough spare cash to start publishing new stories once again… though I have no idea how I’ll find the time to manage the slush pile alongside everything else I’ll be doing. In the meantime, though, I can at least link out to the sort of thing I might have published, had I been in a position to do so… things like “My Grandfather’s Skeleton” by Kiyash Monsef, which he emailed me a link to not long ago. It’s simple, poignant and not too long, and I think you should go and read it. Here’s the animated ‘cover art’ for it, and the first few passages:
Grandpa was missing.
Sometime in the night, he’d gotten up, unhooked himself from a variety of instruments and medicated drips, and walked out of the hospital, and no one knew why, and no one knew where he was.
Dad and Mom, after getting the phone call at three in the morning, told me I should just go to school as usual and let them handle it. That morning, while my parents gave a description to a pair of police officers, I rode my bike to school, half expecting to see Grandpa sitting by the side of the road somewhere in a hospital gown.
I kept my phone on all morning, but there was no news. Grandpa Lucas had disappeared, and with his heart already feeble, each passing moment made it more and more likely that we would not see him alive again. It was impossible to pay any attention in class, and at noon I gave up and rode home.
We privileged early-adopter types are increasingly accustomed to our technology becoming obsolete… but what happens when the technology in question is actually a physically-embedded part of you? Suddenly your upgrade path is a little trickier than hopping on a Boris-Bike and going to your nearest Apple store. Tim Maly points out the risky side of early-adopter human augmentation tech:
On the ground, the realities of the only brain-mounted interface I know of – cochlear implants – are brutal. Here’s a taste: You can’t hear music. For a sense of what that’s like, try these demos. The terrifying truth is that once you’ve signed up for one kind of enhancement (say, the 16 electrode surgery) it’s very hard to upgrade, even if Moore’s law ends up applying to electrode counts and the fidelity of hearing tech.
If you are an early adopter for this kind of thing, the only thing we can say for sure about it is that it’ll be slow and out of date very soon. Unless they find a way to make easily-reversible surgery, your best strategy is to wait for the interface that’s whatever the brain-linkage equivalent is to 300dpi, full colour, high refresh screens.
Medical advancements demand sacrifices. Someone needs to wear the interim devices. Desperation is one avenue for adoption. Artificial hearts are still incomplete and dicey half-measures, keeping people alive while they wait for a transplant or their heart heals. This is where advances in transplants and prosthetics find their volunteers and their motivation for progress. It’s difficult to envision a therapeutic brain implant – they are almost by definition augmentations.
Another avenue to irreversible early adoption is arenas where short-term enhancement is all that’s required. The military leaps to mind. With enlistment times measured in a few short years, rapid obsolescence of implants doesn’t matter as much; they can just pull virgin recruits and give them the newest, latest. If this seems unlikely, consider that with the right mix of rhetoric about duty and financial incentives, you can get people to do almost anything, including joining an organization where they will be professionally shot at.
Picture burnt-out veterans of the Af-Pak drone wars haunting the shells of long-deserted strip-malls, sporting rusty cranial jacks for which no one makes the proprietary plugs or software any longer… you can probably torrent some cracked warez that’ll run on your ageing wetware, but who knows what else is gonna be zipped into that self-installing .deb?
It is easy to envision these uncanny lapses between classes occurring when we start fusing bodies with machines, because to imply that our bodies can easily be obsolete machines threatens a certain humanist concept of our bodies as a unifying quality to our species. But we don’t have to start invading the body to find differences that affect our ability to stratify ourselves into classes. If the equilibriums of the relations of production can develop a rift between first and third world without personal technology, between upper class and lower class both before, and as we start to use computers to identify ourselves as class members, why would one not also occur between “cutting-edge” and “deprecated” classes as technology becomes more “personal”–magnetizing that one kernel social structure not yet susceptible to fracture and evolution? At what point will our devices themselves reinforce the equilibriums of choice they themselves provide, by being the motive force for separating individuals into groups? If not by lasting only as long as their minimal service contracts in a planned obsolescence that intensifies the slope of device turnover, then by active means? An app only for the iPhone 8, that can detect models of the iPhone 5 and below–letting you know that you’ve wandered into an area with a “less than savory technological element?” When will emergency services only guarantee that they can respond to data transponder calls, and not voice requests? The local watchman has been phased out, in favor of centrally dispatched patrols that require phones to access. Isn’t it only a matter of time before central dispatch is phased out for distributed drone network policing? The ability to use a computer is a requirement for many jobs. When will the ability to data uplink hands-free be a requirement?
The last time I remember encountering the word “coprocessor” was when my father bought himself a 486DX system with all the bells and whistles, some time back in the nineties. Now it’s doing the rounds in this widely-linked Technology Review article about brain-function bolt-ons; it’s a fairly serious examination of the possibilities of augmenting our mind-meat with technology, and well worth a read. Here’s a snippet:
Given the ever-increasing number of brain readout and control technologies available, a generalized brain coprocessor architecture could be enabled by defining common interfaces governing how component technologies talk to one another, as well as an “operating system” that defines how the overall system works as a unified whole–analogous to the way personal computers govern the interaction of their component hard drives, memories, processors, and displays. Such a brain coprocessor platform could facilitate innovation by enabling neuroengineers to focus on neural prosthetics at an algorithmic level, much as a computer programmer can work on a computer at a conceptual level without having to plan the fate of every individual bit. In addition, if new technologies come along, e.g., a new kind of neural recording technology, they could be incorporated into a system, and in principle rapidly coupled to existing computation and perturbation methods, without requiring the heavy readaptation of those other components.
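The quoted architecture is conceptual rather than concrete, but the “common interfaces plus an operating system layer” idea maps neatly onto ordinary software design. Here’s a minimal, purely hypothetical sketch in Python – every class and method name here is my invention for illustration, not anything from the article – showing how component technologies behind a shared interface could be swapped without touching the algorithmic layer:

```python
from abc import ABC, abstractmethod

class NeuralRecorder(ABC):
    """Common interface for any readout technology (electrodes, optics, etc.)."""
    @abstractmethod
    def read(self) -> list[float]:
        ...

class NeuralStimulator(ABC):
    """Common interface for any perturbation technology."""
    @abstractmethod
    def write(self, signal: list[float]) -> None:
        ...

class MockElectrodeArray(NeuralRecorder):
    """Stand-in recorder returning fixed sample values."""
    def read(self) -> list[float]:
        return [0.1, -0.2, 0.05]

class MockStimulator(NeuralStimulator):
    """Stand-in stimulator that just remembers what it was sent."""
    def __init__(self) -> None:
        self.last_signal: list[float] | None = None
    def write(self, signal: list[float]) -> None:
        self.last_signal = signal

class Coprocessor:
    """The 'operating system' layer: couples any recorder to any stimulator
    through an algorithm, much as a PC's OS mediates between components."""
    def __init__(self, recorder, stimulator, algorithm):
        self.recorder = recorder
        self.stimulator = stimulator
        self.algorithm = algorithm
    def step(self) -> None:
        samples = self.recorder.read()
        self.stimulator.write(self.algorithm(samples))

# A new recording technology would need only a new NeuralRecorder
# subclass; the algorithm and stimulator stay untouched.
cp = Coprocessor(MockElectrodeArray(), MockStimulator(),
                 algorithm=lambda xs: [2 * x for x in xs])
cp.step()
print(cp.stimulator.last_signal)  # [0.2, -0.4, 0.1]
```

The point of the sketch is the decoupling: the neuroengineer works at the level of `algorithm`, while device makers implement the interfaces – exactly the division of labour the article’s computer analogy suggests.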
Of course, the idea of a brain OS brings with it the inevitability of competing OSs in the marketplace… including a widely-used commercial product that needs patching once a week so that dodgy urban billboards can’t trojan your cerebellum and turn you into an unwitting evangelist for under-the-counter medicines and fake watches, an increasingly-popular slick-looking solution with a price-tag (and aspirational marketing) to match, and a plethora of forked open-source systems whose proponents can’t understand why their geeky obsession with being able to adjust the tiniest settings effectively excludes the wider audience they’d love to reach. Those “I’m a Mac / I’m a PC” ads will get a whole new lease of remixed and self-referential life…
Presenting the fact and fiction of tomorrow since 2001