What happens to the internet if there’s a viral pandemic?

Our beloved internet could suffer badly at the hands of a pandemic virus. And not just computer viruses, either: a pandemic of an illness like swine flu might have knock-on effects in the digital domain, and the US Government Accountability Office isn’t pleased that no one appears to be making any contingency plans [via Slashdot]:

… the Homeland Security Department accused the GAO of having unrealistic expectations of how the Internet could be managed if millions began to telework from home at the same time as bored or sick schoolchildren were playing online, sucking up valuable bandwidth.

Experts have for years pointed to the potential problem of Internet access during a severe pandemic, which would be a unique kind of emergency. It would be global, affecting many areas at once, and would last for weeks or months, unlike a disaster such as a hurricane or earthquake.

H1N1 swine flu has been declared a pandemic but is considered a moderate one. Health experts say a worse one — or a worsening of this one — could result in 40 percent absentee rates at work and school at any given time and closed offices, transportation links and other gathering places.

And what do you do if you’re stuck at home under quarantine instead of going to work or school? You fire up your computer and mess around on the internet (unless that’s just me), meaning a severe pandemic will cause a serious uptick in bandwidth demand, potentially slowing down essential infrastructure systems at the same time. [image by matthewjethall]

In a rare display of pragmatism, Homeland Security has told the GAO that there’s not really much it can do to prepare for this sort of eventuality – despite theories to the contrary, the internet is not a series of tubes. Commercial ISPs are unlikely to be keen on being told to lock down the connections of their customers, either.

Homeland Security may well have spent a moment thinking about the psyops angle of such a move, too, as even the positive practical results of restricting consumer bandwidth might be seriously outweighed by the psychological negatives. Ill internet habitués might break quarantine to seek out somewhere with a faster connection; the fear and paranoia that attend a serious pandemic might be amplified by the perceived restriction of information channels (“what are they trying to hide?”)… so having technological impossibility as a scapegoat is probably something of a relief.

The good news, however, is that most of the major securities exchanges and financial institutions have their own private networks that don’t rely on publicly-available bandwidth, so we can rest easy in the knowledge that, even when we’re stuck at home sweating out a nasty virus without so much as a bit-rate that’ll let us peer at Fark every ten minutes, greedy shysters in expensive suits will still be able to skim the cream from the global misery without any inconvenience.

Frankly, I’m not sure that the issues would be as big as is being suggested; schools and businesses surely contribute significantly toward bandwidth consumption during the daytime, so there’d be some slack to take up thanks to absenteeism. The whole thing has a slight smell of Millennium Bug about it, at least for me; if there’s a networking expert in the audience, I’d appreciate being set straight on the details.
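
Just to make that hunch concrete, here’s a toy model of the demand shift – a minimal back-of-envelope sketch, with every number invented purely for illustration:

```python
# Toy model of pandemic bandwidth demand; every figure below is an
# invented placeholder, not measured data.
absentee_rate = 0.40          # hypothetical worst-case absenteeism
residential_share = 0.50      # assumed share of normal daytime traffic from homes
office_school_share = 1 - residential_share

# Assume each absentee doubles their usual daytime home bandwidth
# (teleworking, streaming, bored schoolkids gaming online).
home_multiplier = 2.0

new_residential = residential_share * (1 + absentee_rate * (home_multiplier - 1))
new_office_school = office_school_share * (1 - absentee_rate)

baseline = residential_share + office_school_share
pandemic = new_residential + new_office_school

print(f"aggregate demand: {pandemic / baseline:.0%} of normal")                       # ~100%
print(f"residential last mile: {new_residential / residential_share:.0%} of normal")  # ~140%
```

With those (entirely made-up) numbers, the aggregate comes out roughly level – there’s your slack – but the residential last mile takes a forty percent hit, which is presumably the bit the GAO is actually worried about.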

And while we’re talking about the internet, good old DARPA – who invented the thing in the first place – are trying to work out how to extend it into orbit and link up our swarm of satellites to their own broadband connections. That’s easy enough (though still slow) when you can set up a persistent link from ground to orbit with a geostationary platform, but not so simple for sats that move relative to the Earth’s surface. If you’ve got an idea of how to get around the problem, DARPA would like to hear from you before 5th November…

… but in the meantime, would anyone like to open a book on how soon a military or commercial satellite will be hacked over its own broadband connection?
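
For a rough sense of why those moving satellites are the hard part, here’s a quick sketch of how long a ground station can even see a satellite as it passes overhead – assuming a circular orbit, a zero-degree horizon and ignoring the Earth’s rotation:

```python
# How long can one ground station talk to a satellite per pass?
# Best-case overhead pass, circular orbit, 0-degree horizon,
# Earth's rotation ignored -- rough numbers only.
import math

MU = 3.986e14        # Earth's gravitational parameter (m^3/s^2)
R_EARTH = 6.371e6    # mean Earth radius (m)

def max_pass_seconds(altitude_m: float) -> float:
    r = R_EARTH + altitude_m
    period = 2 * math.pi * math.sqrt(r**3 / MU)   # orbital period (Kepler)
    half_angle = math.acos(R_EARTH / r)           # half-angle of the visibility arc
    return period * (half_angle / math.pi)        # visible fraction of the orbit

print(f"LEO at 500 km: ~{max_pass_seconds(500e3) / 60:.0f} minutes per pass")  # ~12 min
```

A geostationary platform never sets, so a single dish and a persistent link will do; a low-orbit satellite gives you ten-odd minutes at a time, after which the connection has to hop to another ground station or another satellite – hence DARPA’s appetite for fresh ideas.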

The Facebook graveyard

This week’s big social network story is Facebook’s announcement that they now allow the user profiles of people who’ve died to be “memorialised” – frozen in perpetuity (one presumes) so that you can still visit them, like some digital tombstone or memorial bench. [image by moggs oceanlane]

As that blog post makes clear, Facebook are obviously reacting to a genuine human need, though it would be easy (if highly cynical) to suggest that they’d like a slice of the growing traffic for online memorial sites, as mentioned earlier this month. But questions remain: who will maintain these profiles, and for how long? Will they drop out of the system when their last friend connection is severed (assuming, of course, that any social network platform lasts long enough for that to happen)? Will those memorial profiles be any more or less transferable to new networks than those of the living? Will the memorials still have adverts surrounding them, like a normal Facebook page, and is that a morally acceptable price for their maintenance?

The biggest question is obviously “how much checking will Facebook do to ensure that the person really is dead?” Internet “pseuicides” aren’t a new phenomenon, and it’s not clear whether the Facebook crew have a procedure in mind to prevent a group of friends faking a death, whether in collusion with the owner of the profile in question or otherwise. Given how easy it is to hack many people’s public email accounts (poor password choices, easily reverse-engineered ‘secret questions’ and so on), unless Facebook demands some sort of legal confirmation that the owner has indeed passed away, it could be a relatively simple scam to have a living person declared deceased. Indeed, that might even become a popular black economy service, alongside fake IDs and new identities.

Head-mounted augmented reality computers: the budget hack versus the bespoke device

One of the more interesting things about the hardware hacking scene is comparing the results of different methodologies. Some folk prefer to develop gadgets that are as close to production-grade products as possible, while others are more focussed on the low-budget proof-of-concept kludge… and this week has seen examples of both approaches as applied to augmented reality visor-computers.

First up, the craftsman approach. Pascal Brisset was frustrated with wearable computer concepts that relied on some sort of back- or belt-mounted processor unit to drive the headset, so he decided to build the whole system onto an off-the-shelf visor VDU [via Hack A Day]. As you can see, the results are pretty compact:

Pascal Brisset's wxhmd wearable computer

It runs on Linux, too, but that probably went without saying. Of course, it’s just a proof-of-concept rather than something that Brisset could start building for consumers. As he states in his documentation disclaimers:

The system draws 1 A with no power optimizations. This is acceptable since nobody would want to spend more than a few minutes with two pulsed microwave RF transmitters, an overheating lithium battery and eye-straining optics strapped to their forehead anyway.

Quite.
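
To put that one-amp figure in context, here’s the sort of quick arithmetic Brisset is alluding to – assuming a single 3.7 V lithium cell of a typical 2000 mAh capacity (my guess, not a figure from his documentation):

```python
# Rough runtime for a 1 A draw from one lithium cell. The 2000 mAh
# capacity and 3.7 V are assumed typical values, not from Brisset's docs.
capacity_mah = 2000
draw_ma = 1000                 # 1 A, as per the disclaimer
cell_voltage = 3.7             # nominal Li-ion voltage

runtime_hours = capacity_mah / draw_ma
power_watts = cell_voltage * draw_ma / 1000

print(f"~{runtime_hours:.0f} hours of runtime, ~{power_watts:.1f} W dissipated on your forehead")
```

Two hours of battery and nearly four watts of heat strapped to your head: you can see why he’s in no rush to take it to market.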

Meanwhile, down at the other end of the brain-farm, Andrew Lim built himself a backyard VR helmet using nothing more than an HTC Magic handset and a few dollars worth of other gubbins [also via Hack A Day]. It’s quite obviously a much more lo-fi affair than Brisset’s contraption:

It does have a certain goofy charm, doesn’t it? But again, it’s hardly the sort of thing you’d try selling for any practical purpose whatsoever – the point being proven here is that augmented reality (and other emerging technologies like it) is not necessarily the exclusive domain of big corporations or slick new start-ups; where there’s a will (and some ingenuity), there’s a way. Or, as I often end up saying here, Everything Can (And Will) Be Hacked.

Personally, I find that reassuring, because the battle for direct access to our retinas is just starting to heat up. The big tech corporations can see there’s money to be made with wearable tech in the very near future, and they’re preparing to roll out the hardware as soon as next year (if press releases are to be trusted, which they quite possibly aren’t)… and as Jan Chipchase pointed out, the way they’ll make the stuff affordable to you is by teaming up with companies who’re desperate for the direct pipeline to your brainmeat that said hardware will provide. They need to ride that augmented reality hype curve, after all – at least until it reaches the trough of disillusionment.

The geek finds its own use for things: Google Wave RPGs

A few weeks back, all the major tech blogs were saying “well, Google Wave seems pretty neat, but we’re not really sure what it’s for”. Google themselves surely had a number of potential applications in mind, but whether using Wave as a platform for roleplaying games was one of them remains an unknown quantity. (It’s surely a cheaper option than that touchscreen table mod, though.)

The waves are persistent, accessible to anyone who’s added to them, and include the ability to track changes, so they ultimately work quite well as a medium for the non-tactical parts of an RPG. A newcomer can jump right in and get up-to-speed on past interactions, and a GM or industrious player can constantly maintain the official record of play by going back and fixing errors, formatting text, adding and deleting material, and reorganizing posts. Character generation seems to work quite well in Wave, since players can develop the shared character sheet at their own pace with periodic feedback from the GM.

Unfortunately for those of us who are more into the tactical side of RPGs, it isn’t yet well-suited to a game that involves either a lot of dice rolling or careful tracking of player and NPC positions. Right now, Wave bots are hard to get working reliably and widgets are scarce, which means that if you don’t want to use the standard dice bot that Wave debuted with (dice bots are an old IRC favorite) then there isn’t really another convenient option; rolls are either made with real dice and then posted on the honor system, or they’re posted in batches and a GM then uses them in sequence.
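
For anyone who’s never met one, a dice bot is a very simple beast. Here’s a minimal Python sketch of the classic IRC-style roller – not the actual bot Wave shipped with – parsing the usual “3d6+2” notation:

```python
# Minimal IRC-style dice roller: parses "NdS+M" notation such as
# "3d6+2" or "1d20". A sketch, not the bot Wave actually debuted with.
import random
import re

def roll(spec: str) -> int:
    """Roll dice in 'NdS+M' notation and return the total."""
    m = re.fullmatch(r"(\d+)d(\d+)([+-]\d+)?", spec.strip())
    if not m:
        raise ValueError(f"bad dice spec: {spec!r}")
    count, sides = int(m.group(1)), int(m.group(2))
    modifier = int(m.group(3) or 0)
    return sum(random.randint(1, sides) for _ in range(count)) + modifier

print(roll("3d6+2"))   # e.g. 13
```

The hard part, as the article notes, isn’t the rolling itself – it’s getting a bot like that running reliably inside Wave, visible to every player, with no way to quietly re-roll.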

In truth, this probably isn’t all that big a surprise – from IRC and email onwards, pretty much every internet communications format has been bent to the whims of gamer geeks. But it highlights a fundamental difference in the way people approach a new technology: a journalist goes in thinking “what is this meant to do?”, but the true digital native goes in thinking “what can this do for me?”.

Both questions are valuable, of course, but I suspect that it’s the increased penetration of the latter mindset that ensures I get the bulk of my news and opinion journalism online. Whether the difference in underlying philosophies that those questions represent is a function of network architecture or a cause of it remains, naturally, an unanswerable (but greatly entertaining) point for debate… maybe we could start a Wave for that? 😉

(The quoted passages above are from Ars Technica’s write-up: http://arstechnica.com/gaming/news/2009/10/google-wave-we-came-we-saw-we-played-dd.ars )

Big up Matt Staggs, who I believe suggested this a few weeks back.

Natural nuclear reactors

My magical statistics monkeys tell me that last week’s post on dissociative fugues was surprisingly popular, so I thought I’d share another article I found fascinating. Yet another hat-tip to Geoff Manaugh at BLDGBLOG for this one; it’s a Scientific American report on naturally occurring nuclear reactors. Yes, you read that right – nuclear power plants that just happened by geological chance.

More than two tons of this plutonium isotope were generated within the Oklo deposit. Although almost all this material, which has a 24,000-year half-life, has since disappeared (primarily through natural radioactive decay), some of the plutonium itself underwent fission, as attested by the presence of its characteristic fission products. The abundance of those lighter elements allowed scientists to deduce that fission reactions must have gone on for hundreds of thousands of years. From the amount of uranium 235 consumed, they calculated the total energy released, 15,000 megawatt-years, and from this and other evidence were able to work out the average power output, which was probably less than 100 kilowatts—say, enough to run a few dozen toasters.

(Or a few dozen highly-efficient computers, perhaps?)

It is truly amazing that more than a dozen natural reactors spontaneously sprang into existence and that they managed to maintain a modest power output for perhaps a few hundred millennia. Why is it that these parts of the deposit did not explode and destroy themselves right after nuclear chain reactions began? What mechanism provided the necessary self-regulation? Did these reactors run steadily or in fits and starts?
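
Incidentally, the arithmetic in that first excerpt is easy to check, taking 150,000 years as a round number from the article’s “hundreds of thousands of years” range:

```python
# Sanity-checking Scientific American's figures: 15,000 megawatt-years
# of total energy over an assumed 150,000 years of operation (a round
# number within the article's stated range).
total_energy_mw_years = 15_000
duration_years = 150_000

average_power_kw = total_energy_mw_years / duration_years * 1000
print(f"average output: ~{average_power_kw:.0f} kW")   # ~100 kW, as the article says
```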

Go read the whole thing; the science isn’t too heavy, and it’s a pretty wild idea. I’m pretty sure I’ve read about something similar in a Stephen Baxter novel (though I can’t for the life of me remember which one); at the time I assumed he was speculating in a vacuum, but I guess I should have known better. 🙂

Regarding the popularity of the dissociative fugues post, I’ve been wondering whether perhaps I should be spending more time linking to interesting stuff and less time waffling around on tangents? It’s you guys who read this stuff, so what would you like to see here – more random points of interest, more speculative ramblings, or a blend of the two?