All posts by Paul Raven

This is my genome. There are many others like it, but this one is mine.

With the increasing difficulty of getting people to actually sign up for military service in the first place, you’d think the Pentagon would make more of an effort to not treat its soldiery as disposable meatbags. Or at least I’d think that… which is one more reason to add to the list of reasons that I’m not a five-star general, I guess.

Aaaaaanyway, here’s the skinny on a Pentagon report that recommends the Department of Defense get some more mileage out of their human resources by collecting and sequencing the DNA of their soldiers en masse [via grinding.be]:

According to the report, the Department of Defense (DoD) and the Veteran’s Administration (VA) “may be uniquely positioned to make great advances in this space. DoD has a large population of possible participants that can provide quality information on phenotype and the necessary DNA samples. The VA has enormous reach-back potential, wherein archived medical records and DNA samples could allow immediate longitudinal studies to be conducted.”

Specifically, the report recommends that the Pentagon begin collecting and sequencing soldiers’ DNA for “diagnostic and predictive applications.” It recommends that the military begin seeking correlations between soldiers’ genotypes and phenotypes (outward characteristics) “of relevance to the military.” And the report says — without offering details — that both “offensive and defensive military operations” could be affected.

That HuffPo piece leads off with the privacy angle, and wanders onto the more interesting (if potentially nasty) territory of promotional assessment based on genetic factors – a little like a version of Gattaca where your perfection entitles you to use bigger and better guns. (Or, if you’re lucky, a job in the generals’ tent instead of the trenches.) More interesting still is the news that the DoD already has over 3 million DNA samples on file…

HuffPo being HuffPo, the piece ends with a blustering condemnation of the report:

Soldiers, having signed away many of their rights upon enlistment, should not be used for research that would not otherwise comport with our values, just because they are conveniently available.

Our enormous military establishment is a whole world unto itself, and there is no good reason why that world should depart from the standards that Congress so definitively banned in the rest of the employment world. Congress should prohibit the military from spending money on sequencing individual soldiers’ genomes (without individualized medical or forensic cause) or carrying out large-scale research on soldiers’ DNA.

Yeah, good luck with that. Frankly, I’d have thought a cheaper and more effective option for selecting the optimum soldierly phenotypes would be taking a more honest approach at the recruitment screening phase…

Rebellious robots: how likely is the Terminator scenario?

Via George Dvorsky, Popular Science ponders the possibility of military robots going rogue:

We are surprisingly far along in this radical reordering of the military’s ranks, yet neither the U.S. nor any other country has fashioned anything like a robot doctrine or even a clear policy on military machines. As quickly as countries build these systems, they want to deploy them, says Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield in England: “There’s been absolutely no international discussion. It’s all going forward without anyone talking to one another.” In his recent book Wired for War: The Robotics Revolution and Conflict in the 21st Century, Brookings Institution fellow P.W. Singer argues that robots and remotely operated weapons are transforming wars and the wider world in much the way gunpowder, mechanization and the atomic bomb did in previous generations. But Singer sees significant differences as well. “We’re experiencing Moore’s Law,” he told me, citing the axiom that computer processing power will double every two years, “but we haven’t got past Murphy’s Law.” Robots will come to possess far greater intelligence, with more ability to reason and self-adapt, and they will also of course acquire ever greater destructive power.

[…]

It turns out that it’s easier to design intelligent robots with greater independence than it is to prove that they will always operate safely. The “Technology Horizons” report emphasizes “the relative ease with which autonomous systems can be developed, in contrast to the burden of developing V&V [verification and validation] measures,” and the document affirms that “developing methods for establishing ‘certifiable trust in autonomous systems’ is the single greatest technical barrier that must be overcome to obtain the capability advantages that are achievable by increasing use of autonomous systems.” Ground and flight tests are one method of showing that machines work correctly, but they are expensive and extremely limited in the variables they can check. Software simulations can run through a vast number of scenarios cheaply, but there is no way to know for sure how the literal-minded machine will react when on missions in the messy real world. Daniel Thompson, the technical adviser to the Control Sciences Division at the Air Force research lab, told me that as machine autonomy evolves from autopilot to adaptive flight control and all the way to advanced learning systems, certifying that machines are doing what they’re supposed to becomes much more difficult. “We still need to develop the tools that would allow us to handle this exponential growth,” he says. “What we’re talking about here are things that are very complex.”

Of course, the easiest way to avoid rogue killer robots would be to build fewer of them.

*tumbleweed*

[$mind]!=[$computer]: why uploading your brain probably won’t happen

Via Science Not Fiction, here’s one Timothy B Lee taking down that cornerstone of Singularitarianism, the uploading of minds to digital substrates. How can we hope to reverse-engineer something that wasn’t engineered in the first place?

You can’t emulate a natural system because natural systems don’t have designers, and therefore weren’t built to conform to any particular mathematical model. Modeling natural systems is much more difficult—indeed, so difficult that we use a different word, “simulation,” to describe the process. Creating a simulation of a natural system inherently means making judgment calls about which aspects of a physical system are the most important. And because there’s no underlying blueprint, these guesses are never perfect: it will always be necessary to leave out some details that affect the behavior of the overall system, which means that simulations are never more than approximately right. Weather simulations, for example, are never going to be able to predict precisely where each raindrop will fall, they only predict general large-scale trends, and only for a limited period of time. This is different from an emulator, which (if implemented well) can be expected to behave exactly like the system it is emulating, for as long as you care to run it.

Hanson’s fundamental mistake is to treat the brain like a human-designed system we could conceivably reverse-engineer rather than a natural system we can only simulate. We may have relatively good models for the operation of nerves, but these models are simplifications, and therefore they will differ in subtle ways from the operation of actual nerves. And these subtle micro-level inaccuracies will snowball into large-scale errors when we try to simulate an entire brain, in precisely the same way that small micro-level imperfections in weather models accumulate to make long-range forecasting inaccurate.
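That snowballing of tiny errors is easy to see in miniature. Here’s a quick Python sketch (names and numbers are my own, purely illustrative) using the logistic map, a stock textbook example of a chaotic system: a perturbation of one part in ten billion in the starting state eventually swamps the trajectory, which is exactly the worry Lee raises about micro-level inaccuracies in a brain or weather simulation.

```python
# Illustration only: tiny initial errors snowball in a chaotic system.
# The logistic map at r = 4 stands in for the "micro-level inaccuracies"
# Lee describes; it is not a model of the brain or the weather.

def logistic_map(x, r=4.0):
    """One step of the logistic map: x -> r * x * (1 - x)."""
    return r * x * (1 - x)

def run(x0, steps):
    """Iterate the map from x0, returning the full trajectory."""
    traj = [x0]
    for _ in range(steps):
        traj.append(logistic_map(traj[-1]))
    return traj

true_system = run(0.2, 60)          # the "real" system
simulation = run(0.2 + 1e-10, 60)   # same model, tiny measurement error

for n in (5, 20, 60):
    gap = abs(true_system[n] - simulation[n])
    print(f"step {n:2d}: divergence = {gap:.2e}")
```

Early on the two runs are indistinguishable; a few dozen steps later the gap is as large as the system’s whole range, so the simulation tells you nothing about the specific state of the “real” system, even though the model itself is exact.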

As discussed before, I rather think that mind simulation – much like its related discipline, general artificial intelligence – is one of those things whose possibility will only be resolved by its achievement (or lack thereof). Which, come to think of it, might explain the somewhat theological flavour of the discourse around it…

Hacker’s report says cyberwar fears misdirected

Not that I expect governments and military bureaucracies to change course in response to sensible thinking from qualified experts, but the guy who penned (or rather keyed) The Hacker’s Handbook back in the day has co-authored a report that suggests the recently fashionable wing-flapping over “cyberwar” is counterproductive:

Published today, Reducing Systemic Cybersecurity Risk says that a true cyberwar would have the destructive effects of conventional war but be fought exclusively in cyberspace – and as such is a “highly unlikely” occurrence.

[…]

Controversially, the OECD advises nations against adopting the Pentagon’s idea of setting up a military division – as it has under the auspices of the US air force’s Space Command – to fight cyber-security threats. While vested interests may want to see taxpayers’ money spent on such ventures, says Sommer, the military can only defend its own networks, not the private-sector critical networks we all depend on for gas, water, electricity and banking.

Co-authored with computer scientist Ian Brown of the Oxford Internet Institute, UK, the report says online attacks are unlikely ever to have global significance on the scale of, say, a disease pandemic or a run on the banks. But they say “localised misery and loss” could be caused by a successful attack on the internet’s routing structure, which governments must ensure is defended with investment in cyber-security training.

Personally, I think the Pentagon’s bluster and chest-thumping over “cyberwar” is thrown into an interesting light by the increasingly inescapable conclusion that they played a large part in commissioning the Stuxnet worm; as Chairman Bruce puts it, “what’s worse, strategically: Stuxnet, or proliferating Iranian nuclear weapons? How about a world where you’ve got proliferating Stuxnets AND proliferating Iranian nuclear weapons?”

Pandora’s box strikes again; code is far easier and cheaper to reverse engineer than a nuke, and requires no expensive and/or dangerous physical contraband. Beware of starting a knife-fight in a downtown full of ninjas.

36 years of weird: Boston’s Sci-Fi Film Festival

Love science fiction cinema? Live near Boston? Well, lucky you! Read this press release:

Although the final schedule for 2011’s Boston Sci-Fi Film Festival has not yet been announced, festival director Garen Daly has already noticed a jump in ticket sales. The festival began at the Orson Welles Cinemas as a 24-hour science fiction retrospective in 1976 and now stretches ten days, taking place at the Somerville Theater.

Like many other festivals, the Boston Sci-Fi Film Festival uses OpenFilm to collect submissions. The deadline this year is January 31st, but Assistant Curator Liz Pratt maintains that she and Daly will have plenty of time to finalize the selections. “We want to make sure we can receive as many submissions as possible,” she explained, “because this festival is a great jumping-off point for young directors and lower-budget films. And since we have so many hours to fill with the ‘Thon, we can always find room to fit in something great that we’ve found at the last minute.”

The festival has announced two official selections so far, the first being the original Battlestar Galactica. Daly has also acquired an extremely rare print of 20,000 Leagues Under the Sea, dated 1916/17 and directed by Stuart Paton. The film has not been shown in Boston since the 1920s and will be a once-in-a-lifetime chance for serious film fans. Director David Fincher has just announced he will be remaking the Jules Verne classic.

The tradition of the 24-hour ’Thon, as it affectionately came to be called, remains. Although Daly admits that “sharing one room for 24 hours will do strange things,” loyal festival goers are expected to arrive from around the country to indulge in a marathon of all things science fiction. Many of them have attended every year of the festival, which includes feature films as well as animation, vintage movie trailers, and other unannounced surprises.

What does Daly say is the most important thing for attendees to remember? “As we like to say, we’re old enough to know better but young enough to stay up.” He continued, “Bring extra deodorant, mouthwash and a change of socks. We also suggest you bring some eye drops and your sense of awe.”

About the Boston Sci-Fi Film Festival…

The 2011 Boston Sci-Fi Film Festival describes itself as “the oldest genre film festival in the country (we think.)” The festival will be held from February 11 to February 21 at the Somerville Theater, 55 Davis Square, Somerville MA. The ’Thon will begin on Sunday, February 20 at noon and will continue for 24 hours. Tickets and passes can be purchased at www.bostonsci-fi.com or at the theater. Friend them on Facebook or follow news and updates on the website.