Tag Archives: military
This is my genome. There are many others like it, but this one is mine.
With the increasing difficulty of getting people to actually sign up for military service in the first place, you’d think the Pentagon would make more of an effort to not treat its soldiery as disposable meatbags. Or at least I’d think that… which is one more reason to add to the list of reasons that I’m not a five-star general, I guess.
Aaaaaanyway, here’s the skinny on a Pentagon report that recommends the Department of Defense get some more mileage out of their human resources by collecting and sequencing the DNA of their soldiers en masse [via grinding.be]:
According to the report, the Department of Defense (DoD) and the Veteran’s Administration (VA) “may be uniquely positioned to make great advances in this space. DoD has a large population of possible participants that can provide quality information on phenotype and the necessary DNA samples. The VA has enormous reach-back potential, wherein archived medical records and DNA samples could allow immediate longitudinal studies to be conducted.”
Specifically, the report recommends that the Pentagon begin collecting and sequencing soldiers’ DNA for “diagnostic and predictive applications.” It recommends that the military begin seeking correlations between soldiers’ genotypes and phenotypes (outward characteristics) “of relevance to the military” in order to correlate the two. And the report says — without offering details — that both “offensive and defensive military operations” could be affected.
That HuffPo piece leads off with the privacy angle, and wanders onto the more interesting (if potentially nasty) territory of promotional assessment based on genetic factors – a little like a version of Gattaca where your perfection entitles you to use bigger and better guns. (Or, if you’re lucky, a job in the generals’ tent instead of the trenches.) More interesting still is the news that the DoD already has over 3 million DNA samples on file…
HuffPo being HuffPo, the piece ends with a blustering condemnation of the report:
Soldiers, having signed away many of their rights upon enlistment, should not be used for research that would not otherwise comport with our values, just because they are conveniently available.
Our enormous military establishment is a whole world unto itself, and there is no good reason why that world should depart from the standards that Congress so definitively banned in the rest of the employment world. Congress should prohibit the military from spending money on sequencing individual soldiers’ genomes (without individualized medical or forensic cause) or carrying out large-scale research on soldiers’ DNA.
Yeah, good luck with that. Frankly, I’d have thought a cheaper and more effective option for selecting the optimum soldierly phenotypes would be taking a more honest approach at the recruitment screening phase…
Rebellious robots: how likely is the Terminator scenario?
Via George Dvorsky, Popular Science ponders the possibility of military robots going rogue:
We are surprisingly far along in this radical reordering of the military’s ranks, yet neither the U.S. nor any other country has fashioned anything like a robot doctrine or even a clear policy on military machines. As quickly as countries build these systems, they want to deploy them, says Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield in England: “There’s been absolutely no international discussion. It’s all going forward without anyone talking to one another.” In his recent book Wired for War: The Robotics Revolution and Conflict in the 21st Century, Brookings Institution fellow P.W. Singer argues that robots and remotely operated weapons are transforming wars and the wider world in much the way gunpowder, mechanization and the atomic bomb did in previous generations. But Singer sees significant differences as well. “We’re experiencing Moore’s Law,” he told me, citing the axiom that computer processing power will double every two years, “but we haven’t got past Murphy’s Law.” Robots will come to possess far greater intelligence, with more ability to reason and self-adapt, and they will also of course acquire ever greater destructive power.
[…]
It turns out that it’s easier to design intelligent robots with greater independence than it is to prove that they will always operate safely. The “Technology Horizons” report emphasizes “the relative ease with which autonomous systems can be developed, in contrast to the burden of developing V&V [verification and validation] measures,” and the document affirms that “developing methods for establishing ‘certifiable trust in autonomous systems’ is the single greatest technical barrier that must be overcome to obtain the capability advantages that are achievable by increasing use of autonomous systems.” Ground and flight tests are one method of showing that machines work correctly, but they are expensive and extremely limited in the variables they can check. Software simulations can run through a vast number of scenarios cheaply, but there is no way to know for sure how the literal-minded machine will react when on missions in the messy real world. Daniel Thompson, the technical adviser to the Control Sciences Division at the Air Force research lab, told me that as machine autonomy evolves from autopilot to adaptive flight control and all the way to advanced learning systems, certifying that machines are doing what they’re supposed to becomes much more difficult. “We still need to develop the tools that would allow us to handle this exponential growth,” he says. “What we’re talking about here are things that are very complex.”
Of course, the easiest way to avoid rogue killer robots would be to build fewer of them.
*tumbleweed*
The trouble with drones
When military hardware and software IP disputes meet: via Slashdot we hear of a pending lawsuit that may ground the CIA’s favourite toys, the Predator drones. In a nutshell, a small software firm called IISi alleges that some of their proprietary software was pirated by another firm, Netezza, who then sold it on to a government client which further evidence revealed to be none other than the Central Intelligence Agency. Plenty of grim irony in there, even before you factor in the allegations from IISi that the hacked software may render the drone targeting systems inaccurate to the tune of plus-or-minus forty feet. So it’s not all bad news for the CIA: at least they can start blaming collateral damage on shoddy outsourcing.
In other drone news, Chairman Bruce draws our attention to Taiwan, whose ministry of defense confirms that it is developing UAV designs of its own. We can assume that, in the grand tradition of Taiwanese electronics products, these will be cheap-and-cheerful alternatives to the more respectable brands of the Western military-industrial complex, ideal for tin-pot totalitarians and networked non-geographical political entities working to tight budgets. Hell only knows where they’ll get the software from, though.
We can only hope
“… nuclear weapons may come to be seen as a strange fetishistic behavior by nations at a certain period in history. They were insanely expensive and thoroughly useless. Their only function was to keep a bizarre form of score.” – Richard Rhodes [link and video via Chairman Bruce]