Life needs light, right? Without a parent star, a planet stands little chance of developing the conditions under which complex chemistry can bootstrap itself into biological processes.
So goes the conventional wisdom, at any rate, but here’s a paper by two space boffins from the University of Chicago that posits the possibility of “Steppenwolf planets”, roaming the vast tracts of interstellar space with no star to call their own, but of sufficient mass and composition to harbour subsurface oceans heated by the still-active planetary core.
Technovelgy compares this to an old George R. R. Martin story with which I’m not familiar, but I seem to remember a more recent precedent in the latest Greg Egan collection, though the title of the story eludes me. And then there’s Peter Watts’ Blindsight… can anyone think of any others?
It occurs to me that, short of technological developments of a science fictional scale, the only real use we’ll ever be able to put these hypothetical Steppenwolf planets to would be… well, the settings for science fiction stories, basically. Oh, the irony!
But hey, lookit – I managed to write the whole post without a single “born to be wild” gag!
Via George Dvorsky, Popular Science ponders the possibility of military robots going rogue:
We are surprisingly far along in this radical reordering of the military’s ranks, yet neither the U.S. nor any other country has fashioned anything like a robot doctrine or even a clear policy on military machines. As quickly as countries build these systems, they want to deploy them, says Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield in England: “There’s been absolutely no international discussion. It’s all going forward without anyone talking to one another.” In his recent book Wired for War: The Robotics Revolution and Conflict in the 21st Century, Brookings Institution fellow P.W. Singer argues that robots and remotely operated weapons are transforming wars and the wider world in much the way gunpowder, mechanization and the atomic bomb did in previous generations. But Singer sees significant differences as well. “We’re experiencing Moore’s Law,” he told me, citing the axiom that computer processing power will double every two years, “but we haven’t got past Murphy’s Law.” Robots will come to possess far greater intelligence, with more ability to reason and self-adapt, and they will also of course acquire ever greater destructive power.
It turns out that it’s easier to design intelligent robots with greater independence than it is to prove that they will always operate safely. The “Technology Horizons” report emphasizes “the relative ease with which autonomous systems can be developed, in contrast to the burden of developing V&V [verification and validation] measures,” and the document affirms that “developing methods for establishing ‘certifiable trust in autonomous systems’ is the single greatest technical barrier that must be overcome to obtain the capability advantages that are achievable by increasing use of autonomous systems.” Ground and flight tests are one method of showing that machines work correctly, but they are expensive and extremely limited in the variables they can check. Software simulations can run through a vast number of scenarios cheaply, but there is no way to know for sure how the literal-minded machine will react when on missions in the messy real world. Daniel Thompson, the technical adviser to the Control Sciences Division at the Air Force research lab, told me that as machine autonomy evolves from autopilot to adaptive flight control and all the way to advanced learning systems, certifying that machines are doing what they’re supposed to becomes much more difficult. “We still need to develop the tools that would allow us to handle this exponential growth,” he says. “What we’re talking about here are things that are very complex.”
Of course, the easiest way to avoid rogue killer robots would be to build fewer of them.