It’s late in the day in my time zone, but maybe not too late to celebrate women’s contribution to technology.
Today has been declared Ada Lovelace Day, a celebration of women in technology named after the first computer programmer. Born Augusta Ada Byron—yes, that Byron—she was schooled in mathematics at her mother’s insistence and, as Wikipedia says, her “interest in mathematics dominated her life even after her marriage.” (OMG NO WAY. ::facepalm::)
And here’s a list of inspirations, including none other than Xeni Jardin.
Feministing lists some notable achievers, too.
[Portrait of the lady, Wikimedia Commons]
To brighten your Monday morning, here’s some speculation on robot morality – though not from one of the usual sources. Nick Carr bounces off a Times Online story about a report from the US Office of Naval Research which “strongly warns the US military against complacency or shortcuts as military robot designers engage in the ‘rush to market’ and the pace of advances in artificial intelligence is increased.”
Carr digs into the text of the report itself [pdf], which demonstrates a caution somewhat at odds with the usual media image of the military-industrial complex:
Related major research efforts also are being devoted to enabling robots to learn from experience, raising the question of whether we can predict with reasonable certainty what the robot will learn. The answer seems to be negative, since if we could predict that, we would simply program the robot in the first place, instead of requiring learning. Learning may enable the robot to respond to novel situations, given the impracticality and impossibility of predicting all eventualities on the designer’s part. Thus, unpredictability in the behavior of complex robots is a major source of worry, especially if robots are to operate in unstructured environments, rather than the carefully‐structured domain of a factory.
The report goes on to consider potential training methods, and suggests that some sort of ‘moral programming’ might be the only way to ensure that our artificial warriors don’t run amok when exposed to the unpredictable scenarios of real conflict. Perhaps Carr is a science fiction reader, because he’s thinking beyond the obvious answers:
Of course, this raises deeper issues, which the authors don’t address: Can ethics be cleanly disassociated from emotion? Would the programming of morality into robots eventually lead, through bottom-up learning, to the emergence of a capacity for emotion as well? And would, at that point, the robots have a capacity not just for moral action but for moral choice – with all the messiness that goes with it?
It’s a tricky question; essentially the military want to have their cake and eat it, replacing fallible meat-soldiers with reliable mechanical substitutes that can do all the clever stuff without any of the emotional trickiness that comes as part of the clever-stuff bargain. [image by Dinora Lujan]
I’d go further still, and ask whether that capacity for emotion and moral action actually negates the entire point of using robots to fight wars – in other words, if robots are supposed to take the place of humans in situations we consider too dangerous to expend real people on, how close do a robot’s emotions and morality have to be to their human equivalents before it becomes immoral to use them in the same way?
Boeing’s new 787 Dreamliner, scheduled to be delivered to customers in 2010, requires about 6.5 million lines of software code to operate its avionics and onboard support systems.
Alfred Katzenbach, the director of information technology management at Daimler, has reportedly said that the radio and navigation system in the current S-class Mercedes-Benz requires over 20 million lines of code alone and that the car contains nearly as many ECUs as the new Airbus A380 (excluding the plane’s in-flight entertainment system).
There is a considerably more awesome car than an S-class described in Heavy Weather by Bruce Sterling, but I can’t find my copy to tell you how many lines of code it needed (I remember it was specified somewhere).