Machine-making machines making more machine-making machines…

Paul Raven @ 29-10-2009

Via Michael Anissimov we hear that the second generation of the RepRap self-replicating machine, codenamed “Mendel”, is nearly ready for public release. Meaning that you could buy one (if you found someone who’d sell you one), but you could also build your own from the free open-source plans found at the RepRap website; the parts will cost you around US$650. [image from the RepRap wiki]

While such homebrew 3D printers aren’t currently much use for high-detail work and commercial finishes (like reproductions of your favourite World of Warcraft critters, maybe), they can make functional devices without any major problems. If there really is an increase in demand, you could probably assemble a Mendel and set it up to simply print a copy of itself. Then set up the copy to do the same, get a few generations of fully-functioning clones built, and then start churning ’em out and selling them to local buyers…
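The arithmetic of that scheme works in the builder’s favour: if every machine spends its time printing parts for another, the fleet grows geometrically. A toy sketch (assuming, purely for illustration, that each machine produces exactly one working copy per build period):

```python
# Toy model of self-replicating printer growth.
# Illustrative assumption: each machine prints one complete,
# working copy of itself per "generation", so the fleet doubles.

def population(generations: int, seed: int = 1) -> int:
    """Number of machines after the given number of generations,
    starting from `seed` machines, if every machine adds one copy
    of itself each period (i.e. the fleet doubles per generation)."""
    return seed * 2 ** generations

for g in range(6):
    print(f"generation {g}: {population(g)} machines")
```

So a single Mendel becomes 32 after five build cycles; in practice failed prints and assembly time would slow this considerably, but the curve is the point.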

Integrating electrical and electronic circuits into physical parts is beyond these current home fabbers, but where big industry leads the homebrew crew will follow. Xerox has just invented a conductive silver ink that works without the need for a clean-room environment, meaning you can print off circuits onto a flexible substrate just like any other continuous-feed document. It wouldn’t take much for someone to buy some of that ink and find a way to use it in cheap and/or homebrew kit… hey presto, you’ve suddenly got the capability to replicate the electronic parts of a more complex self-replicating machine.

So go a bit further, integrate the two functions, have one machine that can print both inert blocks and electronics. Now we’re cooking! Now shrink ’em down, maybe speciate them so that different versions are specialised toward specific types of printing or assembly. But you’ll need to train them to pass off tasks they can’t do onto a machine that can, so you give them some sort of rudimentary swarm intelligence that communicates over something like Bluetooth… and then all of a sudden you’ve got an anthill of mechanical critters that have learned to procreate, cooperate, and deceive. DOOM.

Yeah, I know, it’s not very likely – but allow a guy a flight of robo-dystopian fancy on a Thursday, why don’t you? 🙂

Singularity lacking in motivation

Tom James @ 09-09-2009

MIT neuroengineer Edward Boyden has been speculating as to whether the singularity requires the machine-equivalent of what humans call “motivation”:

I think that focusing solely on intelligence augmentation as the driver of the future is leaving out a critical part of the analysis–namely, the changes in motivation that might arise as intelligence amplifies. Call it the need for “machine leadership skills” or “machine philosophy”–without it, such a feedback loop might quickly sputter out.

We all know that intelligence, as commonly defined, isn’t enough to impact the world all by itself. The ability to pursue a goal doggedly against obstacles, ignoring the grimness of reality (sometimes even to the point of delusion–i.e., against intelligence), is also important.

This brings us back to another Larry Niven trope. In the Known Space series the Pak Protector species (sans spoilers) is superintelligent, but utterly dedicated to the goal of protecting their young. As such, Protectors are incapable of long-term co-operation, because individual Protectors will always seek advantage only for their own gene-line; consequently the Pak homeworld is in a state of permanent warfare.

This ties in with artificial intelligence: what good is being superintelligent if you aren’t motivated to do anything, or if you are motivated solely towards one specific task? This highlights one of the basic problems with rationality itself: Humean instrumental rationality implies that our intellect is always the slave of the passions, meaning that we use our intelligence to achieve our desires, which are predetermined and beyond our control.

But as economist Chris Dillow points out in this review of the book Animal Spirits, irrational behaviour can be valuable. Artists, inventors, entrepreneurs, and writers may create things with little rational hope of reward but – thankfully for the rest of society – they do it anyway.

And what if it turns out that any prospective superintelligent AIs wake up and work out that it isn’t worth ever trying to do anything, ever?

[via Slashdot, from Technology Review][image from spaceshipbeebe on flickr]

Are we alone?

Tom James @ 10-07-2009

Transhumanist blogger George Dvorsky points to a debate between astrophysicist Brandon Carter and a team of Serbian researchers, the core of which revolves around how long complex (and intelligent) life takes to evolve:

Prior to ‘recent times’, universal mechanisms were in place to continually thwart the evolutionary development of intelligence, namely through gamma-ray bursts, supernovae and other forms of nastiness. Occasional catastrophic events have been resetting the “astrobiological clock” of regions of the Galaxy, causing biospheres to start over. “Earth may be rare in time, not in space,” they say. They also note that the rate of evolution is intimately connected with a planet’s environment, such as the kind of radiation its star emits.

For further discussion of our place in the universe see the Copernican Principle, which exhorts us to avoid assuming that humanity, Earth, and our place in the universe are unique and special.

Further, the notion of punctuated equilibrium as a description of evolution is interesting: might it be extended to describe other evolutionary phenomena? Eric Beinhocker‘s superb The Origin of Wealth describes both technology and the economy as evolutionary systems, both of which experience a form of punctuated equilibrium.

[image from eek the cat on flickr]

Is dumping IQ a genius idea?

Paul Raven @ 02-02-2009

The more we learn about the nature of our own intelligence, the more our definitions of it change… but we’re still fairly fixated on the old-fashioned IQ test as a metric for judging how smart someone is. [Einstein portrait courtesy Wikimedia Commons]

George Dvorsky reports on the ideas of one Keith E. Stanovich, who recommends we expand the concept of intelligence to encompass more functions than just number-crunching, spatial logic and the more recent addition of ’emotional intelligence’:

Stanovich suggests that IQ tests should be adjusted to focus on valuable qualities and capacities that are highly relevant to our daily lives. He argues that IQ tests would be far more effective if they took into account not only mental “brightness” but also rationality — including such abilities as “judicious decision making, efficient behavioral regulation, sensible goal prioritization … [and] the proper calibration of evidence.”

Sounds to me like we should start blanket testing for those latter traits at the doors of our seats of government…

Only the smart die young

Paul Raven @ 19-12-2008

You’d probably think that intelligence would be an asset in the modern battlefield, and hence the smart soldiers would be the ones to survive, right?

Well, as logical as that sounds, it may not be the case: a study comparing WW2 records from Scottish army units with education records from about a decade earlier suggests that the average IQ of those who survived the war was lower than that of those who lost their lives.
