Eric Drexler has written a paper entitled Biological and Nanomechanical Systems: Contrasts in Evolutionary Capacity that explores the differences between biological organisms and artificial machines, specifically why some products of intelligent design (i.e. design by humans) could never be created by natural selection. Drexler has written a short preface summarising his argument here:
The basic argument is as follows:
- Evolvable systems must be able, with some regularity, to tolerate (and occasionally benefit from) significant, incremental uncoordinated structural changes. This is a stringent constraint because, in an evolutionary context, “tolerate” means that they must function — and remain competitive — after each such change.
- Biological systems must satisfy this condition, and how they do so has pervasive and sometimes surprising consequences for how they are organized and how they develop.
- Designed systems need not (and generally do not) satisfy this condition, and this permits them to change more freely (evolving in a non-biological sense), through design. In a design process, structural changes can be widespread and coordinated, and intermediate designs can be fertile as concepts, even if they do not work well as physical systems.
As I read it (and I could be wrong), the basic notion underlying Drexler’s argument is that the kind of mechanical precision demanded by human engineers is absent from the products of natural evolution. Artificial technologies do not yet have that kind of redundancy. If you remove any part of your CPU it will not work. If you remove some parts of someone’s brain, it may still work. If you make a small alteration to an organism’s genome, the organism may still work.
In order for evolution to work, a replicator needs to keep functioning even when it carries a small mutation. Artificial technologies generally don’t work when there is some small error in the manufacturing process.
[from Eric Drexler on Metamodern][image from bbjee on flickr]
Following on from solar sails, we have a discussion of that other science-fictional bastion of propellantless propulsion – the space elevator. It turns out that space elevators and space tethers can be used for more than just getting into orbit:
A series of bolo tethers, each tether passing a spacecraft onto the next, could be used to achieve even larger orbit changes than a single system. For example, one tether system could catch a spacecraft from a very low orbit and swing it into a somewhat higher orbit. Another bolo picks it up from there and puts the satellite into a geosynchronous transfer orbit (GTO). A third tether catches the load again and imparts sufficient velocity to it so that it reaches escape velocity. A satellite initially orbiting just above the atmosphere could thus be slung all the way into an interplanetary orbit around the Sun, and all this without using any rocket propulsion and propellant…
This is in the context of a review by Centauri Dreams of Space Tethers and Space Elevators by Michel van Pelt, which explores tethers and space elevator concepts in some detail.
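The tether chain in the quote is essentially supplying delta-v in stages without propellant. As a rough sanity check (my own back-of-envelope figures using the vis-viva equation, not numbers from the book), here is roughly how much velocity each hand-off would need to impart to a satellite starting just above the atmosphere:

```python
import math

# Illustrative constants -- standard values, not taken from the book under review.
MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371e3          # mean Earth radius, m

def v_circular(r):
    """Speed of a circular orbit of radius r (metres) about Earth."""
    return math.sqrt(MU_EARTH / r)

def v_escape(r):
    """Escape speed from radius r."""
    return math.sqrt(2 * MU_EARTH / r)

def v_perigee(r_peri, r_apo):
    """Speed at perigee of an elliptical orbit, from the vis-viva equation."""
    a = (r_peri + r_apo) / 2  # semi-major axis
    return math.sqrt(MU_EARTH * (2 / r_peri - 1 / a))

r_leo = R_EARTH + 300e3  # a low orbit "just above the atmosphere" (300 km assumed)
r_geo = 42_164e3         # geosynchronous orbital radius

# Boost 1: circular low orbit -> geosynchronous transfer orbit (GTO)
dv_leo_to_gto = v_perigee(r_leo, r_geo) - v_circular(r_leo)
# Boost 2: GTO perigee speed -> escape speed at the same altitude
dv_gto_to_escape = v_escape(r_leo) - v_perigee(r_leo, r_geo)

print(f"LEO -> GTO boost:    {dv_leo_to_gto:.0f} m/s")
print(f"GTO -> escape boost: {dv_gto_to_escape:.0f} m/s")
```

This works out to roughly 2.4 km/s for the first boost and another 0.8 km/s to reach escape speed; a bolo tether chain would supply these increments by momentum exchange at each catch-and-release rather than by burning propellant.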
[from Centauri Dreams][image from Wikimedia and NASA]
Canadian company General Fusion are developing a fusion reactor that is based on a process called magnetized target fusion:
The reactor consists of a metal sphere with a diameter of three meters. Inside the sphere, a liquid mixture of lithium and lead spins to create a vortex with a vertical cavity in the center. Then, the researchers inject two donut-shaped plasma rings called spheromaks into the top and bottom of the vertical cavity – like “blowing smoke rings at each other,” explains Doug Richardson, chief executive of General Fusion.
The last step is mainly well-timed brute mechanical force. 220 pneumatically controlled pistons on the outer surface of the sphere are programmed to simultaneously ram the surface of the sphere one time per second. This force sends an acoustic wave through the spinning liquid that becomes a shock wave when it reaches the spheromaks in the center, triggering a fusion burst. …
General Fusion has just started developing simulations of the project, and hopes to build a test reactor and demonstrate net gain within five years. If everything goes according to plan, they will then build a 100-megawatt prototype reactor to be finished five years after that, which would cost an estimated $500 million.
Like general artificial intelligence, practical fusion power is one of those technologies that always seems to be 10–20 years in the future.
It is good to see alternatives to the well-known ITER project and Inertial Fusion Energy being explored, as this increases the chances that some genuinely practical approach will be found.
It’s also heartening to see (relatively) smaller operations engaging in fusion research.
[from Physorg][image from Physorg]
Researchers have developed an artificial cellular organelle to aid the artificial synthesis of the life-saving anti-clotting drug heparin:
Scientists have been working to create a synthetic version of the medication, because the current production method leaves it susceptible to contamination–in 2008, such an incident was responsible for killing scores of people. But the drug has proven incredibly difficult to create in a lab.
Much of the mystery of heparin production stems from the site of its natural synthesis: a cellular organelle called the Golgi apparatus, which processes and packages proteins for transport out of the cell, decorating the proteins with sugars to make glycoproteins. Precisely how it does this has eluded generations of scientists.
To better understand what was going on inside the Golgi, Linhardt and his colleagues decided to create their own version. The result: the first known artificial cell organelle, a small microfluidics chip that mimics some of the Golgi’s actions.
As well as the utility of being able to produce drugs in this way, it is impressive the degree of control that can be exerted over the matter:
The digital device allows the researchers to control the movement of a single microscopic droplet while they add enzymes and sugars, split droplets apart, and slowly build a molecule chain like heparin.
[from Technology Review, via KurzweilAI][image from Technology Review]
Japanese researchers are developing a means of storing data for periods of thousands of years, to help solve the problem of an imminent digital dark age:
The team, led by Professor Tadahiro Kuroda of Tokyo’s Keio University, has proposed storing data on semiconductor memory-chips made of what he describes as the most stable material on the Earth – silicon.
Tightly sealed, powered and read wirelessly, such a device, he claims, would yield its digital secrets even after 1000 years, making any stored information as resilient as if it were set in stone itself.
It’s a realisation that moved the researchers to name the disc-like, 15in (38cm) wide device the “Digital Rosetta Stone” after the revolutionary 2,200-year-old Egyptian original unearthed by Napoleon’s army.
This is a very similar concept to the Long Now Foundation’s Rosetta Disk, which is intended to be a very-long-term record of contemporary languages.
It is encouraging to know this problem is being studied and so many groups are looking for solutions.
[from the BBC][image from bwhistler on flickr]