Eric Drexler has written a paper entitled *Biological and Nanomechanical Systems: Contrasts in Evolutionary Capacity* that explores the differences between biological organisms and artificial machines, specifically why some products of intelligent design (i.e., design by humans) could never be created by natural selection. Drexler has written a short preface summarising his argument here:
The basic argument is as follows:
- Evolvable systems must be able, with some regularity, to tolerate (and occasionally benefit from) significant, incremental, uncoordinated structural changes. This is a stringent constraint because, in an evolutionary context, “tolerate” means that they must function, and remain competitive, after each such change.
- Biological systems must satisfy this condition, and how they do so has pervasive and sometimes surprising consequences for how they are organized and how they develop.
- Designed systems need not (and generally do not) satisfy this condition, and this permits them to change more freely (evolving in a non-biological sense), through design. In a design process, structural changes can be widespread and coordinated, and intermediate designs can be fertile as concepts, even if they do not work well as physical systems.
As I read it (and I could be wrong), the basic notion underlying Drexler’s argument is that the kind of mechanical precision demanded by human engineers is not present in the products of natural evolution. Artificial technologies are not yet robust to the removal or alteration of their parts. If you remove any part of your CPU it will not work. If you remove some parts of someone’s brain then it still works. If you make a small alteration to an organism’s genome it may still work.
In order for evolution to work, the replicator needs to function even when it carries some small mutation. Artificial technologies generally don’t work when there is some small error in the manufacturing process.
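To make that concrete, here is a toy hill-climbing simulation (my own sketch, not anything from Drexler’s paper; both fitness functions are invented for illustration). On a “biological” landscape where fitness degrades gracefully, small mutations accumulate into the target design; on a “machine” landscape where anything short of the exact design scores zero, selection has nothing to act on:

```python
import random

TARGET = [1] * 20  # stand-in for some "optimal" design

def graceful_fitness(genome):
    # Biological-style landscape: a mutant that matches the target
    # in most positions still functions, just slightly worse.
    return sum(g == t for g, t in zip(genome, TARGET))

def brittle_fitness(genome):
    # Machine-style landscape: like the CPU example, anything short
    # of the exact design "does not work" at all.
    return len(TARGET) if genome == TARGET else 0

def evolve(fitness, steps=5000, seed=1):
    rng = random.Random(seed)
    genome = [rng.randint(0, 1) for _ in TARGET]
    for _ in range(steps):
        mutant = genome[:]
        mutant[rng.randrange(len(mutant))] ^= 1  # one small, uncoordinated change
        if fitness(mutant) >= fitness(genome):   # kept only if still competitive
            genome = mutant
    return fitness(genome)

print("graceful landscape:", evolve(graceful_fitness))  # climbs to 20
print("brittle landscape: ", evolve(brittle_fitness))   # drifts at random, stays at 0
```

The only difference between the two runs is whether intermediate forms retain partial function, which is exactly the evolvability condition in the preface above.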
[from Eric Drexler on Metamodern] [image from bbjee on flickr]
I’m not a computer scientist, but I have noticed that (1) some computer programs (especially programs running on an interpreter rather than compiled code) can function to a large extent even after sections of their source code are randomly removed (try it, it’s fun!), and (2) my computer can also function to a large extent even with some of its hardware broken (e.g., defective sections on the disk drive, a failed internal cable to a port, etc.). In summary, there is *some* fault tolerance there.
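For anyone who wants to try it, here is a rough sketch of the kind of experiment I mean (the toy script is made up, and “still ran without error” is of course a very crude definition of functioning):

```python
import os
import random
import subprocess
import sys
import tempfile

# A made-up toy program to damage; any small script would do.
SOURCE = """\
values = list(range(10))
total = 0
for v in values:
    total += v
print("count:", len(values))
print("total:", total)
"""

def drop_random_line(source, rng):
    lines = source.splitlines()
    del lines[rng.randrange(len(lines))]
    return "\n".join(lines) + "\n"

rng = random.Random(42)
trials, survived = 50, 0
for _ in range(trials):
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(drop_random_line(SOURCE, rng))
        path = f.name
    # The interpreter only notices damage when execution reaches it,
    # so mutants that lose a late or unused line may still exit cleanly.
    result = subprocess.run([sys.executable, path], capture_output=True)
    os.unlink(path)
    if result.returncode == 0:
        survived += 1

print(f"{survived}/{trials} mutants still ran without error")
```

Mutants that lose a line nothing depends on keep running; mutants that lose a load-bearing line die at once, which is the fault-tolerance spectrum in miniature.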
“If you remove any part of your CPU it will not work. If you remove some parts of someone’s brain then it still works. If you make a small alteration to an organism’s genome it may still work.”
That depends on which part you remove and what you admit as a satisfactory definition of “working”.
Proponents of entirely randomized evolutionary development do seem sometimes to be doing a little surreptitious goalpost-moving: anything that doesn’t either kill the organism or make reproduction impossible is considered within “tolerance” limits for organic systems, but anything short of perfect function for human-designed systems and machines is considered unacceptable, because the mechanical systems have a known purpose, whereas the organic ones don’t (none that is admitted to the discussion, anyway). An argument that only works because one of the compared groups is held to a different standard than the other has some problems, in my view.