Organic computing, anyone? Ars Technica reports on a paper published in Nature, wherein the authors describe the creation of bacterial colonies that can act as logic gates:
The key to the new work is stretches of DNA that act as logical OR and NOR functions. Both of them rely on small stretches of DNA called promoters that control the activity of nearby genes. In this case, the authors used promoters that activate nearby genes in response to simple chemicals (arabinose and tetracycline for these two promoters). By putting both promoters next to a reporter gene, the system acted as an OR gate: when either of the chemicals was present, the reporter was on.
… the authors set up small clusters of bacterial colonies (small lumps of genetically identical cells). Each colony had a single logic gate (the authors used NOR, OR, and NOT gates). Depending on the arrangement of the colonies, each one could signal to only one or two neighbors, and each could only take input from one or two. The authors demonstrated a functional XOR gate built from four colonies, showing that all logical functions can be built from similar combinations.
The nice thing about using populations of cells is that this averages out some of the chaotic behavior typical of systems based on single cells. At a minimum, the systems they tested showed a five-fold difference between their on and off states. The downside is that, relative to a single cell, these systems are huge. The authors suggest that it might be possible to adapt their system to single cells, but it’s not clear that the same sort of performance could be maintained.
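The four-colony XOR described in the quote can be sketched as plain Boolean logic. The wiring below (three NOR colonies feeding one OR colony) is just one arrangement consistent with the gates mentioned, not necessarily the exact circuit from the paper:

```python
def NOR(a, b):
    # a colony that is "on" only when neither input chemical is present
    return int(not (a or b))

def OR(a, b):
    # a colony that is "on" when either input chemical is present
    return int(a or b)

def xor_colonies(a, b):
    # hypothetical four-colony layout: three NOR gates plus a final OR
    g1 = NOR(a, b)      # colony 1 sees both inputs
    g2 = NOR(a, g1)     # colony 2 sees input a and colony 1's signal
    g3 = NOR(b, g1)     # colony 3 sees input b and colony 1's signal
    return OR(g2, g3)   # colony 4 reports the output

# print the truth table
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_colonies(a, b))
```

Since NOR is functionally complete, any other logic function could in principle be wired up the same way, which is the point of the demonstration.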
Boole meets biology. Maybe one day we’ll grow computers instead of building them from silicon slices…
Here’s a proper science fictional “what-if?”, via the ever-reliable MetaFilter, where the brilliant slug-line “software for your wetware” was applied: is it possible to exploit the biological computation power of our visual apparatus to deal with tasks that we find difficult at a cognitive level? Or, to put it another way: can we set up the brain to act like a processor that uses complex visual stimuli as a form of program?
Or, even more simply: can we make diagrams that, when looked at, produce a certain computational output in our minds?
Yeah, I know, it sounds a bit crazy… but Mark Changizi sure comes across like he knows what he’s doing.
The broad strategy is to visually represent a computer program in such a way that, when one looks at the visual representation, one’s visual system naturally responds by carrying out the computation and generating a perception that encodes the appropriate output to the computation. That is, there would be a special kind of image that amounts to “visual software,” software our “visual hardware” (or brain) computes, and computes in such a way that the output can be “read off” the elicited perception.
Ideally, we would be able to glance at a complex visual stimulus—the program with inputs—and our visual system would automatically and effortlessly generate a perception that would inform us of the output of the computation. Visual stimuli like this would not only amount to a novel and useful visual notation, but would actually trick our visual systems into doing our work for us.
And the visual stimuli he’s on about [image borrowed from linked article]?
Well, that elicited a few cognitions from my brain… though I’m not sure that any of those cognitions are particularly useful.
The New York Times has a brief, appreciative item about Vernor Vinge and his novel Rainbows End. Here’s a nice if-this-goes-on snip:
“These people in ‘Rainbows End’ have the attention span of a butterfly,” [Vinge] said. “They’ll alight on a topic, use it in a particular way and then they’re on to something else. Right now people worry that we don’t have lifetime employment anymore. How extreme could that get? I could imagine a world where everything is piecework and the piece duration is less than a minute.”
Researchers at the Commerce Department’s Joint Quantum Institute (JQI) and the University of Maryland have used laser beams to produce less “noisy” images, according to Science Express via Science Daily. The experiment could lead to better computers and information storage. The images are born in pairs, “like twins separated at birth,” at slightly different frequencies. None of that is necessarily weird, but:
Look at one quantum image, and it displays random and unpredictable changes over time. Look at the other image, and it exhibits very similar random fluctuations at the same time, even if the two images are far apart and unable to transmit information to one another. They are “entangled”–their properties are linked in such a way that they exist as a unit rather than individually.
The photo-montage of quantum cats is made from color-treated images used in the experiment. The lines suggest how entanglement occurs. What else could we do with quantum entanglement? It would be fun to make entangled drawings or paintings.
[Image: Vincent Boyer/JQI]
In light of the recent and tragic bridge collapse in Minneapolis, mathematics uber-geek Stephen Wolfram has been doing some thinking about how evolutionary computing could be used to design stronger bridge structures. It looks like strength doesn’t always correlate with regularity of pattern. [BoingBoing]
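The evolutionary approach Wolfram is playing with can be sketched in a few lines: mutate a candidate structure, keep it if it scores at least as well, repeat. Everything here is a toy assumption — the bit-string “layout” and the stand-in fitness function are placeholders for a real structural encoding and a real strength evaluation (e.g. finite-element analysis):

```python
import random

def mutate(layout, rate=0.05):
    # flip each "member present" bit with small probability
    return [b ^ (random.random() < rate) for b in layout]

def fitness(layout):
    # stand-in for a structural strength score; a real version
    # would simulate loads on the encoded bridge geometry
    return sum(layout)

random.seed(0)
best = [random.randint(0, 1) for _ in range(32)]
for _ in range(200):
    child = mutate(best)
    if fitness(child) >= fitness(best):  # hill-climbing: keep non-worse mutants
        best = child
```

The interesting point from Wolfram’s post is that the winners of this kind of search often look irregular — the fitness landscape doesn’t reward tidy, symmetric designs the way a human engineer might expect.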