Tag Archives: algorithms

Econopocalypse scenario #3654: the Fat-finger Collapse

Ars Technica has an interesting article about a couple of recent stock-market glitches caused by high-frequency trading algorithms run amok. Long story short: a screw-up at Credit Suisse was caused by “a trader who accidentally double-clicked an icon in a trading program’s interface, when he should’ve single-clicked.” Yipes.

OK, so it’s not quite the same as a tired technician leaning on the nuclear launch button by accident, but given the utter dependence we have on the instruments of high-speed high finance, similar mistakes could cause global catastrophes. [image by Coffee Maker]

The problem stems from high-frequency trading: computer-assisted stock deals that occur in the blink of an eye, often without much human interaction. Minor errors are amplified at the speed of light (or at least the speed of data in optical fibers) across the networks, causing fluctuations that folk like you and me never notice, but which cost bankers and investors thousands of dollars in losses and fines…

Of course, the fact that such computer-driven volatility hurts day traders matters little to long-term investors. But the fear is that these glitches are fleeting indications that the system as a whole is vulnerable and unstable, and that the right combination of circumstances could cause what happened to RMBS to happen on a wider scale. This is especially true as even more of the trading activity, even among individual traders, shifts to automated platforms.

However, it’s not all doom and gloom; the last few years have seen a sharp increase in small trading firms of the two-guys-and-a-fast-computer type – small independent operators using the same techniques as the big banks to trade automatically via commercially-available trading software.

The Obama administration’s efforts to rein in high-frequency trading by eliminating flash orders and banning proprietary trading (much of which is HFT-based) from large banks will probably have the effect of leveling the playing field a bit for these smaller algo shops. As Matthew Goldstein points out in his Reuters article on the topic, the prop desks may disappear, but the software and expertise will not. Instead of being concentrated at a few large banks, algo trading will just spread, as the talent behind it either jumps to new funds or goes solo.

Once again, the network corrodes hegemony… but whether a world where anyone and his dog can engage in automated high-frequency wheeler-dealing will be a safer, better and richer one remains to be seen.

Stoned neural networks, wet computers and audio Darwinism

Here’s a handful of links from the weird and wonderful world of computer science…

First of all, Telepathic-critterdrug is described as “a controversial fork of the open source artificial-life sim Critterding, a physics sandbox where blocky creatures evolve neural nets in a survival contest. What we’ve done is to give these animals an extra retina which is shared with the whole population. It’s extended through time like a movie and they can write to it for communication or pleasure. Since this introduces the possibility of the creation of art, we decided to give them a selection of narcotics, stimulants and psychedelics. This is not in Critterding. The end result is a high-color cellular automaton running on a substrate that thinks and evolves, and may actually produce hallucinations in the user.”

You can download your own copy of this bizarre experiment to play with. Quite what it’s supposed to achieve (other than entertaining its creators) I’m not entirely sure… but then again, that’s what we tend to think about the reality we inhabit, so maybe there’s some sort of simulation-theory microcosm metaphor that could be applied here, eh?

Next up, wetware is about to make the transition from science-fictional neologism to a genuine branch of technological research; boffins at the University of Southampton are hosting an international collaboration aimed at making a chemical computer based on biological principles [via SlashDot].

The goal is not to make a better computer than conventional ones, said project collaborator Klaus-Peter Zauner […] but rather to be able to compute in new environments.

“The type of wet information technology we are working towards will not find its near-term application in running business software,” Dr Zauner told BBC News.

“But it will open up application domains where current IT does not offer any solutions – controlling molecular robots, fine-grained control of chemical assembly, and intelligent drugs that process the chemical signals of the human body and act according to the local biochemical state of the cell.”

And last but not least, DarwinTunes is an experiment by two ICL professors to see whether they can use genetic algorithms to “evolve” enjoyable music from chaos, using the feedback of human listeners [via MetaFilter]. The DarwinTunes project website is sadly lacking a page that explains the project in a nutshell (or at least one that’s easily located by a first-time visitor), but a bit of poking around in the early blog entries should reveal the details. Or you can just listen to their 500th-generation riffs and loops from the project, which is still running.
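The mechanics are simple enough to sketch, though: a genetic algorithm whose fitness function is nothing more than listeners’ ratings. The outline below is purely illustrative – the population size, the genome encoding and the listener_rating stub are my own placeholders, not anything DarwinTunes actually publishes:

```python
import random

# Illustrative genetic-algorithm loop with human ratings as the fitness
# function. All numbers and encodings here are invented placeholders.
POPULATION_SIZE = 100
GENOME_LENGTH = 32   # imagine each gene controls a note, rhythm or synth parameter

def random_genome():
    return [random.random() for _ in range(GENOME_LENGTH)]

def crossover(a, b):
    # Splice two parent loops together at a random cut point.
    cut = random.randrange(1, GENOME_LENGTH)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):
    # Occasionally replace a gene with a fresh random value.
    return [random.random() if random.random() < rate else g for g in genome]

def listener_rating(genome):
    # Stand-in for a human clicking a rating on the project website.
    return random.random()

population = [random_genome() for _ in range(POPULATION_SIZE)]
for generation in range(500):
    # Keep the best-rated half, then breed it (with mutation) back up to size.
    ranked = sorted(population, key=listener_rating, reverse=True)
    parents = ranked[: POPULATION_SIZE // 2]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(POPULATION_SIZE)]
```

Swap the random-number stub for real crowd feedback and the population should drift, generation by generation, towards whatever the listeners find pleasant – which is essentially what those 500th-generation loops demonstrate.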

The Google PageRank algorithm and extinction analysis

Most of us are familiar with the concept of the ecosystem – the idea that all living things are interconnected with (and interdependent on) one another and the environment they live in. Can you think of something beyond nature that behaves like an ecosystem?

If you answered “the internet”, then give yourself a cookie – you had the same thought as a gang of biologists and ecologists who’ve just published a paper examining ways to use a computational algorithm – much like the one used by Google for calculating the search engine ranking of webpages – to determine which endangered species are most at risk, and which are most crucial to the survival of others.

In simple terms, PageRank rates the importance of websites and ranks them in a list relative to other websites. Sites with a higher ranking are those that are linked to more often by other sites – particularly by sites that are themselves highly ranked – and therefore have a greater number of connections.

Adapting this approach to ordering the web of connections within an ecosystem allows species to be ranked in importance by virtue of how many other species are linked to them.
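For the code-curious, here’s a minimal sketch of the power-iteration idea behind PageRank applied to a toy food web. The species, link directions and damping factor are invented for illustration and aren’t drawn from the paper itself:

```python
# Simplified PageRank by power iteration, applied to a toy food web.
# An arrow points from a consumer to something it eats, so species that
# many others ultimately depend on accumulate rank.

def pagerank(links, damping=0.85, iterations=50):
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        new_rank = {n: (1.0 - damping) / len(nodes) for n in nodes}
        for node, targets in links.items():
            if not targets:
                continue  # dangling node: its rank simply leaks in this sketch
            share = damping * rank[node] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

food_web = {
    "cod": ["pelagic_fish", "snow_crab"],
    "pelagic_fish": ["zooplankton"],
    "snow_crab": ["zooplankton"],
    "zooplankton": ["phytoplankton"],
    "phytoplankton": [],
}

for species, score in sorted(pagerank(food_web).items(), key=lambda kv: -kv[1]):
    print(f"{species}: {score:.3f}")
```

In this toy run the plankton species come out on top, which is the intuition the researchers are after: the algorithm surfaces the species whose loss would ripple furthest through the web.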

One example of such interdependence is the overfished Atlantic cod (Gadus morhua) and the smaller animals below it in the North Atlantic food chain. Because the predator has been depleted, species including smaller pelagic fish and northern snow crabs have boomed and are themselves depleting populations of phytoplankton and zooplankton.

It’s an innovative and useful tool, though other researchers are keen to underline its shortcomings:

Fraser Torpy, microbial ecologist and statistician from the University of Technology in Sydney, Australia, said the study is a “very useful adjunct to our ability to determine what makes a species important in terms of its position in its ecological community”.

However, he cautions that the method may only work for simple food webs. “Whilst [this is] an innovative and genuinely useful novel technique for endangered species assessment, it must be remembered that the true complexity of real ecosystems cannot be overestimated.”

With the caveat that I’m no ecologist (nor, for that matter, a search algorithm expert), it occurs to me that however limited a method of modelling ecosystems this algorithm may be, its demonstrated ability to scale to the web’s vast and uncounted pages means it probably has the potential to outperform any other analytical method currently available. And as extinction rates increase in response to climate change and human intervention, maintaining the ecosystem that supports our own civilisation demands every tool we can get our hands on, regardless of how far short of perfect they might fall. [image by Hello, I am Bruce]

To what degree will computational algorithms be able to assist our understanding of natural systems? Where will their usefulness end… or will we eventually be able to reduce every system to equations, no matter how complex, once we have the necessary processing and memory resources available?

Spam-trap Turing tests train smarter software

Email and comment spam is one of those constant low-grade annoyances that simply becomes part of the furniture if you spend a lot of time on the ’net, as are the CAPTCHA puzzles you have to solve to prove you’re a human. [image from Wikimedia Commons]

Signs are that they won’t be much use for much longer, though; a UK researcher has been using the ‘twisted letters’ type of CAPTCHA to train his visual recognition algorithms, while a chap in Palo Alto has a program that can correctly identify cats and dogs 83% of the time – which, let’s face it, is probably a better success rate than the average YouTube user can manage.
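The common thread is training a recogniser on human-labelled images and then pointing it back at the puzzles. As a purely illustrative toy (fake ‘images’, a nearest-centroid classifier, nothing to do with either researcher’s actual system), the loop looks roughly like this:

```python
import numpy as np

# Toy illustration of learning from human-labelled image data: each
# "image" is a fake 64-pixel vector whose statistics depend on its class,
# and we classify by nearest class centroid. Real systems use far richer
# features and models; this just shows the labelled-data feedback loop.
rng = np.random.default_rng(0)

def fake_image(label):
    # Stand-in for a CAPTCHA glyph or a cat/dog photo.
    return rng.normal(loc=label, scale=1.0, size=64)

# "Humans" supply the labels by solving puzzles.
train_X = np.array([fake_image(label) for label in (0, 1) * 200])
train_y = np.array([0, 1] * 200)

# One mean image ("centroid") per class.
centroids = np.array([train_X[train_y == c].mean(axis=0) for c in (0, 1)])

def classify(image):
    # Assign the class whose centroid is closest.
    return int(np.argmin(np.linalg.norm(centroids - image, axis=1)))

test = [(fake_image(label), label) for label in (0, 1) * 50]
accuracy = sum(classify(img) == label for img, label in test) / len(test)
print(f"toy accuracy: {accuracy:.0%}")
```

The point isn’t the classifier itself but the loop: every puzzle we solve quietly hands more labelled data to the machines that will eventually make the puzzles useless.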

Sadly, training algorithms against Turing test spam-traps is no more likely to produce a recognisably intelligent piece of software than the Loebner Artificial Intelligence Prize is. But maybe one day we’ll be able to combine all the pieces… if they don’t beat us to it and combine themselves, of course. 😉