Tag Archives: algorithms

The Anne Hathaway Guide to Stocks and Shares

Depending on how you look at it, this is either a harbinger of the emergent-model AI Singularity or a demonstration of the specious voodoo underpinnings of the automated financial markets… possibly both, if you’re a real pessimist [via Kottke.org].

A couple of weeks ago, Huffington Post blogger Dan Mirvish noted a funny trend: when Anne Hathaway was in the news, shares of Warren Buffett’s Berkshire Hathaway went up. He pointed to six dates going back to 2008 to show the correlation. Mirvish then suggested a mechanism to explain the trend: “automated, robotic trading program[s] are picking up the same chatter on the Internet about ‘Hathaway’ as the IMDb’s StarMeter, and they’re applying it to the stock market.”
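Mirvish’s proposed mechanism amounts to an almost embarrassingly simple rule. As a purely hypothetical sketch (the function name, keyword, threshold, and “buy” output are all invented for illustration, not anyone’s actual trading code), it might look like:

```python
def chatter_signal(headlines, keyword="hathaway", threshold=3):
    """Toy version of the rule Mirvish describes: count keyword
    mentions in today's headlines and emit a trade signal when
    chatter spikes. All parameters are invented for illustration."""
    mentions = sum(keyword in h.lower() for h in headlines)
    return "BUY BRK" if mentions >= threshold else "HOLD"

headlines = [
    "Anne Hathaway to host the Oscars",
    "Hathaway stars in new film",
    "Hathaway interview goes viral",
    "Unrelated market news",
]
print(chatter_signal(headlines))  # → BUY BRK
```

The point of the sketch is how little the rule knows: it can’t tell an actress from a holding company, which is exactly the confusion Mirvish is alleging.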

[…]

Companies are trying to “correlate everything against everything,” [Bates] explained, and if they find something that they think will work time and again, they’ll try it out. The interesting thing, though, is that it’s all statistics, removed from the real world. It’s not as if a hedge fund’s computers would spit out the trading strategy as a sentence: “When Hathaway news increases, buy Berkshire Hathaway.” In fact, traders won’t always know why their algorithms are doing what they’re doing. They just see that it’s found some correlation and it’s betting on Buffett’s company.

Now, generally the correlations are between some statistical indicator and a stock or industry. “Let’s say a new instrument comes to an exchange, you might suddenly notice that an instrument moves in conjunction with the insurance sector,” Bates posited. But it’s thought that some hedge funds are testing out strategies that mine news and social media datasets for other types of correlations.
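“Correlate everything against everything” can be sketched as nothing more than a brute-force scan. This is a toy illustration under my own assumptions (the series names and data are invented, and real systems work on vastly larger datasets), but it captures the shape of the approach Bates describes:

```python
import statistics

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def scan_for_correlations(target, candidates, cutoff=0.8):
    """Keep any series whose correlation with the target beats the
    cutoff -- with no regard for whether the link makes real-world
    sense. That indifference is exactly the point being made above."""
    hits = {}
    for name, series in candidates.items():
        r = pearson(target, series)
        if abs(r) >= cutoff:
            hits[name] = r
    return hits

new_instrument = [1, 2, 3, 4, 5]  # invented daily closes
candidates = {
    "insurance_sector_index": [2, 4, 6, 8, 10],  # moves in lockstep
    "hathaway_news_mentions": [5, 1, 4, 2, 3],   # just noise here
}
print(scan_for_correlations(new_instrument, candidates))
# flags only the insurance index
```

Note that the scan never asks *why* two series move together; whatever clears the cutoff gets traded on, which is how you end up betting on Buffett because of an Oscars host.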

Crazy, right? Well, irrational on one level, perhaps, but those trading algos are big (bad) business: remember the guy who was accused of stealing some algo code from Goldman Sachs? Eight-year stretch [via BoingBoing].

The Flash Crash and the trouble with transparency

A report at Ars Technica compares the computerised financial markets to a vast and infernally complex piece of multi-threaded software running on hardware that was never designed to cope with it (or vice versa), before telling us what I suspect most of us have already guessed: it’s a gigantic house of electronic cards. But ironically enough, part of the problem stems from the very transparency that the shift to electronic trading was supposed to bring with it:

Unlike the market of an earlier era, where humans executed trades by talking to (and shouting at) one another, the electronic communication networks (ECNs) that emerged in the late 70s logged every detail of every trade for later auditing. No more “he said, she said” when resolving a dispute or ferreting out fraud—just go to the tape. But then came the flood.

After a solid decade of moving almost all trading activity onto electronic systems (the NYSE floor is just there for show at this point), the market generates so much data that it’s nearly impossible for a mere governmental agency like the SEC to analyze. There are literally tens of thousands of quotes per second in hundreds of thousands of symbols across multiple electronic exchanges—the SEC would need the brain and computer power of the NSA to even begin to crunch this many numbers for a credible post mortem.
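A back-of-envelope calculation shows the scale involved. The per-quote record size and session length below are my own illustrative assumptions, not figures from the SEC or the Ars piece; only the “tens of thousands of quotes per second” order of magnitude comes from the quoted text:

```python
# Back-of-envelope on the data volume the article describes.
# All inputs are illustrative assumptions, not reported figures.
quotes_per_second = 50_000       # "tens of thousands of quotes per second"
trading_seconds = 6.5 * 3600     # one 6.5-hour US trading session
bytes_per_quote = 100            # rough size of one quote record

quotes_per_day = quotes_per_second * trading_seconds
gigabytes_per_day = quotes_per_day * bytes_per_quote / 1e9

print(f"{quotes_per_day:,.0f} quotes/day ≈ {gigabytes_per_day:.0f} GB/day")
# → 1,170,000,000 quotes/day ≈ 117 GB/day
```

Over a billion quote records a day, every day, under even these conservative assumptions; you can see why a post-mortem reconstruction is not a weekend job.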

[…]

The amount of data isn’t just a problem for regulators. Much of the report details how the systems of the market participants were themselves overwhelmed in real-time with the sudden surge of digital information. Processing began to slow, queues filled, backlogs developed, and machines were eventually pulled offline as the humans intervened and tried to sort out possible data integrity issues.

Beyond the challenges of reconstructing events, the traders also use some subset of the data firehose that the market’s machines throw off today as input to train the algorithms that will run the market tomorrow. So at some point, we’ll wake up and realize that it’s really ~~turtles~~ machines all the way down. Put that in your bong and smoke it, Keanu.

Ouch. And it gets worse, too; go read the whole thing. I think the best way to sum it up in layman’s terms is that we’ve turned the financial markets into something a little like one of those “game of life” software ecosystems… which would be quite a fascinating idea if it weren’t for the fact that unexpected interactions within that ecosystem can affect meatspace in a pretty serious way.

The more I learn about derivatives and futures and all that “clever” quant stuff, the more I think it’s a bunch of hubristic mathematical voodoo bullshit that we’d do well to get shot of sooner rather than later; the only people it really seems to benefit are the wankers who thought it all up in the first place.

The ever-more-invisible (and uncontrollably emergent) hand of the not-actually-free market

Via Chairman Bruce, the US government is getting (more) worried about automated trading in the wake of last week’s largely-unexplained and possibly emergent “stock tornado”; insert aphorism about horses and barn doors here, possibly modified to suggest that the farmer has been letting the horse run the stud for years.

Investment bankers are naturally keen to point out all the benefits of automated trading and “dark pools”:

Goldman Sachs Group Inc., the most profitable firm in Wall Street history, has shared memos with lawmakers and SEC officials that say computer-driven trading and an increase in stock transactions that occur off public exchanges has reduced consumer costs and brought more liquidity to markets.

Well, if we can’t trust Goldman Sachs, who can we trust? #scathingsarcasm

Glitch trading: narrativizing the actions of algorithms

Having mentioned the sensitivity of the markets with respect to the UK election results, it makes sense to point out Tim Maly’s recent post about automated trading programs and market movements.

The point is that 60% of stock trades are being done by machines, operating according to a set of algorithms and inputs, which (I’m pretty sure) do not include natural language parsing of the news.

Yet whenever the stock market makes a move, the financial press constructs post hoc narratives that explain what’s happened as a reaction to the news of the day, as if the news is what was motivating the trades. […]

This fascinates me. Most stock market trading is being done by machines, but the stories we tell ourselves are about humans responding to new information. You can’t interview an algorithm about why it made a certain choice. In the absence of that knowledge, it seems clear that the financial press just makes educated guesses and acts as if correlation is causation. It’s speculative fiction.
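Maly’s “correlation is not causation” point is easy to make concrete: two completely independent random walks (a crude stand-in for price series) will show a strong correlation purely by chance a remarkable amount of the time. This is a toy simulation with invented parameters, not a model of any real market:

```python
import random
import statistics

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def random_walk(n, rng):
    """A length-n random walk: cumulative sum of Gaussian steps."""
    level, walk = 0.0, []
    for _ in range(n):
        level += rng.gauss(0, 1)
        walk.append(level)
    return walk

rng = random.Random(42)  # fixed seed for reproducibility
trials = 1000
strong = sum(
    abs(pearson(random_walk(50, rng), random_walk(50, rng))) > 0.5
    for _ in range(trials)
)
print(f"{strong}/{trials} pairs of unrelated walks correlate past |0.5|")
```

A hefty fraction of totally unrelated pairs clear that bar, and for any one of them a sufficiently motivated journalist could write a perfectly plausible story about why.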

Discuss. 🙂

Another high-frequency trading software theft allegation

Remember the story about the guy who’d allegedly “stolen” (more accurately, downloaded a copy of) Goldman Sachs’ automated trading algorithm software? Well, now a young banker formerly employed by Société Générale is accused of a very similar crime.

There’ll probably be plenty more incidents like this as time goes by: copying code is a pretty easy thing to do (even if avoiding detection isn’t) and the temptation of an investment-bank-level income is surely enough to justify the attempt to someone with a big enough greed-on (which is presumably a given in the industry in question). If only some egalitarian copyleftist hacker type would pilfer those algos and post ’em to Wikileaks… the anger and frustration of investment bankers would be reward enough for me, had I the pertinent skills. Hell, I think I could probably even ride out the entire jail term with a shit-eating grin on my face.

Interesting side-note: stealing this sort of software is illegal, even though the software itself may be considered to provide an illegal advantage to its owners.