Tag Archives: artificial-intelligence

Spam: good food for growing AIs

If you’ve been groaning in terror at the seemingly ever-growing contents of your spam folder, here’s a silver lining to the internet’s perennial plague – the ever-increasing ability of spambots to solve CAPTCHA puzzles may end up advancing the cause of artificial intelligence research. You see, it turns out that crime actually does pay:

“[von Ahn, inventor of the reCAPTCHA test] has seen bounties as high as $500,000 offered for software to break it – enough to attract people with the skills to the task and five times more than the Loebner Grand Prize offers to the programmer who designs a computer that can truly pass the Turing test.

The demise of reCAPTCHA could, however, be beneficial.

It has users decode distorted text taken from historic books and newspapers that is beyond the ability of optical character recognition (OCR) software to digitise. Humans who fill in a reCAPTCHA are helping translate those books, and spam software could do the same.

“If [the spammers] are really able to write a programme to read distorted text, great – they have solved an AI problem,” says von Ahn. The criminal underworld has created a kind of X prize for OCR.

That bonus for artificial intelligence will come at no more than a short-term cost for security groups. They can simply switch to an alternative CAPTCHA system – based on images, for example – presenting the eager spamming community with a new AI problem to crack.

Indeed, it appears that the Google gang are doing exactly that:

“… the Google researchers were apparently able to come up with the new technique simply by looking into areas that computer scientists had identified as being problematic for computer-based solutions.

They apparently came up with image orientation. Humans can apparently properly orient a variety of images so that the vertical axis matches the real-world orientation of the photograph’s subject; computers can only handle a subset of these. […]

The basic idea behind their scheme is that any functional system will first have to eliminate any images that an automated system is likely to handle properly, as well as any that are difficult for humans to orient. So, for example, computers are good at recognizing things like faces in group shots, as well as horizons in landscape scenes, both of which provide sufficient information to orient the image. In other cases, the image doesn’t have enough information for either humans or computers to properly sort things out—the paper uses the example of a guitar on a featureless background, which could be oriented horizontally, vertically, or in the angled position from which it’s typically played.”
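The filtering step described in the quote can be sketched as a simple pipeline. This is a toy illustration only – the detector functions below are hypothetical stand-ins, not Google’s actual classifiers:

```python
# Toy sketch of the image-filtering idea from the orientation-CAPTCHA paper.
# The "detector" functions are hypothetical stand-ins, not a real pipeline.

def machine_orientable(image):
    """Stand-in: True if automated cues (faces, horizons) reveal orientation."""
    return image.get("has_face", False) or image.get("has_horizon", False)

def human_orientable(image):
    """Stand-in: True if a person could orient it (i.e. it isn't ambiguous,
    like a guitar on a featureless background)."""
    return not image.get("ambiguous", False)

def usable_for_captcha(images):
    """Keep only images that humans can orient but machines cannot."""
    return [img for img in images
            if human_orientable(img) and not machine_orientable(img)]

candidates = [
    {"name": "group shot", "has_face": True},
    {"name": "landscape", "has_horizon": True},
    {"name": "guitar on plain background", "ambiguous": True},
    {"name": "upside-down street scene"},
]
print([img["name"] for img in usable_for_captcha(candidates)])
```

Of the four candidates, only the street scene survives both filters, which is exactly the property that makes an image a useful puzzle.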

I wonder if there’ll ever be an end to this particular arms race? And, if there is, will it be heralded by the arrival of the Canned Ham Singularity? [image by freezelight]

The silicon brain

Most attempts to simulate the function of organic brains using computers have been software simulations – models built with code, if you like. An international team of computer scientists have been trying the other approach, however: building computer hardware that mimics the dense interconnection of brain cells.

The hope is that recreating the structure of the brain in computer form may help to further our understanding of how to develop massively parallel, powerful new computers, says Meier.

This is not the first time someone has tried to recreate the workings of the brain. One effort called the Blue Brain project, run by Henry Markram at the Ecole Polytechnique Fédérale de Lausanne, in Switzerland, has been using vast databases of biological data recorded by neurologists to create a hugely complex and realistic simulation of the brain on an IBM supercomputer.

[snip]

The advantage of this hardwired approach, as opposed to a simulation, Karlheinz continues, is that it allows researchers to recreate the brain-like structure in a way that is truly parallel. Getting simulations to run in real time requires huge amounts of computing power. Plus, physical models are able to run much faster and are more scalable. In fact, the current prototype can operate about 100,000 times faster than a real human brain. “We can simulate a day in a second,” says Karlheinz.
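The quoted speedup figure checks out as back-of-the-envelope arithmetic – at 100,000× real time, a simulated day takes well under a second of wall-clock time:

```python
# Back-of-the-envelope check of the quoted speedup figure.
SPEEDUP = 100_000                # "about 100,000 times faster than a real human brain"
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400 seconds in a day

wall_clock = SECONDS_PER_DAY / SPEEDUP
print(f"One simulated day takes about {wall_clock:.2f} s of wall-clock time")
# ≈ 0.86 s, consistent with "we can simulate a day in a second"
```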

A day in a second, huh? That’s straight out of your favourite Singularity sf story, right there. [image by neurollero]

Transhumanists talk a great deal about the inevitability of human-equivalent artificial intelligence in the very near future, and it’s easy to dismiss them as dreamers until you read an article like this. I’m not saying that silicon brainware means the Singularity is inevitable, or even likely… but I think I’ll start learning to speak in machine code. Y’know, just in case.

Wolfram Alpha: Answering the questions that matter…

Polymath Stephen Wolfram (famed for Mathematica and his book A New Kind of Science) has been developing a knowledge engine that uses a natural language interface. It’s a bit like Google except:

Where Google is a system for FINDING things that we as a civilization collectively publish, Wolfram Alpha is for COMPUTING answers to questions about what we as a civilization collectively know.

It’s the next step in the distribution of knowledge and intelligence around the world — a new leap in the intelligence of our collective “Global Brain.” And like any big next-step, Wolfram Alpha works in a new way — it computes answers instead of just looking them up.

So basically you type in a question in normal language and it should provide an answer, rather than links to webpages that might contain the answer.

Anyway, Wolfram Alpha will be launching in May.

[via Charles Stross, ComputerWorld, Physorg etc][image from e-magic on flickr]

Bruce Sterling sideswipes AI evangelism

Bruce Sterling’s keynote speech at the Webstock conference in New Zealand last month contained the usual high concentration of non-fic eyeball kicks, and is well worth a read if you’re at all interested in the culture of the web, modern economics and the near future.

As usual, there are loads of provocative little asides nestled in the narrative, and I was particularly taken by this backhander to the face of artificial intelligence advocates:

I really think it’s the original sin of geekdom, a kind of geek thought-crime, to think that just because you yourself can think algorithmically, and impose some of that on a machine, that this is “intelligence.” That is not intelligence. That is rules-based machine behavior. It’s code being executed. It’s a powerful thing, it’s a beautiful thing, but to call that “intelligence” is dehumanizing. You should stop that. It does not make you look high-tech, advanced, and cool. It makes you look delusionary.

There’s something sad and pathetic about it, like a lonely old woman whose only friends are her cats. “I had to leave my 14 million dollars to Fluffy because he loves me more than all those poor kids down at the hospital.”

This stuff we call “collective intelligence” has tremendous potential, but it’s not our friend — any more than the invisible hand of the narcotics market is our friend.

Zing! I think we can be certain that Sterling doesn’t subscribe to any of the three schools of Singularitarianism.

Quantum cognition: spooky action in word recall

A fascinating article here at Physorg on how human beings remember and recall words. Researchers at Queensland University of Technology and the University of South Florida compare two ways of thinking about connections between similar words: 1) networks of similar words, and 2) something analogous to spooky action at a distance:

…the researchers suggest that the probability of a word being activated in memory lies somewhere between Spreading Activation (in which words are individually recalled based on individually calculated conceptual distance) and Spooky Activation at a Distance (in which the cue word simultaneously activates the entire associative structure).

Most likely, Spreading Activation underestimates the strength of activation, while Spooky Activation at a Distance overestimates the strength of activation.

The researchers are using quantum physics as a preexisting abstract framework for their mathematical models of how human beings remember:

In the new model, associative word recall probability depends on how strongly connected the associated words are to each other.

For instance, “Earth” and “space” are entangled in the context of “planet,” but “Earth” and “gas giant” may not be entangled (though “Jupiter” and “gas giant” may be).

Words that are entangled with many other words have a greater probability of being recalled, while words that are entangled with few or no other words have a smaller recall probability.
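The degree-of-connectedness idea in that last line can be illustrated with a toy model – emphatically not the researchers’ actual quantum formalism, just a sketch where a word’s recall probability grows with how many other words it is “entangled” with (the association sets below are invented for illustration):

```python
# Toy illustration (NOT the paper's quantum model): recall probability
# proportional to how many other words a word is "entangled" with.
# The association graph is invented for illustration.
associations = {
    "earth":     {"space", "planet", "moon"},
    "space":     {"earth", "planet"},
    "planet":    {"earth", "space", "jupiter"},
    "jupiter":   {"planet", "gas giant"},
    "gas giant": {"jupiter"},
}

total_links = sum(len(links) for links in associations.values())

def recall_probability(word):
    """More associates -> higher chance of being recalled."""
    return len(associations[word]) / total_links

for word in associations:
    print(f"{word}: {recall_probability(word):.2f}")
```

In this toy graph, “earth” and “planet” (three associates each) come out well ahead of “gas giant” (one associate), matching the qualitative claim in the quote.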

At this stage this is theoretical, but the long-term consideration is for the development of AI and similar technologies:

As our information environment becomes more complex, we will need technology that can draw context-sensitive associations like the ones we would draw, but increasingly don’t as we lack the cognitive resources to do so.

Therefore, the ‘meanings’ processed by such technology should be motivated from a socio-cognitive perspective.” This kind of research is an example of an emerging field called “quantum cognition,” the aim of which is to use quantum theory to develop radically new models of a variety of cognitive phenomena ranging from human memory to decision making.

Plenty of beef for the science-fictional burger bar.

[image from zeroinfluencer on flickr]