Tag Archives: artificial-intelligence

Posthumous cover-versions by famous musicians?

Dovetailing neatly with that piece about the Emily Howell program that composes pieces in the style of famous composers as well as its own, here’s another software company trying to develop a system that will analyse a musician’s playing style from their recorded output, and then reproduce other songs in the style in which they might have played them.

Or, to put it another way: they want to let you hear how Jimi Hendrix would have jammed out any national anthem you care to name. They’re not quite there yet, though:

As things stand now, Zenph’s technology looks at actual old recordings to find out how a performer played a certain song, and is not capable of figuring out how a musician would play a new part. “We hope — but we can’t demonstrate today — that after we’ve done several re-performances of a given artist, we will understand enough about that individual’s musical style to be able to suggest how that style might manifest itself in the performance of a work that the artist never actually performed,” said Frey, clarifying that today Zenph’s software only reproduces performances, it doesn’t create them.

That faint hint of white noise you can hear? That’s the sound of thousands of copyright lawyers rubbing their hands together in anticipation.
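Lawyers aside, Zenph hasn’t said how its extraction actually works, but the shape of the problem is easy to sketch: a performance is a score plus a performer’s deviations in timing and dynamics. Here’s a deliberately naive Python sketch (every name and the crude “style profile” here are my invention, not Zenph’s) of the two halves Frey distinguishes: reproducing a captured performance is mostly bookkeeping, while generating a new one means knowing which deviations to impose on notes the artist never played.

```python
from dataclasses import dataclass
from statistics import mean, stdev
import random

@dataclass
class Note:
    pitch: int      # MIDI note number
    onset: float    # seconds from the start of the piece
    velocity: int   # 1-127: how hard the note is struck

def style_profile(score, performance):
    """Summarise how a performer deviates from the written score."""
    offsets = [p.onset - s.onset for s, p in zip(score, performance)]
    return {
        "timing_mean": mean(offsets),                 # rushing vs. dragging
        "timing_sd": stdev(offsets),                  # looseness of the pulse
        "velocity_mean": mean(p.velocity for p in performance),
    }

def apply_style(new_score, profile):
    """Naively impose one performer's statistics on a score they never played."""
    return [
        Note(
            pitch=n.pitch,
            onset=n.onset + random.gauss(profile["timing_mean"], profile["timing_sd"]),
            velocity=round(profile["velocity_mean"]),
        )
        for n in new_score
    ]

# A two-note "score" and how our hypothetical pianist actually played it:
score  = [Note(60, 0.0, 80), Note(64, 1.0, 80)]
played = [Note(60, 0.02, 95), Note(64, 1.05, 70)]
print(apply_style([Note(67, 0.0, 80), Note(72, 1.0, 80)],
                  style_profile(score, played)))
```

Summary statistics like these reproduce a performance’s surface; what they don’t capture is precisely the part Zenph admits it can’t demonstrate yet.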

Artificial Flight – Dresden Codak spoofs AI skepticism

Dresden Codak is one of my favourite webcomics; its creator, Aaron Diaz, is a staunch transhumanist, but rather than soapboxing directly he embeds his philosophical interests into his creative work. This occasionally spills over into brief satirical ripostes against anti-transhumanist naysayers; long-term followers may remember 2007’s “Enough is Enough – A Thinking Ape’s Critique of Trans-Simianism”, which (justifiably) did the rounds of the transhumanist, science fictional and geek-affiliated blogo-wotsit at the time.

Well, here’s another one, “Artificial Flight and Other Myths – a reasoned examination of A.F. by top birds”, which again takes the rhetorical gambit of reframing the AI argument outside of the human context:

We can start with a loose definition of flight.  While no two bird scientists or philosophers can agree on the specifics, there is still a common, intuitive understanding of what true flight is: powered, feathered locomotion through the air through the use of flapping wings.  While other flight-like phenomena exist in nature (via bats and insects), no bird with even a reasonable education would consider these creatures true fliers, as they lack one or more key elements. And, while some birds are unfortunately born handicapped (penguins, ostriches, etc.), they still possess the (albeit undeveloped) gene for flight, and it is indeed flight that defines the modern bird.

This is flight in the natural world, the product of millions of years of evolution, and not a phenomenon easily replicated.  Current A.F. is limited to unpowered gliding; a technical marvel, but nowhere near the sophistication of a bird.  Gliding simplifies our lives, and no bird (including myself) would discourage advancing this field, but it is a far cry from synthesizing the millions of cells within the wing alone to achieve Strong A.F. Strong A.F., as it is defined by researchers, is any artificial flier that is capable of passing the Tern Test (developed by A.F. pioneer Alan Tern), which involves convincing an average bird that the artificial flier is in fact a flying bird.

Diaz highlights the problem of anthropomorphic thinking as applied to definitions of intelligence, a common refrain among artificial intelligence advocates. Serendipitously enough, yesterday also saw Michael Anissimov point to a Singularity Institute document titled “Beyond Anthropomorphism”, which may be of interest if you want the argument fleshed out for you:

Anthropomorphic (“human-shaped”) thinking is the curse of futurists.  One of the continuing themes running through Creating Friendly AI is the attempt to track down specific features of human thought that are solely the property of humans rather than minds in general, especially if these features have, historically, been mistakenly attributed to AIs.

Anthropomorphic thinking is not just the result of context-insensitive generalization.  Anthropomorphism is the result of certain automatic assumptions that humans are evolved to make when dealing with other minds.  These built-in instincts will only produce accurate results for human minds; but since humans were the only intelligent beings present in the ancestral environment, our instincts sadly have no built-in delimiters.

Many personal philosophies, having been constructed in the presence of uniquely human instincts and emotions, reinforce the built-in brainware with conscious reasoning.  This sometimes leads to difficulty in reasoning about AIs; someone who believes that romantic love is the meaning of life will immediately come up with all sorts of reasons why all AIs will necessarily exhibit romantic love as well.

It strikes me that the yes-or-no question of whether strong general artificial intelligence is possible is one of a very special type, namely a question which can only be definitively answered by achieving the “yes” result. (I’m pretty sure there’s a distinct rhetorical term for that sort of question, but my minimal bootstrapped philosophy education fails to provide it to me at the moment; feel free to help out in the comments.) In other words, the only way we’ll truly know whether we can build a GAI is by building it; until then, it’s all just dialogue.

Attention, futurist gamblers: long odds on Artificial General Intelligence

Pop-transhumanist organ H+ Magazine assigned a handful of writers to quiz AI experts at last year’s Artificial General Intelligence Conference, in order to discover how long they expect we’ll have to wait before we achieve human-equivalent intelligence in machines, what sort of level AGI will peak out at, and what AGI systems will look and/or act like, should they ever come into being.

It’s not a huge sample, to be honest – 21 respondents, of whom all but four are actively engaged in AI-related research. But then AGI isn’t a vastly populous field of endeavour, and who better to ask about its future than the people in the trenches?

The diagram below shows a plot of their estimated arrival dates for a selection of AGI milestones:

AGI milestone estimates

The gap in the middle is interesting; it implies that the basic split is between those who see AGI happening in the fairly near future, and those who see it never happening at all. Pop on over to the article for more analysis.

The supplementary questions are more interesting, at least to me, because they involve sf-style speculation. For instance:

… we focused on the “Turing test” milestone specifically, and we asked the experts to think about three possible scenarios for the development of human-level AGI: if the first AGI that can pass the Turing test is created by an open source project, the United States military, or a private company focused on commercial profit. For each of these three scenarios, we asked them to estimate the probability of a negative-to-humanity outcome if an AGI passes the Turing test. Here the opinions diverged wildly. Four experts estimated a greater than 60% chance of a negative outcome, regardless of the development scenario. Only four experts gave the same estimate for all three development scenarios; the rest of the experts reported different estimates of which development scenarios were more likely to bring a negative outcome. Several experts were more concerned about the risk from AGI itself, whereas others were more concerned that humans who controlled it could misuse AGI.

If you follow the transhumanist/AGI blogosphere at all, you’ll know that the friendly/unfriendly debate is one of the more persistent bones of contention; see Michael Anissimov’s recent post for some of the common arguments against the likelihood of friendly behaviour from superhuman AGIs, for instance. But even if we write off that omega point and consider less drastic achievements, AGI could be quite the grenade in the punchbowl:

Several experts noted potential impacts of AGI other than the catastrophic. One predicted “in thirty years, it is likely that virtually all the intellectual work that is done by trained human beings such as doctors, lawyers, scientists, or programmers, can be done by computers for pennies an hour. It is also likely that with AGI the cost of capable robots will drop, drastically decreasing the value of physical labor. Thus, AGI is likely to eliminate almost all of today’s decently paying jobs.” This would be disruptive, but not necessarily bad. Another expert thought that, “societies could accept and promote the idea that AGI is mankind’s greatest invention, providing great wealth, great health, and early access to a long and pleasant retirement for everyone.” Indeed, the experts’ comments suggested that the potential for this sort of positive outcome is a core motivator for much AGI research.

No surprise to see a positive (almost utopian) gloss on such predictions, given their sources; scientists need that optimism to propel them through the tedium of research… which means it’s down to the rest of us to think of the more mundane hazards and cultural impacts of AGI, should it ever arrive.

So here’s a starter for you: one thing that doesn’t crop up at all in that article is any discussion of AGIs as cult figureheads or full-blown religious leaders (by their own intent or otherwise). Given the fannish/cultish behaviour that software and hardware can provoke (Apple/Linux/AR evangelists, I’m looking at you), I’d say the social impact of even a relatively dim AGI is going to be a force to be reckoned with… and it comes with a built-in funding model, too.

Terminator-esque dystopias aside, how do you think Artificial General Intelligence will change the world, if at all?

The sentient Love Machine: Second Life creator planning metaverse Singularity?

Regular readers may remember me mentioning LoveMachine Inc., the new project of Second Life creator Philip Rosedale, back in November of last year. At that point, all the signs pointed toward LoveMachine being a start-up that intended to develop a reputational currency system for virtual worlds… and for all we know, it probably still is.

But thanks to SL uber-journalist Wagner James Au, we hear that Rosedale and company have added another project to the company roster. Its title? “The Brain: Can 10,000 computers become a person?”

Rosedale has long been interested in artificial intelligence, and the metaverse would seem like the ideal platform for that sort of research. Rosedale is playing his cards close to his chest at this point (and the cynic in me suspects that there’s an element of publicity-seeking involved, which I’ve gone and indulged by posting about it), but given LoveMachine’s open-frame “pick a task and join the team” approach to recruitment and the number of floating tech geniuses in San Francisco, I’d guess he’s no less likely to make progress than anyone else in the same field… provided that’s where the company’s focus stays put, of course.

And there’s no guarantee of that, either. LoveMachine’s remit is somewhat peripatetic, as is its culture, with Rosedale and chums setting up shop for the day anywhere they can find comfy seats and free wireless internet. Even if the dreams of metaverse AI come to nothing, LoveMachine may end up as a blueprint for a new sort of company that, as Au points out, sounds like something out of William Gibson’s early novels: a loose, ad-hoc collective of tech geeks and console cowboys, working wherever they can find a flat surface and some bandwidth, building new things in imaginary spaces.

Software that learns to recognise faces and voices like a child

A computer scientist at the University of Pennsylvania has decided to mimic the way children learn to recognise faces and voices in order to speed up the artificial learning curve of intelligent systems:

Using novel learning algorithms that combine audio, video, and text streams, Taskar and his research team are teaching computers to recognize faces and voices in videos. Their system recognizes when someone in the video or audio mentions a name, whether he or she is talking about himself or herself, or whether he or she is talking about someone in the third person. It then maps that correspondence between names and faces and names and voices.

“An intelligent system needs to understand more than just visual input, and more than just language input or audio or speech. It needs to integrate everything in order to really make any progress,” Taskar says.

The information Taskar’s team feeds into the system is free training data harvested from the Internet. Attempts to teach computers visual recognition in the pre-Internet age were hampered in large part by a lack of training content. Today, Taskar says, the Internet provides a “massive digitization of knowledge.” People post videos, comments, blogs, music, and critiques about their favorite things and interests.

Hah! And they said YouTube would never do any real good! Taskar’s computer seems destined for a life of increasing frustration with irresolvable plot lines, though, as they’re training it by showing it episodes of Lost:

As Taskar’s team feeds more data about Lost into the computer—such as video clips, scripts, or blogs—the system improves at identifying people in the video. If, for example, a clip contains footage of characters Kate and Anna Lucia, after being taught, the computer will recognize their faces.

“The algorithm is learning this from what people say, or from screenplays as well,” Taskar adds. “The screenplay doesn’t tell you who is who, but it tells you there’s a scene with [two characters] talking to each other.”
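Taskar’s actual models are doubtless richer than this, but the screenplay trick itself is simple enough to illustrate. Here’s a toy Python sketch (the scene data and the greedy matcher are mine, purely for illustration, not the team’s published method) of how “who shares a scene with whom” can pin names onto anonymous face clusters once you have enough scenes:

```python
from collections import Counter
from itertools import product

# Each scene pairs the character names the screenplay places in it with the
# anonymous face-track clusters detected in the matching stretch of video.
scenes = [
    ({"Kate", "Anna Lucia"}, {"face_1", "face_2"}),
    ({"Kate", "Jack"},       {"face_1", "face_3"}),
    ({"Anna Lucia"},         {"face_2"}),
    ({"Jack"},               {"face_3"}),
]

# Count how often each (name, face-cluster) pair shares a scene.
cooccur = Counter()
for names, faces in scenes:
    for pair in product(names, faces):
        cooccur[pair] += 1

# Greedy matching: lock in the strongest remaining pairing, then retire
# that name and that face cluster from further consideration.
assignment, used_names, used_faces = {}, set(), set()
for (name, face), _count in cooccur.most_common():
    if name not in used_names and face not in used_faces:
        assignment[name] = face
        used_names.add(name)
        used_faces.add(face)

print(assignment)  # Kate -> face_1, Anna Lucia -> face_2, Jack -> face_3
```

A real system would swap the greedy counter for a probabilistic model, and fold in the first-person/third-person speech cues mentioned above; but the underlying signal, co-occurrence, is the same.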

Taskar says the information the research has produced can be helpful in many ways, particularly in searching videos for content. Currently, if a father is searching for a photo of his daughter playing with the family dog in his gigabytes of photos and videos on his hard drive, unless the photo is tagged “daughter playing with dog,” chances are he isn’t going to be able to find it.

Well, that’s your consumer-level pitch, sure, but the system will be too large and ungainly (and expensive) for Joe Average for a long time. Taskar should probably talk to the UK government… that panoply of CCTV cameras keeps growing, and it costs big money to hire people to watch their output. And what could possibly go wrong with putting an automated recognition system in charge of crime prevention? [image by bixentro]