If a piece of very well-written computer software can produce classical music compositions that the experts can’t distinguish from ones created by people, does that mean that music is essentially meaningless? [via MetaFilter]
And doesn’t it also mean that the program in question essentially passed a version of the Turing Test?
7 thoughts on “Emily Howell has written a song for you”
Computer-generated music is no more essentially meaningless than human-generated art. A computer and a human could both slap paint (or notes) on a page at complete random. Similarly, a well-programmed computer would express some intent (probably its programmer’s) in the art it produced, just as a well-trained artist would express their intent through their honed skills. And since meaning is in the eye of the beholder anyway, and we’ve got 6+ billion beholders available, I think “inherently” meaningless is likely impossible.
And no, artistic expression (static communication) is not a Turing test. A Turing test requires full-duplex communication, to allow the human to test the edge cases of the computer’s intelligence (instead of just observing a perfectly programmed charade). Not that the Turing test is a well-formed test, anyway: many people treat other people as non-sapient entities, even with all the physical evidence (same species, using language, etc.) to the contrary. A test based on that same “sense of humanity” is as likely to fail because of the underlying testing mechanism as from any lack in the thing being tested.
Maybe the use of the word “meaning” is the problem. It seems to be based on the idea that music can be likened to a message: but are we sure that this is the case? We find the best music moving; we don’t get information from it. So we should maybe talk about the “effect” of music, not its meaning. Looking at it this way, knowing *how* such music has been created becomes less important.
Actually, there are experiences (say, looking at a mountain landscape) that, to many of us, elicit a sense of beauty similar to that created by music, but do not depend on human creativity. Does this mean that such experiences are “meaningless” too?
When I listen to my favourite music, I feel as if something is “clicking in place”: the music seems to perfectly fit something that was already in my mind (brain?). This, for me, supports the idea that music does not contain beauty (or “meaning”) in itself, but instead is a tailored set of perceptions designed to stimulate something that our brain is capable of. And, by the way… this does not diminish the beauty of music!
Seems to me this is a panic for those people who don’t understand that the brain is a machine.
Ehh. I listened to a sample of “Emily Howell” and it didn’t sound like anything other than endless variations on a limited repertoire of style and theme. It took Cope decades to write a program that didn’t do anything except mimic what he’d taught it to do, and it had to have everything defined for it — it even had to have rules to tell it when to break rules. There *is* a difference between inspiration and mere assembly.
The real Turing Test would be something that can pass the Turing Test when you *know* it’s a Turing Test.
Still, I am reminded of an amusing line from George F. Walker’s play THE ART OF WAR; protagonist Tyrone Power, world-weary journalist, says to Canada’s “Minister of Culture”, “What’s all the fuss? There are only twelve notes. Musicians just move them around.”
Next to come: 100%-synthesized singing that is done as well as, or better than, Luciano Pavarotti, Barbra Streisand, Bing Crosby, etc. Perhaps human-generated entertainment may not last forever?
I think this mostly just showcases the talent of whoever wrote the program. I don’t see how this is much more than a very sophisticated version of the aleatoric pieces John Cage was writing 50 years ago. I think the credit still lies with the human creator in this case—it’s not clear that the computer program is doing anything “creative,” even though Cope may not know what kind of score it will spit out.
Also, to Stephen J., it sounds like the listeners /did/ know it was a Turing test and still couldn’t pick out the difference.
To elaborate, I think a more novel version of a music-creating robot would be able to develop its own “tastes,” rather than relying on some human input. Upon listening to however many hours of random music (like a growing child), it would begin to spontaneously pick up on ideas or tropes within the music, and furthermore, distinguish which tropes were the most interesting (at least to its “ears”). It would then have to find ways to apply these tropes in novel ways.
I’m just not seeing that here.