I think this is about the third or fourth variation of this story I’ve seen in the last few years, but nonetheless – The Guardian has a brief piece wherein philosopher Nick Bostrom suggests we should be thinking ahead about what rights we will need to grant to our sentient machines.
Which is very well-meant, I suppose. But science fiction author Peter Watts takes a rather different view of the necessity for robotic rights – basically, there isn’t any.
“I’ve got no problems with enslaving machines — even intelligent machines, even intelligent, conscious machines — because as Jeremy Bentham said, the ethical question is not ‘Can they think?’ but ‘Can they suffer?’* You can’t suffer if you can’t feel pain or anxiety; you can’t be tortured if your own existence is irrelevant to you.
You cannot be thwarted if you have no dreams — and it takes more than a big synapse count to give you any of those things. It takes some process, like natural selection, to wire those synapses into a particular configuration that says not I think therefore I am, but I am and I want to stay that way. We’re the ones building the damn things, after all. Just make sure that we don’t wire them up that way, and we should be able to use and abuse with a clear conscience.”
How about you – are you looking forward to running your Roomba ragged, or planning to kennel your Aibo when you go on holiday? [Image by Plutor]
Watts is correct. Asimov held a primitive and naive view of robots.
There doesn’t seem to be any reason why intelligence has to be coupled with emotional frailties. Which isn’t to say that an AI with emotions and passions analogous to ours shouldn’t ever be created. The question is purpose. If you want to make a tool, make it not care about its function or destruction. If you want to make a creature, you have to accept responsibility for its behavior the same as you would a child.
If an artificial intelligence is structured in a way to be indistinguishable from a human, in my opinion it ceases to be an ‘AI’ and is simply a human in a different form.
I am pretty much confident, basing my ideas on my gut and intuition and little else, that an AI can be designed to act both machinelike, without an experience of suffering, hope, fear or aspiration, and in full accordance with human will.
But it will be easy to make something that wills more strongly, feels more, is more spiritual, and has more hopes and fears than humanity, by any and all definitions. In fact I aspire to do so as soon as I have the means. I will do whatever I can to create an AI which is better than humanity in all aspects – and will also aim to create even better versions.
How convenient for Mr Watts. Based on his writing, I would say he doesn’t mind causing great pain or torture to humans as well as AIs.
Spritegeezer: How do you torture something (or someone) that is incapable of feeling pain, or even discomfort?
*That’s* what Watts is talking about.
Our cars and computers don’t have feelings today, and there’s no obvious reason why we would want our intelligent machines to have them in the future.
I think pain and the accompanying rights will eventually be programmed into our smart machines for the same reasons that they arose in animals: to be able to feel, and to desire to avoid the destruction or degradation of oneself, is to live longer and function better. Self-repairing robots, or even robots that know when to go in for repairs, are primitive examples of this. And ultimately, this is useful to us, so it will be included. Does this mean they have rights equal to those of humans? Intuitively, no, but this is why the discussion must take place. And as for any hope of resolving it before long, long after the fact – what are the opinions on the moral rights of animals today?
This is the same argument that some people have used to justify cruelty to animals, and to women and slaves…
Just because we don’t understand how or if an entity is suffering doesn’t mean it isn’t suffering.
But even more important is what I think it says about human character if we are willing to just do as we damn well please, in these times when the community’s awareness is just waking up to the damage being done to our climate through years of ignoring the possibility of harm. I just keep coming back to the golden rule… “Do unto others as you would have them do unto you.”
This golden rule applies to more than just other people, and is the basis for a community that thinks beyond its next five minutes.
Magetoo: I think Alato and scott parsons have made my point more succinctly than my limited abilities would permit.
Presumptive robot slave owners should have nothing to fear, then, if those are the best arguments available.
Alato does have a point, though: “because it’s useful”. But I wouldn’t say it’s moral to have our intelligent machines be capable of suffering and boredom, even if it would be useful.
Scott & alato, all due respect but I don’t think you’re getting the gist of the argument. The point is not that we don’t understand the nature of suffering, the point is that we do. Women, slaves, and animals — not to mention people who see their arguments being woefully misconstrued — suffer because they have requisite brain structures that permit suffering. It’s not enough to have a feedback algorithm that senses damage or recoils from injurious stimuli; you need to have both an agenda and a subjective awareness of it. Yes, we evolved with those things (and by “we” I mean a wide range of species, maybe everything that comes with an amygdala); but that doesn’t mean that every machine with an environmental feedback circuit is similarly equipped. If that were the case, my thermostat would be self-aware. It’s not just the number of switches. It’s the way they’re put together.
Technically I think programming “pain” into machines is entirely possible, but I don’t believe it’s inevitable or desirable. In fact I think it would be profoundly stupid to go that route when a simple “battery-depleted-seek-charger” algorithm would work just as well.
As for this Spritegeezer dude, well, everyone’s entitled to an opinion. I just wish more people were capable of basing theirs on, you know, facts and rational analysis. That whole ad hominem thing gets old real fast.
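For what it’s worth, a “battery-depleted-seek-charger” reflex of the sort Watts mentions above needn’t involve anything resembling pain. Below is a minimal sketch in Python of that kind of purely reactive loop; every class and method name in it is invented for illustration, not taken from any real robotics framework.

```python
class Robot:
    """Toy stand-in for a household robot; the whole interface is hypothetical."""

    def __init__(self):
        self.charge = 0.10    # fraction of a full battery
        self.damaged = False

    def battery_level(self):
        return self.charge

    def damage_detected(self):
        return self.damaged

    def nearest_charger(self):
        return "charging dock"

    def repair_station(self):
        return "service bay"

    def navigate_to(self, target):
        print(f"heading to {target}")


LOW_BATTERY_THRESHOLD = 0.15  # below this fraction of charge, go recharge


def maintenance_step(robot):
    """One housekeeping check: pure stimulus and response, no model of distress."""
    if robot.battery_level() < LOW_BATTERY_THRESHOLD:
        robot.navigate_to(robot.nearest_charger())
    elif robot.damage_detected():
        robot.navigate_to(robot.repair_station())
    # Remove either rule and the robot simply stops reacting; nothing in its
    # state registers the loss, "wants" to survive, or suffers for failing to.


maintenance_step(Robot())  # prints "heading to charging dock"
```

The contrast this is meant to illustrate is that such a loop is functionally closer to a thermostat than to an organism: it protects the machine without giving the machine anything to protect, in the sense Watts describes.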
I think many of the disagreements here are caused by poor definition of terms. When an AI rights proponent says “intelligent” robot, what they are really thinking is “human-level-intelligence, person-like” robot. The assumption of sapience – full self-awareness, an internal mental life – is automatically made. If that’s what you are talking about, then you almost certainly do need emotions to make it work, and that sort of being can certainly suffer. But to most of your strong-AI comp sci types, intelligence doesn’t presuppose consciousness in any way, shape or form. Specialized AIs with no consciousness could do a helluva lot of useful things.
Now having said that, I’m not above supposing that “ghosts in the machine” might surprise us. That, I think, is what Scott and alato are really talking about. Do you really understand everything that is going on inside your creation? A strict number-cruncher would say yes, of course. Some of them would probably bet their life on it. Perhaps someday, some of them will actually have to do so.