Tag Archives: morality

But is it art? Modern Warfare 2, computer games and morality

Serendipity, yet again… Jonathan’s latest Blasphemous Geometries column on the moral dimension of modern computer game mechanics arrived in my inbox last weekend, and hence (unless he has contacts in the industry of which I am unaware) he’d have had no idea that this week would see a firestorm of controversy over a certain level in the newly released Modern Warfare 2 game. The level in question requires your character to infiltrate a terrorist group and, so as to maintain your cover story, participate in the shooting of innocent civilians; MetaFilter has a good round-up of reviews and opinion pieces on the game, and the comment thread is full of interesting responses from both sides of the fence.

The predominant question seems to be whether this sort of gameplay can be considered “right” – yet another iteration of the “do computer games cause/encourage violence” debate, which itself rolled on from a similar public angst around the proliferation of graphic horror movies in the late eighties. There have been numerous surveys and research projects designed to accumulate evidence around this idea, but to the best of my knowledge there’s been nothing truly conclusive either way – though my instinct as a formerly rabid gamer (I don’t have the spare time to play often any more) suggests to me that computer games do no more to encourage violence than Saturday morning kids’ cartoons.

Indeed, it occurs to me that the rightness or wrongness of the “No Russian” level of Modern Warfare 2 – the hand-wringing over whether such a thing should be allowed to go on sale – is a false dilemma; the more pertinent question is what it says about the world in which it exists. Plenty of commentators are branding it tasteless, and I have a certain sympathy with that viewpoint – but there are a lot of things out there that I consider tasteless, and I don’t believe that things should be made to go away just because I don’t approve of them. Censorship should start (and end) with your own finger on the off button, IMHO.

But thinking about the plot of the level (and of Modern Warfare 2 as a whole, from what I’ve been able to glean from reading reviews and opinion pieces about it) from a writer’s (and reader’s) point of view, it actually makes a lot of sense in the context of modern counter-terrorist narratives, with the result that it puts the player into a morally questionable situation that reflects the world beyond the game… though exactly how accurate that reflection may be is open to debate. Perhaps there is a valid argument to say that games like this might put ideas in people’s heads, and end up glorifying what they’re supposed to demonise (if there’s any real difference between those two words beyond one’s personal moral code), but I suspect that the sort of person who’d be encouraged to acts of random violence against innocent civilians by media of any sort is already psychologically predisposed to such an action. And if computer games are a nefarious way of seducing the impressionable with the power-trip of consequence-free violence, what then should we think about the United States Army, which has used the taxpayer-funded computer wargame America’s Army as a recruiting tool since 2002? Is it OK to encourage violence so long as it’s against the right targets?

[Related bonus item – did you see the article at Wired about the Libertarian-penned “2011: Obama’s Coup Fails” web-based strategy game? If nothing else, that demonstrates amply that when people encode a political or ideological subtext into a game to the detriment of plausibility, the end results are invariably laughable. And that’s not a partisan statement, either; I’m pretty sure that even were the boot on the other political foot I’d be equally amused by (and disgusted at) the incredible crudity of the sermonising, which reeks of the same childish mudslinging that’s currently packing UK news venues as the incumbent Labour government enters its final earth-bound tailspin and the vultures of opportunism don their bibs.

But then last night I watched an excellent documentary on the history of the Berlin Wall, and found myself laughing at the crudity of the archive propaganda from both sides of the Iron Curtain… before I remembered that, to be effective, propaganda only needs to be slightly more sophisticated than the average media literacy of its target audience.]

Laughter and error-correction mechanisms

Carlo Strenger has written a good article on Enlightenment values on Comment is Free:

…the Enlightenment has created an idea of immense importance: no human belief is above criticism, and no authority is infallible; no worldview can claim ultimate validity. Hence unbridled fanaticism is the ultimate human vice, responsible for more suffering than any other.

It applies to the ideas of the Enlightenment, too. They should not be above criticism, either. History shows that Enlightenment values can indeed be perverted into fanatical belief systems. Just think of the Dr Strangeloves of past US administrations who were willing to wipe humanity off the face of the earth in the name of freedom, and the – less dramatic but no less dismaying – tendency of the Cheneys and Rumsfelds of the GW Bush administration to trample human rights in the name of democracy.

As one of the commenters points out, the profound principle that has been ignored by 20th-century secular ideologues, religious authorities, and more recent fanatics alike is that of always bearing in mind the possibility that you might be dead wrong.

The healthy human response to harmless error or misunderstanding is to have a laugh. Thus error is highlighted for all to see and forgiven by all parties. As Strenger puts it:

At its best, enlightenment creates the capacity for irony and a sense of humour; it enables us to look at all human forms of life from a vantage point of solidarity.

A further mistake on the part of humourless fanatics everywhere is to assume that there can ever be one, and only one, eternal truth. It may be that such a thing exists, but it is likely to be beyond our capacity to discern its true form from the vague shadows on the walls of our cave.

And so human beings are prone to error. There’s no problem with this, as failure teaches us more than success.

This notion was articulated by Karl Popper in the 20th century: the idea that you can never conclusively prove that an idea is correct, but you can conclusively disprove an incorrect one.

And so human knowledge grows and the enterprise of civilization advances, one laughter-inducing blooper at a time.

[image from chantrybee on flickr]

Is Twitter a threat to morality and ethics?

Are Twitter and other rapid-fire forms of media eating away at our moral and ethical cores?

Possibly, say the authors of a new study from a University of Southern California neuroscience group led by Antonio Damasio, director of USC’s Brain and Creativity Institute. (Via EurekAlert.)

In the study (being published next week in the Proceedings of the National Academy of Sciences Online Early Edition), the researchers used real-life stories to induce admiration for virtue or skill, or compassion for physical or social pain, in 13 volunteers (verifying the emotions through pre- and post-imaging interviews).

They found, using brain imaging, that while humans can respond in fractions of a second to signs of physical pain in others, awakening admiration and compassion takes much longer: six to eight seconds to respond fully to the stories of virtue or social pain, in the case of the study.

So, what does that say about the emotional cost of relying on a rapid stream of short news bits pouring into the brain through online feeds or Twitter?

Lead author Mary Helen Immordino-Yang puts it this way:

“If things are happening too fast, you may not ever fully experience emotions about other people’s psychological states and that would have implications for your morality,” Immordino-Yang said.

She worries that

fast-paced digital media tools may direct some heavy users away from traditional avenues for learning about humanity, such as engagement with literature or face-to-face social interactions.

Immordino-Yang did not blame digital media. “It’s not about what tools you have, it’s about how you use those tools,” she said.

USC media scholar Manuel Castells said he was less concerned about online social spaces, some of which can provide opportunities for reflection, than about “fast-moving television or virtual games.”

“In a media culture in which violence and suffering becomes an endless show, be it in fiction or in infotainment, indifference to the vision of human suffering gradually sets in,” he said.

Damasio agreed: “What I’m more worried about is what is happening in the (abrupt) juxtapositions that you find, for example, in the news.

“When it comes to emotion, because these systems are inherently slow, perhaps all we can say is, not so fast.”

How do you feel about that?

Take your time.

(Image: Wikimedia Commons.)


Regulating military robots

Following on neatly from Tom’s post about the Pentagon’s future war brainstorms and the US Office of Naval Research’s recent report on battlebot morality, philosopher A C Grayling takes to his soapbox at New Scientist to warn us that we need to regulate the use of robots for military and domestic policing purposes now… before it’s too late.

In the next decades, completely autonomous robots might be involved in many military, policing, transport and even caring roles. What if they malfunction? What if a programming glitch makes them kill, electrocute, demolish, drown and explode, or fail at the crucial moment? Whose insurance will pay for damage to furniture, other traffic or the baby, when things go wrong? The software company, the manufacturer, the owner?

[snip]

The civil liberties implications of robot devices capable of surveillance involving listening and photographing, conducting searches, entering premises through chimneys or pipes, and overpowering suspects are obvious. Such devices are already on the way. Even more frighteningly obvious is the threat posed by military or police-type robots in the hands of criminals and terrorists.

As has been pointed out before, the appeal of robots to the military mind seems to be that they’re a form of moral short-cut, a way to do the traditional tasks of battle and control without risking the lives of real people. But as Grayling says, that’s a short-sighted approach: it’s not a case of wondering if things will go wrong, but when… and then who will carry the can?

Call me a cynic, but I doubt the generals and politicians will be any keener to shoulder the blame for mistakes than they already are. [image by jurvetson]

Battlefield Morality 2.0

To brighten your Monday morning, here’s some speculation on robot morality – though not from one of the usual sources. Nick Carr bounces off a Times Online story about a report from the US Office of Naval Research which “strongly warns the US military against complacency or shortcuts as military robot designers engage in the ‘rush to market’ and the pace of advances in artificial intelligence is increased.”

Carr digs into the text of the report itself [pdf], which demonstrates a caution somewhat at odds with the usual media image of the military-industrial complex:

Related major research efforts also are being devoted to enabling robots to learn from experience, raising the question of whether we can predict with reasonable certainty what the robot will learn. The answer seems to be negative, since if we could predict that, we would simply program the robot in the first place, instead of requiring learning. Learning may enable the robot to respond to novel situations, given the impracticality and impossibility of predicting all eventualities on the designer’s part. Thus, unpredictability in the behavior of complex robots is a major source of worry, especially if robots are to operate in unstructured environments, rather than the carefully‐structured domain of a factory.

The report goes on to consider potential training methods, and suggests that some sort of ‘moral programming’ might be the only way to ensure that our artificial warriors don’t run amok when exposed to the unpredictable scenario of a real conflict. Perhaps Carr is a science fiction reader, because he’s thinking beyond the obvious answers:

Of course, this raises deeper issues, which the authors don’t address: Can ethics be cleanly disassociated from emotion? Would the programming of morality into robots eventually lead, through bottom-up learning, to the emergence of a capacity for emotion as well? And would, at that point, the robots have a capacity not just for moral action but for moral choice – with all the messiness that goes with it?

It’s a tricky question; essentially the military want to have their cake and eat it, replacing fallible meat-soldiers with reliable mechanical substitutes that can do all the clever stuff without any of the attendant emotional trickiness that the ability to do clever stuff includes as part of the bargain. [image by Dinora Lujan]

I’d go further still, and ask whether that capacity for emotion and moral action actually defeats the entire point of using robots to fight wars – in other words, if robots are supposed to take the place of humans in situations we consider too dangerous to expend real people on, how close do a robot’s emotions and morality have to be to their human equivalents before it becomes immoral to use them in the same way?