Avatar techniques could turn back the clock on ageing actors

Paul Raven @ 18-01-2010

Via SlashDot comes a brief soundbite (or rather textbite) from James Cameron, who posits that the pore-deep photorealistic CGI techniques that allowed him to create current box-office smash Avatar could be used to recreate the youthful looks of popular actors who’re getting on a bit…

… Cameron’s facial scanning process is so precise—zeroing in to the very pores of an actor’s skin—that virtually any manipulation is possible. You may not be able to totally replace an actor—“There’s no way to scan what’s underneath the surface to what the actor is feeling,” the director notes—but it is now theoretically possible to extend careers by digitally keeping stars young pretty much forever. “If Tom Cruise left instructions for his estate that it was okay to use his likeness in Mission Impossible movies for the next 500 years, I would say that would be fine,” says Cameron.

More Tom Cruise movies? After he dies? That’s about the strongest justification for banning this technology entirely, if you ask me…

Less fine, at least to Cameron, is bringing long dead stars back to life. “You could put Marilyn Monroe and Humphrey Bogart in a movie together, but it wouldn’t be them. You’d have to have somebody play them. And that’s where I think you cross an ethical boundary…”

Hmm… so what if you had Monroe and Bogart played by AI simulations of Monroe and Bogart, based on every second of footage available in the digital cultural corpus? Would that be crossing an ethical boundary? Would it be the same boundary as having someone else (made of meat) play them beneath the mask of CGI? And anyway, didn’t they threaten/promise [delete as appropriate] that CGI would mean the death of the overpaid “box office draw” Hollywood superstar? Answers on a postcard, please…

More seriously, though – how do we expect new and exciting actors to rise through the ranks if we just keep recycling the faces of the past? Or will the actors of the past become characters in their own right, adding another sort-of-meta layer to the cinema experience? “[Actor X] is currently wowing cinema-goers with his flawless performance as Clint Eastwood reprising the epochal Dirty Harry role…”


Of two minds

Tom James @ 22-04-2009

An old science fictional argument: to what extent is it correct to characterise the human mind as a digital computer? According to this insightful article [via Charles Stross] many AI researchers have been making an error in their belief that the human mind can be thought of as a computer:

The fact that the mind is a machine just as much as anything else in the universe is a machine tells us nothing interesting about the mind.

If the strong AI project is to be redefined as the task of duplicating the mind at a very low level, it may indeed prove possible—but the result will be something far short of the original goal of AI.

In other news:

A detailed simulation of a small region of a brain built molecule by molecule has been constructed and has recreated experimental results from real brains.

The “Blue Brain” has been put in a virtual body, and observing it gives the first indications of the molecular and neural basis of thought and memory.

Is there a meaningful distinction between the traditional view of a strong AI and a molecular-level simulation of a human mind?

[image and article from the BBC]


U.S. military wants interactive virtual soldiers for the home front

Tom Marcinko @ 06-01-2009

Know how to make a virtual human avatar that could convincingly interact with a family member? If so, the U.S. Department of Defense wants to talk to you. Its Defense Centers of Excellence for Psychological Health and Traumatic Brain Injury is seeking proposals for a Virtual Dialogue Application for Families of Deployed Service Members.

We are looking for innovative applications that explore and harness the power of advanced interactive multimedia computer technologies to produce compelling interactive dialogue between a Service member and their families via a pc- or web-based application using video footage or high-resolution 3-D rendering. The child should be able to have a simulated conversation with a parent about generic, everyday topics. For instance, a child may get a response from saying “I love you”, or “I miss you”, or “Good night mommy/daddy.” This is a technologically challenging application because it relies on the ability to have convincing voice-recognition, artificial intelligence, and the ability to easily and inexpensively develop a customized application tailored to a specific parent. We are seeking development of a tool which can be used to help families (especially, children) cope with deployments by providing a means to have simple verbal interactions with loved ones for re-assurance, support, affection, and generic discussion when phone and internet conversations are not possible. The application should incorporate an AI that allows for flexibility in language comprehension to give the illusion of a natural (but simple) interaction. The current solicitation is not aiming to build entertainment, but a highly accurate and advanced simulation platform.
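What the solicitation describes is, at heart, a heavily constrained chatbot: coarse recognition of a child’s utterance, mapped to pre-recorded clips of a specific parent. As a toy illustration only (all function names, intents, and responses here are hypothetical, not from the proposal), the keyword-matching core might look something like this:

```python
# Toy sketch of the constrained dialogue the solicitation describes:
# keyword-based intent recognition mapped to canned parent responses.
# All names and responses are hypothetical illustrations.

def classify_utterance(text):
    """Map a child's utterance to a coarse intent via keyword matching."""
    text = text.lower()
    intents = {
        "affection": ("love", "miss"),
        "goodnight": ("good night", "goodnight"),
        "greeting": ("hello", "hi "),
    }
    for intent, keywords in intents.items():
        if any(kw in text for kw in keywords):
            return intent
    return "generic"

# Each intent selects a canned response; in the envisaged system this
# would be video footage or a 3-D rendering of the specific parent.
RESPONSES = {
    "affection": "I love you too, sweetheart.",
    "goodnight": "Good night. Sleep tight.",
    "greeting": "Hi there! How was your day?",
    "generic": "Tell me more about that.",
}

def respond(text):
    return RESPONSES[classify_utterance(text)]
```

The “flexibility in language comprehension” the DoD asks for would sit in place of the keyword table: something forgiving enough that “I miss you, daddy” and “miss you” land on the same response.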

Slate.com columnist William Saletan seems to like the idea, though he does concede that “Critics call the proposal ‘creepy’ and ‘dystopian.’”

[Spooky Hologram by atmasphere]


One simulated metropolis per child

Paul Raven @ 09-11-2007

Games company Electronic Arts have donated Will Wright’s classic game SimCity to the One Laptop Per Child project. Which is excellent news – not just because it adds to the educational arsenal of the machines but because SimCity is a great game that still holds up against current titles. [Image by Theogeo]

[tags]OLPC, games, SimCity, simulations, education[/tags]

Simulations – from guns to festivals

Paul Raven @ 16-08-2007

Via reBang, here’s an article on the ironically named Zen Technologies, an Indian company that specialises in training simulators that can teach everything from driving a truck to crack-shot sniping with an AK-47. When you add this selection to other training devices like the virtual chainsaw, you realise we’re rapidly reaching a point where almost any high-risk activity can be experienced virtually.

But low-risk activities are catching up fast now that the technology is more accessible; as soon as people get access to virtual worlds, they start recreating objects and events from the real world (even major festivals, like Burning Man’s SL incarnation), and fabbing technology means that objects that start their life as virtual can be made real and solid in meatspace… so how long before we need the equivalent of customs and border controls between reality and everywhere else?