Interview: Digital Lifeforms: Creating the Characters of Avatar & Tron: Legacy

by Andrew Penn Romine

For most of our existence, humanity has imagined playing God. In our myths and stories, we often dream of the ultimate act of a deity: Creating life. Stories of the medieval golem or more recent tales like Mary Shelley’s Frankenstein have illuminated our fervent desire to make for ourselves companions or servants. Yet for all our advances in biotechnology, we have yet to create even a few cells of artificial life, let alone something as complex as Frankenstein’s creature or Dr. Moreau’s Beast Folk. Despite the promises of Golden Age science fiction that the future would supply us with a steady stream of robot workers and helpmates, we’ve yet to see any true robot nannies, protectors, or even lovers.

There is one arena, however, in which the creatures we have given birth to in our imaginations have come to vivid, animate life: The movies.

The art and science of visual effects have crafted robots, creatures, and monsters in such visceral detail that we fear or empathize with them as if they were completely real. Audiences love the plucky Wall-E, pity poor Gollum, and gasp at the terrible might of King Kong. With budget the only limit, visual effects can bring to life anything that we can imagine.

Every year there are some films that push the boundaries of art and technology so far that they redefine our understanding of what “real” is. These films showcase digital characters so lifelike that we forget they are made of mere pixels on a computer.

The groundbreaking film Avatar, for example, envisioned the Na’vi and their planet of Pandora so completely that people saw the movie again and again just to immerse themselves in that compelling world. In The Curious Case of Benjamin Button and the more recent TRON: Legacy, digital characters doubled for real actors in the most convincing ways yet.

I recently had the opportunity to catch up with two visual effects professionals who worked on these groundbreaking films. Nolan Murtha worked as a Digital Effects Supervisor on Avatar at Lightstorm Entertainment. Steve Preeg, Animation Director at Digital Domain, won an Oscar for his work on The Curious Case of Benjamin Button, and recently wrapped up work on TRON: Legacy, which despite its “in-computer” setting, faced many of the same challenges as Button in bringing its digital characters to life.

I asked them about the process and challenges of creating the illusion of life in their respective films.

What do you think are some of the biggest challenges in bringing a digital character to life?

Nolan Murtha: I think creating the suspension of disbelief with digital characters is just as difficult as doing the same thing with actors in makeup, puppetry, or robotics. Great care has to be put into each of these things or the audience will simply reject it. Great characters are emotionally engaging, and their appearance needs to convey and even enhance their feelings. Body language is of utmost importance.

Steve Preeg: Humans are so capable of determining if there’s anything wrong with another human that avoiding the “Uncanny Valley” is by far the most difficult task. With the character of Clu [in TRON: Legacy] it was probably even more difficult because people know what Jeff [Bridges] looks like at thirty-five years old. They all have a different idea of what he looks like because some people remember him from Starman, Against All Odds, and Tron. And he looks very different in all those movies, so it’s hard trying to live up to the idea of him when that idea varies from person to person.

You mentioned the “Uncanny Valley.” This is an oft-discussed concept in both VFX and robotics. It’s the idea that the more humanlike and realistic a character gets, the more we will accept it as real. But there’s a certain point at which that acceptance turns to revulsion, a “dip” in the positive emotional response curve. How did you overcome the audience’s built-in disbelief of digital characters and avoid falling into the “Uncanny Valley”? What is the most important thing to get right, and how do you get there?

Nolan Murtha: The realism in Avatar comes from a lot of places. We were able to greatly enhance the emotive quality of the characters by the addition of tails and the cat-like ears. The personality in animals like dogs and cats is really brought to life by these things. So not only were the facial features and movements of the characters critical, but they also enhanced the emotional response by giving the audience something that they are familiar with. This enabled both the Avatars and Na’vi to move past their humanoid appearance and further engage the audience in the story.

Steve Preeg: The whole process ends up being a lot more than the sum of its parts. In doing something like a human face, you could have something that you feel is really moving well in animation and then you get it into lighting, and all of a sudden something looks wrong. Now that’s not to say that the animation was right and the lighting was wrong. It’s that when you apply that lighting to that animation it may bring things to light about the animation that are wrong. Or vice versa. What we’ve found is that as we start “fixing” one thing about a shot, that may make other things look wrong that you thought were right. It turns out that when you sit in the room with thirty people, you’ll probably get twenty different answers as to what that wrongness is. And then of course you also have to appease the director. And he may go round and round, and then you feel like you got it right on some particular very fine point, and then the director says, “Well, do this now.” And you say, “Are you sure?” and he says “Yeah.”

How much did you rely on performance-capture in your films to animate your digital characters? Do you still count on animation-by-hand to sell believability?

Steve Preeg: As far as major motion capture goes, in general, there’s no technology I know of that’s going to give you 100 percent of the performance. For TRON: Legacy, we used a helmet with four lipstick cameras trained on [Jeff Bridges’] face, which had about 140 dots on it. With those four cameras we could triangulate those positions in 3D space and get essentially a 140-point cloud of data per frame on the primary camera. We have software internally here that will convert that into muscles that have to be active at certain percentages to match that point configuration, and it’s a great starting point. It’s going to give you timings, a good estimate of jaw motion versus skin motion, a solid basis to evaluate what you need to do next.
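For readers curious about the geometry Preeg describes, here is a minimal sketch of how a single marker dot can be triangulated from several calibrated cameras using a standard direct linear transform. The camera matrices and numbers are invented for illustration and this is not Digital Domain’s actual software (which goes further and converts the resulting point cloud into muscle activations); it only shows the basic multi-camera step.

```python
import numpy as np

def triangulate_point(proj_mats, pixels):
    """Triangulate one 3D point from its 2D projections in several
    calibrated cameras, using the standard direct linear transform (DLT).

    proj_mats: list of 3x4 camera projection matrices
    pixels:    list of (u, v) pixel coordinates of the same marker dot
    """
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        # Each camera view contributes two linear constraints on the
        # homogeneous 3D point X.
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    A = np.vstack(rows)
    # Least-squares solution: right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # back to Euclidean coordinates

def project(P, X):
    """Project a 3D point through a 3x4 camera matrix to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Illustrative example: two hypothetical cameras looking at one facial dot.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])              # camera at the origin
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0], [0]])])  # camera offset 10 cm
dot = np.array([0.02, 0.01, 0.3])                           # a marker 30 cm away

recovered = triangulate_point([P1, P2], [project(P1, dot), project(P2, dot)])
print(recovered)  # approximately [0.02, 0.01, 0.3]
```

Repeating this for each of the roughly 140 dots, every frame, yields the per-frame point cloud Preeg mentions.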

Nolan Murtha: During Avatar, we relied heavily on performance capture, not only for our human actors, but also for blocking out a very large part of the creature and vehicle action. The Samson and Scorpion chases and the battles with the banshees were all choreographed with scale models that we tracked. We’d scale up the motion of the puppets (puppeteered by James Cameron and Richie Baneham, among others), and in our software you’d see banshees and gunships flying through the floating mountains. Jim got to design the action of the flight, creatures, and all of the performers pretty much exactly as he wanted it.

Steve Preeg: As I said, there’s not a system out there that is going to give you a [motion capture] solve that is going to understand the intent of the performance. So if you run a solve, and it tells you that your error is as small as it gets, and maybe even zero, you still may look at the performance of the actor himself and say, “I’m still not seeing that little bit of sadness in his eyes or that little bit of sarcasm in his smile,” or something like that. And that’s when we have to go in with animators and actually tweak it.
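To make the idea of a “solve” concrete: at a high level it is an optimization that finds rig controls (muscle or blendshape activations) whose predicted marker positions best match the captured point cloud. The toy version below assumes a simple linear blendshape model with made-up numbers; it is not any studio’s pipeline, and it is only meant to show why a numerically perfect fit can still miss the intent of a performance.

```python
import numpy as np
from scipy.optimize import nnls  # non-negative least squares

# Toy "solve": find nonnegative muscle/blendshape activations w so that the
# rig's predicted marker positions best match the captured point cloud.
# All shapes and values are illustrative only.
n_markers, n_shapes = 140, 30
rng = np.random.default_rng(0)

neutral = rng.normal(size=n_markers * 3)                  # rest pose, flattened xyz
basis   = rng.normal(size=(n_markers * 3, n_shapes))      # per-shape marker offsets
true_w  = np.clip(rng.normal(0.2, 0.3, n_shapes), 0, 1)   # hidden "performance"
captured = neutral + basis @ true_w                        # one frame of mocap data

# Linear model: captured ≈ neutral + basis @ w, with w >= 0.
w, residual = nnls(basis, captured - neutral)
print("residual:", residual)  # near zero here, yet a tiny residual in practice
                              # still says nothing about sadness or sarcasm
```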

Nolan Murtha: When it came to performers, the motion acquired was very carefully processed in each step of the pipeline to be as true to the original performance as possible. We relied heavily on high-definition reference shots from numerous angles during capture. This archive of the performance was used by each motion editor and then later by the animators to recreate what was done [in choreography]. Animators spent a lot of time on hands, fingers, and tails, as we didn’t capture them in high detail, but we could see exactly what they were doing by looking at the reference footage. Weta and our other VFX teams would obviously spend a lot of time giving these actions realistic movements, but the time/space relationships were maintained throughout the process.

With the realism of the characters in Avatar and the successful doubling (and de-aging) of well-known actors in Benjamin Button and TRON: Legacy, a lot of people are wondering if we are going to see a wholesale replacement of actors anytime soon. This is a question we hear every few years as the technology advances. Care to weigh in on the debate?

Nolan Murtha: I think that is just something people like to throw around. To me, there is certainly a difference between a great actor and one of mediocre ability. No matter how good digital characters become, we are still going to need to base our work on an actual performance. For example, The Curious Case of Benjamin Button succeeds because of Pitt’s performance, which is enhanced by the face technology Digital Domain developed. In fully animated features, the animators videotape themselves acting out scenes to see what their bodies are doing. You’d be hard pressed to find a face animator without a mirror on his desk! We simply recreate a great performance, which with Avatar started with Sam [Worthington] and Zoe [Saldana] and the many other performers and stunt teams that brought our characters to life.

Steve Preeg: We’ve done two films with this sort of technology. I think that every aspect of it has a ways to go. We need much higher fidelity in animation capture, higher definition cameras, higher frame rates—those are all going to help give us better fidelity in the capture. Newer skin shading models, newer animation solves, newer lighting techniques. So there’s a long ways to go in my opinion. Now whether or not it can ever happen is a good question, because if someone walks into a movie theater and sees a thirty-five-year-old Jeff Bridges, they know he’s not real. There’s no question that he’s not real, because everyone knows he’s sixty.

Many films use software such as Massive to program huge armies of digital characters not only to fight each other, but also to respond to stimuli such as terrain or environment. What is the value of simulating digital characters this way? Are there tradeoffs in realism?

Steve Preeg: Those programs like Massive are going to be as good as your mo-cap selections and “brain-writing” are. You can do things that are amazing, but if you don’t put time into the brain or your mo-cap doesn’t blend—or your mo-cap isn’t good in the first place—then you’re going to have problems. Crowd simulation packages are pretty neat, but they just get complicated.
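As a rough illustration of what an agent’s “brain” amounts to at its simplest, the sketch below steers each crowd member toward a goal while pushing it away from neighbors that get too close. Packages like Massive layer mocap clip selection, terrain response, and far richer decision logic on top of this kind of loop; the code here is a hypothetical toy, not their API.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
pos = rng.uniform(0, 10, (n, 2))   # agent positions on a flat battlefield
vel = np.zeros((n, 2))
goal = np.array([50.0, 5.0])       # every agent charges toward this point

def step(pos, vel, dt=0.1, max_speed=1.5):
    # Stimulus 1: seek the goal.
    to_goal = goal - pos
    seek = to_goal / np.linalg.norm(to_goal, axis=1, keepdims=True)

    # Stimulus 2: separation -- push away from neighbors closer than 1 unit.
    diff = pos[:, None, :] - pos[None, :, :]
    dist = np.linalg.norm(diff, axis=2) + 1e-9
    too_close = dist < 1.0
    push = (diff / dist[..., None] * too_close[..., None]).sum(axis=1)

    # The "brain": combine stimuli into a steering decision, clamp speed.
    vel = vel + (seek + 0.5 * push) * dt
    speed = np.linalg.norm(vel, axis=1, keepdims=True)
    vel = np.where(speed > max_speed, vel / (speed + 1e-9) * max_speed, vel)
    return pos + vel * dt, vel

for _ in range(200):
    pos, vel = step(pos, vel)
```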

Nolan Murtha: Simulations can certainly enhance the scope of a scene. But during Avatar, we captured dozens of crowd members with specific actions, in sync across a scene, and they were placed and timed very specifically. Because of our ability to capture and visualize these crowds relative to the action of a scene, along with the fact that we knew the positions of the cameras far in advance, we could fill out a scene knowing what was going to be in camera and where we needed more crowd density. Obviously, crowd simulation software played a large role in the battle scenes.

What aspects of VFX technology do you feel are still the weakest links? Are there any particular problems you’re still trying to solve?

Nolan Murtha: Well, we will always be solving new problems. In the year since Avatar was released, most of the core team involved has been sitting around tables and whiteboards trying to figure out what we did wrong. We are always looking to improve our pipeline.

I’d say one of the most challenging issues is simply organizing the data. While shooting in the virtual world, Cameron can essentially change anything he wants in real time. Mostly this pertains to the environment—the performances are not really messed with. But the plants and the wardrobes and the crowds—all of the contents of a shot have to be tagged and extracted and recreated in various programs. The data handling requirements are extremely complex.

Steve Preeg: As soon as something human is known not to be real, we immediately start picking it apart to find out why it’s not real. I’ve actually shown others photographs of real people and said they were CG, and all of a sudden they’re picked apart. People will say the photos look wrong. They’ll say the eyes are glazed over, and in the end you say, “I was kidding, this is an actual photograph.” It’s interesting that when somebody feels like they’re being fooled, their brain changes modes, and they start coming up with critiques that maybe aren’t even valid.

I don’t know what percentage of people out there for example would go to the movie and say, “Oh I didn’t realize that was digital,” versus the people that would say, “I knew that was an effect, and I thought it didn’t work.” Whatever those percentages are, until you can get to where 100 percent of the people don’t know what’s going on, there’s still work to do.

What techniques are being developed now that you see revolutionizing the way things are done in the next few years? What sort of challenges excite you the most?

Nolan Murtha: A lot of the current game engine technology is very promising for virtual production. As most studios can tell you though, getting assets into a game engine and moving that data between proprietary DCC [digital content creation] software and that engine is still pretty tricky. I think as more and more groups and studios become more involved in this new field, we’ll see a lot more software with dedicated real-time features.

Steve Preeg: Well, if I knew that, I think I could ask for a raise!

If you can do just a fully CG human (and maybe that includes clothing and hair and all those things) that is indistinguishable from a real actor, where ninety-nine percent of the population believes it, that’s a pretty interesting challenge even if it might be a little ways off. We’ve been working a lot on the face, and I think there’s stuff we can still do, but if people want to start seeing body deformations that are totally believable on just a regular human, I’m not sure we’ve seen that before. But I really enjoy working on anything that’s a character or a creature, something that evokes some kind of emotion on a human level, like fear. Anything like that is exciting, because it’s going to bring new challenges.

That’s the one thing about the film industry—I don’t think I’ve ever met a director who’s said, “Just give me what was in that movie.” It’s always, “Give me something more than what was in that movie.” Any film that you have as a reference is always something that you’re supposed to do better than in your film.

Andrew Penn Romine

Andrew Penn Romine is a writer and animator living in Seattle. When he’s not wrangling words, robots, superheroes, or dragons, he dabbles in craft cocktails and sequential art. A graduate of the Clarion West workshop, his fiction has appeared in Lightspeed, Eyedolon Magazine, Paizo, By Faerie Light, Fungi, and Help Fund My Robot Army. He’s hard at work on a new novel. You can find him at www.andrewpennromine.com and on Twitter @inkgorilla.