Daniel Kade’s research will help motion capture actors act better

Oct 31, 2014 | Research/Cooperation

Imagine you’re pretending to run through a forest, ducking under branches and jumping over fallen logs. But you’re doing it in a completely empty room, on a flat floor, with nothing to avoid. Presumably your acting wouldn’t be particularly convincing. But what if you could see the virtual forest you’re pretending to move through? Or hear the branches crack? Daniel Kade, a doctoral student in computer science at Mälardalen University (MDH), has developed a technology that helps motion capture actors with precisely those things. On Friday, 31 October, he defends his licentiate thesis.

A motion capture actor is used in the production of animated movies and computer games. The actor performs while wearing a special suit that sends signals to a computer, which in turn maps the actor’s body and movement patterns onto a so-called avatar. The problem for such actors is that, unlike conventional actors, they have no environment, props or fellow actors to relate to.
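
To make the idea concrete, here is a minimal sketch in Python (purely illustrative, not Kade’s actual pipeline, and with an assumed joint-rotation data format) of how per-frame rotations arriving from the suit might be copied onto the matching joints of an avatar skeleton:

    from dataclasses import dataclass

    @dataclass
    class Joint:
        name: str
        rotation: tuple = (0.0, 0.0, 0.0)  # Euler angles in degrees (assumed format)

    def apply_capture_frame(skeleton, frame):
        # One captured frame from the suit: joint name -> rotation.
        for joint_name, rotation in frame.items():
            if joint_name in skeleton:
                skeleton[joint_name].rotation = rotation

    skeleton = {name: Joint(name) for name in ("hips", "left_knee", "right_knee")}
    apply_capture_frame(skeleton, {"hips": (0.0, 12.0, 0.0), "left_knee": (45.0, 0.0, 0.0)})
    print(skeleton["left_knee"].rotation)  # (45.0, 0.0, 0.0)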

– When creating animated characters, one obviously aims for results that are as human-like as possible. Motion capture actors give digital avatars realistic and credible movements and emotions. Unfortunately, today’s motion capture technology doesn’t offer a natural acting environment. Actors cannot see or otherwise experience the virtual milieu in which they’re acting, says Daniel Kade.

Daniel Kade came to Sweden from Germany three years ago, because he wanted to see the world and get a doctoral degree. At MDH, he has been able to combine his research in interaction design with his interest in game development.

– For me, the focus has always been on creating something user-friendly. The aim of my research is to investigate how I can help these actors do a better job. That way, the final outcome, for example a movie or a game, will be better too, he says.

Daniel Kade and his research group have developed two prototypes, both of which have the potential to facilitate the actor’s work. One is a projector that is attached to the actor’s head and projects an image of the virtual environment in front of the actor. The projected environment changes as the actor moves, creating an experience comparable to actually being there. The other prototype involves controlled 3D sound: sound that can come from any corner of the room and that the director can regulate.
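
As a rough illustration of the head-tracked projection idea (a toy sketch under assumed axis conventions, not the group’s actual software), the projected view can be recomputed each frame from the tracked head pose:

    import math

    def view_direction(yaw_deg, pitch_deg):
        # Convert a tracked head orientation into a unit view vector
        # (assumed convention: yaw 0 faces the +z axis).
        yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
        return (math.cos(pitch) * math.sin(yaw),
                math.sin(pitch),
                math.cos(pitch) * math.cos(yaw))

    def update_projection(head_pose):
        # One step of a hypothetical render loop: the projected image of the
        # virtual environment follows wherever the actor's head points.
        direction = view_direction(head_pose["yaw"], head_pose["pitch"])
        return {"camera_position": head_pose["position"], "camera_direction": direction}

    # The actor turns their head 90 degrees; the rendered forest pans with them.
    print(update_projection({"position": (0.0, 1.7, 0.0), "yaw": 90.0, "pitch": 0.0}))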

– If you’re recording a war scene, for example, and a bomb is going to detonate, the director can choose when the sound occurs and from which direction. The actor reacts instinctively to the sound. No matter how good an actor you are, you cannot simulate such a reaction. It is real, says Daniel Kade.
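
A toy sketch of what such a director-controlled cue might look like in software (entirely hypothetical, and using simple stereo panning in place of a real multi-speaker room setup):

    import math

    def stereo_gains(listener_pos, source_pos):
        # Very rough panning: left/right gain from where the source sits
        # relative to a listener assumed to face the +z direction.
        dx = source_pos[0] - listener_pos[0]
        dz = source_pos[2] - listener_pos[2]
        pan = math.sin(math.atan2(dx, dz))        # -1 = hard left, +1 = hard right
        return (1.0 - pan) / 2.0, (1.0 + pan) / 2.0

    class CueBoard:
        # Hypothetical director's console: each cue fires at a chosen
        # time, from a chosen position in the room.
        def __init__(self):
            self.cues = []

        def schedule(self, name, at_seconds, position):
            self.cues.append((at_seconds, name, position))

        def due(self, now_seconds):
            fired = [c for c in self.cues if c[0] <= now_seconds]
            self.cues = [c for c in self.cues if c[0] > now_seconds]
            return fired

    board = CueBoard()
    board.schedule("bomb", at_seconds=12.5, position=(-4.0, 0.0, 3.0))  # ahead, to the left
    for at, name, pos in board.due(now_seconds=13.0):
        left, right = stereo_gains((0.0, 0.0, 0.0), pos)
        print(f"{name} at {at}s: left={left:.2f}, right={right:.2f}")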

He looks forward to putting the prototypes to use, and the aim is to synchronize image and sound so that the experience becomes more complete for the actor. Another objective is to develop the projector so that it no longer has to be fixed to the head.

– In the world of motion capture there is much talk about glasses and other solutions that enable the actor to feel part of the fictional environment he or she is acting in. But no one quite knows how to do it yet; after all, there is the safety aspect to consider. If two actors are to record a fistfight, they want to be able to see each other, or anything could happen. There are several challenges, but our prototypes have great potential, and we hope that in the long run our technology will lead to more realistic animated characters. I will continue to develop the technology and, eventually, try to earn a doctoral degree, says Daniel Kade.