
Production – Bringing the robots to life with Mocap

  • Writer: Travis Parkes
  • Apr 1, 2023
  • 4 min read

From the outset, I knew I would be using motion capture (mocap) to bring the robots to life. Hand animation would have been too time-consuming and would have pulled the project away from its focus on VFX, and I believed that capturing the authentic movement of a real human body, free of stylization and full of subtle nuance, would better suit the intended live-action feel of the short film.

However, the fire at the tech park caused a significant setback, considerably delaying my planned mocap session. This worried me because, while mocap would still save time compared with hand animation, this project was more intricate than any mocap work I had done before: it required precise synchronization with the camera and accurate timing for the dialogue (which, because of the fire, had not yet been recorded). I was also beginning to doubt the length of the short film, despite the cuts I had already made during the writing process.

Additionally, during the meeting to arrange the mocap stage booking, I discovered that the stage had been divided into two smaller stages. This posed a problem for the first shot of scene 2, a relatively long walking sequence with one of the robots that the camera follows without any cuts. The smaller stages also concerned me because I needed two actors on set simultaneously, and neither stage was equipped with enough cameras to guarantee the precise capture I had experienced with the original setup.

Nevertheless, after retraining on the mocap stage (required not for technical ability but for permission to use it) and carefully measuring the filming area with a tape measure so the actors would stay within bounds, I simply did my best on the stage. For three hours.

Initially, everything appeared to be progressing smoothly. I rigged each actor successfully, even incorporating finger rigs for each hand without much trouble, and their performances seemed to be captured accurately even when they shared the stage. There were still issues, though.

The first problem arose when I realized I had cast the actors by height rather than by performance. Thankfully, this was easy to resolve: the taller actor had incredibly expressive body language, making them ideal for the smaller robot, while the shorter actor excelled at conveying stiff, robotic movement. By adjusting the skeleton scaling in post-production and paying careful attention to eye lines, I kept this from becoming a significant problem.
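To illustrate the idea behind the skeleton scaling mentioned above: retargeting a tall actor's capture onto a shorter character typically keeps the recorded joint rotations but rescales the skeleton's bone offsets and root motion by the height ratio. This is only a minimal, hypothetical sketch of that principle — the function, bone names, and heights below are illustrative assumptions, not the actual pipeline used on the project.

```python
# Hypothetical sketch: uniformly rescale a mocap skeleton so a tall
# actor's performance drives a shorter character. Captured rotations
# are kept as-is; only translations are scaled.

def scale_skeleton(bone_offsets, root_positions, actor_height, character_height):
    """Rescale bone offsets and root translation by the height ratio."""
    ratio = character_height / actor_height
    # Scale each bone's rest offset (x, y, z) by the same ratio.
    scaled_offsets = {bone: tuple(c * ratio for c in offset)
                      for bone, offset in bone_offsets.items()}
    # Root motion must be scaled too, or the character's feet slide.
    scaled_roots = [tuple(c * ratio for c in pos) for pos in root_positions]
    return scaled_offsets, scaled_roots

# Example (made-up numbers): a 1.85 m actor driving a 1.20 m robot.
offsets, roots = scale_skeleton(
    {"spine": (0.0, 0.45, 0.0)}, [(0.0, 0.92, 1.5)], 1.85, 1.20)
```

In practice, mocap software handles this during retargeting, but the underlying adjustment is the same uniform scale, which is why eye lines still need manual attention afterwards.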

On a few occasions, I had to recalibrate the taller actor's rig due to sudden issues. I suspect their arm span, nearly as long as the stage itself, introduced errors that ultimately corrupted the rig calculation. The well-worn tracking markers on the actors' fingers also fell off frequently, compounding the challenges. These were issues I likely would not have encountered a year ago using the same equipment.

The performance capture proved quite stressful. I had enlisted the help of a fellow student, as they wanted to record their own motion capture data at the same time. This only added to the stress: the extra people (including their own assistants) made for an unprofessional environment. While I appreciated their help finding equipment such as chairs to improve the capture, people attempting to make “jokes” caused unnecessary delays. It was a difficult situation because, although their behavior annoyed me, there was no hierarchy and I couldn't simply dictate their actions. I had to maintain a decent attitude towards them rather than resort to shouting and reprimanding, even when they occasionally acted childishly.

All of this was on top of the fact that I had planned several long takes, which made the timing challenging for my actors. To assist them, I read out the lines and tailored my instructions to each individual. I encouraged the actor playing the shorter robot to be expressive and over the top. For the actor playing the taller robot, I gave more literal instructions, specifying actions like “on this line, you look up, then left, as two separate motions, and quickly raise your arm in the middle of it.” The latter actor seemed to perform better when not overly focused on the final imagery.

Scenes took longer than anticipated, so midway through, I decided to set a cut-off point in my script. I knew we wouldn't have enough time on the mocap stage, nor would I have enough time afterwards, to complete every planned shot. However, since the short couldn't simply end at that point in the original script, I gave fairly vague performance instructions, which would let me insert a line of dialogue later and still maintain some coherence. I knew I might regret this decision, but with the delays caused by the tech park fire, I didn't have time to properly plan a new ending or reshoot scenes. Given the circumstances, I did what I believed was the best course of action.
