Post-Production – An Outline of the Pipeline
- Travis Parkes
- May 25, 2023
- 5 min read
Although I have briefly mentioned various aspects of my post-production pipeline, it is essential to dedicate an entire section to walk through the process. This was a crucial step in ensuring that each shot was completed efficiently and maintained a consistent look.
Certain aspects of the pipeline were carried out simultaneously, and I have already provided detailed information about the look development and planning stages that preceded the main post-production work. Therefore, in this section, I will solely focus on the remaining steps involved in the process.
To start with, I was dealing with a significant amount of footage.

Before I even chose the takes I desired, I organised this footage into folders.


The main folders separate reference clips from scene clips. Within the scene clips folder, each shot has its own subfolder, and each shot folder contains its various takes.
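That hierarchy is simple enough to script. A minimal sketch of building (and listing) such a tree with Python's standard library, using hypothetical folder and function names that mirror the structure described above:

```python
from pathlib import Path

# Hypothetical layout:
# root/
#   reference_clips/
#   scene_clips/
#     shot_010/
#       take_01/  take_02/ ...
def build_footage_tree(root, shots, takes_per_shot):
    """Create the reference/scene folder hierarchy under `root` and
    return every created directory as a sorted relative path."""
    root = Path(root)
    (root / "reference_clips").mkdir(parents=True, exist_ok=True)
    for shot in shots:
        for take in range(1, takes_per_shot + 1):
            (root / "scene_clips" / shot / f"take_{take:02d}").mkdir(
                parents=True, exist_ok=True
            )
    return sorted(p.relative_to(root).as_posix()
                  for p in root.rglob("*") if p.is_dir())
```

Scripting the layout up front keeps every shot's takes in a predictable place, which pays off later when renders and edits need to find footage by path.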
Following this organisational method, selecting my preferred takes and creating a timeline was straightforward, as explained in the editing section of this documentation. I then exported each clip as a .dpx image sequence. When working with film scans or camera footage rather than computer-generated imagery (CG), the .dpx file format is generally preferred over .exr.
Although .dpx has a lower colour depth ceiling, it serves the purpose well: the 32-bit float range of .exr is far beyond the 10-bit capacity of the Sony FX6. Nuke, in particular, also reads .dpx files efficiently. Additionally, since the footage is still in log format and .exr files are inherently linear, staying in .dpx makes it easier to achieve accurate colour transformations using the S-Log3 to Rec.709 LUT that I have.
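The log-to-linear conversion that a LUT like this performs can be approximated directly: Sony publishes the S-Log3 transfer function, and a sketch of the decode (constants taken from Sony's S-Log3 technical summary; the function name is mine) looks like this:

```python
def slog3_to_linear(code):
    """Decode a normalised (0-1) S-Log3 code value to scene-linear
    reflectance, per Sony's published S-Log3 transfer function."""
    if code >= 171.2102946929 / 1023.0:
        # Logarithmic segment: 18% grey maps to code 420/1023.
        return (10.0 ** ((code * 1023.0 - 420.0) / 261.5)) * (0.18 + 0.01) - 0.01
    # Linear toe below the log segment.
    return (code * 1023.0 - 95.0) * 0.01125000 / (171.2102946929 - 95.0)
```

Evaluating it at code 420/1023 returns 0.18 linear, which is exactly the middle-grey anchor the LUT relies on.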
The next step, after denoising, undistorting (using the lens grid), and bringing the colour back to linear with the LUT, is to matchmove the footage. For most of the shots I just used the Nuke camera tracker; it is 4K footage with fairly simple motion, so there was no point in leaving the Nuke ecosystem.
Where I did have to go to an external tracker was on some seemingly simple shots, such as the last one. These shots had very little parallax, which threw Nuke's own camera tracking calculations completely off. With a dedicated matchmover I was able to orient everything more precisely and give the software some idea of which tracks were being correctly calculated and which were not. Even with this, some of the tracks were simply "more or less there": the area where I wanted to put the CG was correctly tracked, even if elsewhere was slightly off.
Something I made sure of was to put the action of the scene (where the robots were standing) in the middle of the 3D scene. This means that, even with different camera angles, scene positioning and scales for the robots can be reused from scene to scene, as the virtual camera is always focused on the same area.
With the camera sorted, the comp is all set up for any plate preparation I need to do. Most shots had some kind of paint-and-replace done, mostly tracking marker removal, although occasionally I had to paint out a person.
Before I could actually do the CG, I had to have the HDRI images prepared in advance. Luckily, they were in time/date order, and the camera metadata meant I could easily filter the different exposure stops.
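Because the stills come off the camera in shooting order, sorting the brackets becomes a trivial bit of list slicing. A sketch, assuming every HDRI position was shot as an uninterrupted run of the same number of stops (the function name is hypothetical):

```python
def group_brackets(files, stops=5):
    """Split a time-ordered list of bracketed stills into per-position
    groups of `stops` exposures. Assumes every position was shot with
    the same bracket size, in order, with no stray frames."""
    if len(files) % stops:
        raise ValueError("file count is not a multiple of the bracket size")
    return [files[i:i + stops] for i in range(0, len(files), stops)]
```

Each resulting group holds the five exposures for one HDRI position, ready to be stitched.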
I shot at 5 different stops, which means that for each HDRI I had to create 5 different stitches in Photoshop. Whilst this can be automated somewhat, compared to my test shoot I hit unforeseen issues such as this:

Even after manually correcting the distortion, I had to make sure the crop was the same for each image so that, when layered on top of each other, everything was in the same place. The difficulty of this is why the HDRI has some odd ghosting artefacts (although thankfully it did not impact its ability to light the scene).
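The merge step itself, once the stitches line up, is conceptually a weighted average across exposures in the spirit of Debevec and Malik's classic method. A per-pixel sketch, assuming already-linearised pixel values and known shutter times (this is an illustration of the principle, not the exact maths Photoshop uses):

```python
def merge_exposures(pixels, shutter_times):
    """Merge one pixel's bracketed, already-linear values into a single
    radiance estimate. `pixels` holds the same pixel's value (0-1) in
    each exposure; `shutter_times` the matching shutter time in seconds.
    A hat weight trusts mid-range values over near-clipped ones."""
    def weight(z):
        return z if z <= 0.5 else 1.0 - z
    num = sum(weight(z) * (z / t) for z, t in zip(pixels, shutter_times))
    den = sum(weight(z) for z in pixels)
    return num / den if den else 0.0
```

The hat weight is what makes ghosting so visible when the crops drift: a misaligned bright exposure drags the weighted average for pixels it should not contribute to.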
The next bit of preparation for the scene was looking through the mocap data takes, cleaning them up, and exporting them.

For some of the shots I was working with a continuous take to ensure consistency of motion across the shots. But even the takes that weren't like that still had starting frames that involved getting into position and T-posing. So before any clean-up I would set the in and out points of the scene (cross-referenced with the plate to make sure they covered the shot length). This meant I wasn't wasting time doing clean-up on mocap data that wasn't even going to be used.
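The cross-reference against the plate is a simple length check, and worth automating so a short mocap trim never slips through to clean-up. A sketch with hypothetical names:

```python
def covers_plate(mocap_in, mocap_out, plate_frames, handles=0):
    """Check that the trimmed mocap range (inclusive frame numbers) is
    long enough to cover the plate, with optional safety handles on
    each end."""
    usable = mocap_out - mocap_in + 1
    return usable >= plate_frames + 2 * handles
```

Running this per shot before clean-up starts turns a frustrating late discovery into a one-line pre-flight check.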
After that, I exported the data as an FBX, making sure to set the correct scene orientation, as the XYZ axes can correspond to different directions in different programs.
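The axis mismatch boils down to a fixed rotation. For example, converting a point from a Y-up, right-handed frame (Maya's and FBX's default) to a Z-up, right-handed frame is a +90° rotation about X, which reduces to a coordinate swap (a generic illustration, not the specific remap my exporter applied):

```python
def y_up_to_z_up(point):
    """Convert a point from a Y-up, right-handed frame to a Z-up,
    right-handed frame: a +90 degree rotation about the X axis,
    so y becomes z and z becomes -y."""
    x, y, z = point
    return (x, -z, y)
```

Getting this wrong doesn't produce an obvious error; the skeleton just arrives lying on its back, which is why it is worth setting the orientation at export time rather than fixing it downstream.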
I then took this into Maya and created a character definition for each skeleton so that it could be retargeted correctly.
After that, I put the robot rigs into the scene (a floor plane with a shadow catcher, an HDRI, and a camera), using an image plane of the backplate to ensure they were in the location I wanted. I imported the motion capture data as animation reference and made sure the robots' feet were touching the ground in the correct place.
At this stage I would do a render. The animation would likely still need refining, but making sure the comp was working for the scene was the greater priority. For most shots I would make two renders: one of just the robots, without the floor's shadow catcher, and another of just the shadows the robots cast on the floor.
There is a shadow AOV that appears to work, but on inspection it isn't the same as simply turning off the robots' primary visibility. The latter solution had much nicer falloff and appeared to have a greater sample rate. Whether I was doing something wrong or this is just Maya working in an unusual way, making two separate renders was the method I went with.
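Combining the two renders over the plate follows the standard comp maths: darken the plate by the shadow pass, then place the premultiplied robot beauty over the result. A per-pixel sketch of the idea (hypothetical names; in practice this is a Merge node graph in Nuke, not Python):

```python
def comp_pixel(plate, shadow, beauty, alpha):
    """Combine one pixel. `shadow` is the shadow pass (1.0 means
    unshadowed), `beauty` the premultiplied robot render, `alpha` its
    coverage. Darken the plate, then apply the standard 'over'."""
    shadowed = plate * shadow
    return beauty + shadowed * (1.0 - alpha)
```

Keeping the shadow as its own multiply means its density can be graded independently of the robots, which is the main practical win of rendering it separately.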
From there I could go back and forth between Nuke and Maya: if I needed to update a pose or adjust a certain eyeline, I just replaced the version of the render and it would be completely updated in Nuke procedurally. The only aspect that wasn't procedural was the larger robot's eye, which needed to be tracked every time for the lens flare to stick.
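Swapping render versions like this usually relies on a vNNN token in the file path, which makes the version bump scriptable. A small sketch (the function name and path convention are illustrative, not my actual naming scheme):

```python
import re

def bump_version(path):
    """Return `path` with its first vNNN version token incremented,
    preserving zero padding, e.g. shot_030_v002.mov -> shot_030_v003.mov."""
    def inc(match):
        digits = match.group(1)
        return f"v{int(digits) + 1:0{len(digits)}d}"
    new_path, count = re.subn(r"v(\d+)", inc, path, count=1)
    if count == 0:
        raise ValueError(f"no version token found in {path!r}")
    return new_path
```

With a convention like this, pointing a Read node at the new version is the only manual step; everything downstream of it recooks automatically.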
The shots I took into Premiere to edit were ProRes files, as editing image sequences doesn't seem to work particularly well (not that 4K high-bitrate files work well in Premiere in general). These could also be replaced within the timeline with updated versions, so I didn't have to adjust the position and the specific cuts each time; I could just swap in a new version of the footage and it would be reflected in the timeline.
This method of making sure things could be updated and replaced without knock-on adjustments was key to my workflow. Some steps had to be done before others, but a lot of the time tasks had to happen in parallel, because seeing things in context reveals new issues.