Test Shoot
- Travis Parkes
- Nov 25, 2022
- 3 min read
Although I had experience with various workflows, I had not yet created my own HDRI. Furthermore, while I liked the appearance of the robots, there was no guarantee that they would be compatible with mocap data or seamlessly integrate into live action.
For the sake of simplicity, I decided to shoot the test footage using the same camera I would use to capture the HDRI. Since I was least familiar with this aspect, I wanted to minimize variables so that if any mistakes were made, it would be easier to pinpoint the issue. Additionally, the actual shoot would utilize higher-quality cameras and deliberate shot choices. If I could make it work under less ideal conditions, I should be able to achieve success in the real shoot as well.
Although there were some issues with the camera automatically adjusting exposure while capturing bracketed images, I realised I could redo the capture with the correct settings and achieve a relatively good match. Combining the images in Photoshop was computationally intensive, but when I went on to crop everything, the alignment appeared to meet an acceptable standard for my purposes.

Exposure levels are accurately preserved in the 32-bit image, and ghosting is fairly minimal too. The individual source images can be made out in places, but that could be fixed with simple paint work if necessary.
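To illustrate what the Photoshop merge is doing under the hood, here is a minimal numpy sketch of combining bracketed exposures into a 32-bit radiance map. It assumes a linear camera response and uses synthetic brackets as a stand-in for real camera files; Photoshop's Merge to HDR additionally calibrates the camera's response curve and handles alignment and deghosting, none of which is attempted here.

```python
import numpy as np

def merge_exposures(images, times):
    """Merge bracketed exposures (float arrays in [0, 1]) into a radiance map.

    Assumes a linear camera response. Each bracket's pixels are divided by
    their shutter time to estimate radiance, then averaged with a hat
    weighting that trusts mid-tones and ignores clipped shadows/highlights.
    """
    num = np.zeros_like(images[0], dtype=np.float32)
    den = np.zeros_like(images[0], dtype=np.float32)
    for img, t in zip(images, times):
        w = 1.0 - np.abs(2.0 * img - 1.0)  # 0 at pure black/white, 1 at mid-grey
        num += w * (img / t)
        den += w
    return num / np.maximum(den, 1e-6)

# Synthetic stand-in for three brackets of the same gradient scene
radiance = np.linspace(0.05, 6.0, 256, dtype=np.float32)
times = [1 / 125, 1 / 30, 1 / 8]
brackets = [np.clip(radiance * t * 20, 0.0, 1.0) for t in times]

hdr = merge_exposures(brackets, times)  # 32-bit float, highlights unclipped
```

Because clipped pixels get zero weight, highlight detail that blows out in the long exposure is recovered from the short one, which is exactly why the merged file keeps accurate exposure levels beyond what any single bracket holds.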

I managed to get a SynthEyes track down to an error of 0.58; the auto track wasn't incredible, but it was workable with some manual finessing. Test geometry also seemed to hold up to an acceptable extent.

I ran into the most issues with Maya: there were frequent crashes when I tried to adjust the start point for the Mocap data. In the end, though, I still got the timings correct once I found a much less computationally expensive method.
Unfortunately, while I was doing this test, the tech park caught fire in a weather-related accident. Preventable or not, this would have a large impact on my plan and the resources available to me, including the Mocap stage. Luckily, I had data I had previously captured to use for any tests.
The HDRI was fairly easy to set up; the only adjustment required was matching the rotation, a fairly easy task with the original footage and tracked camera for reference. An issue I need to return to is that the test was not accepted by the render farm. Whilst this shot could be rendered locally relatively quickly, there will be multiple shots, some likely longer. Textures were not loading correctly when rendered on the farm, and bizarre file names were being returned as well, all issues I had not encountered before.

Even for this one simple shot, there is still a significant amount of comp work.

The shadows specifically needed a lot of trial and error, as I wasn't happy with the falloff produced where the robots made contact with the ground. However, with various erodes and the manual creation of multiple shadow layers, I came up with something I was happy with.

Early versions also saw Maya's motion blur being calculated incorrectly. The motion blur also proved problematic for the cryptomatte, which didn't seem to have the same level of blur applied.

I opted instead to keep the RGB and AOVs without motion blur and to have Maya generate a separate motion vector pass. This made for much softer, more realistic motion blur in the final shot, and it also gives me more control over adjustments without the blur being baked in.
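The idea behind the motion vector pass can be shown with a toy gather-style blur: each output pixel averages samples taken along its 2D motion vector. This is only a minimal sketch of the principle; a production vector-blur node (e.g. Nuke's VectorBlur) handles occlusion, filtering, and edge cases far more carefully.

```python
import numpy as np

def vector_blur(rgb, motion, samples=8):
    """Blur an image by sampling along a per-pixel 2D motion vector.

    rgb:    (h, w, 3) sharp beauty render
    motion: (h, w, 2) pixel displacements (x, y), as from a motion vector pass
    """
    h, w, _ = rgb.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(np.float32)
    out = np.zeros_like(rgb)
    for i in range(samples):
        t = i / (samples - 1) - 0.5  # sample positions from -v/2 to +v/2
        sx = np.clip(xs + motion[..., 0] * t, 0, w - 1).astype(int)
        sy = np.clip(ys + motion[..., 1] * t, 0, h - 1).astype(int)
        out += rgb[sy, sx]
    return out / samples

# A white square moving 6 px to the right; in practice the vectors come
# per-pixel from the renderer's motion vector AOV
rgb = np.zeros((32, 32, 3), dtype=np.float32)
rgb[12:20, 12:20] = 1.0
motion = np.zeros((32, 32, 2), dtype=np.float32)
motion[..., 0] = 6.0

blurred = vector_blur(rgb, motion)
```

Because the blur is computed from the vectors at comp time, its length and shape stay adjustable per shot instead of being fixed at render time.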
Whilst the final comp does still have issues, most notably the larger robot's foot clipping through the ground (an issue that can be fixed by further separating the shadow pass into another render layer), I am very happy with this as a proof of concept and am more confident in my ability to pull off the final project.