Recently, we touched base with some of the Cogswell students who worked on the Project X short film, Driven. Since no press has been generated beyond a short blurb on the Cogswell website, we decided to reach out and hear what some members of the team had to say about working on the film.
The following text comes directly from each person specified, lightly edited where needed for a smoother reading experience.
From Taylor Hodgson-Scott:
My Responsibilities on the Animated Short Driven:
I was the lead animator on Driven, responsible for a heavy share of the 3D animation. This involves making the characters and vehicles/bicycles move believably and having the characters emote in a way that connects with the audience. As the lead, I also supervised the other animators to make sure their shots were consistent with the shots around them and with the motion style we were targeting. Ultimately, the director had the final say, but delegating some animation critiques to me freed up his time for other parts of the production and gave the other animators quick feedback.
I also compiled the reel, taking all of the latest animations and rendered shots and editing them together for internal review, letting us see whether each shot flowed fluidly into the next. Editing is important for capturing the feeling we need to convey: especially in the last third of the film, when things are ramping up, quick, well-timed cuts are necessary to sell the feeling of speed.
PROGRAMS I/OTHERS USED!
3D Animation, Modeling, Rigging in Autodesk Maya 2011
Edited the film in Adobe Premiere Pro CS5
Texturing and Matte Painting done in Photoshop CS5
Rendered using the Renderman plugin for Maya 2011
Compositing was done in Eyeon Fusion 6
About 4 months in Pre-Vis (Pre-Visualization), which included storyboarding and low quality animation to roughly time the film out
About 18 to 20 months in Animation/Rendering
FOR OTHERS HOPING TO MAKE A FILM!
This is more of a general mantra than a step-by-step guide. The production pipeline is cataloged far better elsewhere than I can explain in this e-mail, but here are a few rules of thumb that may be more helpful than the gritty process.
1) You need a story that you really want to tell. It helps if it comes from a personal feeling, because that will help drive the story and performances as you flesh your film out. It can also come from wanting to tell a series of gags or just have a good time, but if you don't care about the story, it will fail and be painful to work on.
2) You need to seek out and employ constructive critiques from others, inside and outside the film production. This is not about using other people's ideas and making their version of your film, but about taking their input to improve your work. Sometimes you need to take the spirit of a critique rather than the letter when making changes, but people are perceptive and will pick up on problems you're too close to see.
3) Do as much planning in the early stages as you can; it will pay off tenfold down the road. Sometimes you'll have to tear down an entire storyboard sequence and rebuild it to get it right, but if it has already gotten deep into the animation stage, it will probably be too late to fix economically and still meet deadlines.
4) Communicate with your team. So many students and (bad) professionals alike forget to do this, and it is key to getting things done. If you're making a change that affects someone else, don't leave them out of the discussion if you can help it.
5) Love it! If you love what you’re doing, you’ll be able to stick to it. Finding even the smallest thing to get excited about in a film or a scene can help carry you through the tough times.
From Peter Mo:
As Lighting Supervisor on Driven, I was responsible for ensuring consistency and maintaining a quality standard for the lighting department. Lighting is at the tail end of the 3D production process (compositing and video editing come after, but they deal with 2D), so lighters often run into problems that go unnoticed through the rest of the 3D pipeline: render crashes due to Maya nodes created during production; problems with topology, object placement, or animation that only appear once you see how they interact with light; and crashes and loading issues from referencing other scenes, to name just a few.
Troubleshooting was a big part of my responsibility because technical problems, ranging from little nuisances to show-stoppers, would arise on a regular basis. A lot of my early work was assessing what we could do with our available resources in terms of computing power and people-power, and streamlining things as much as possible.
We used Autodesk Maya 2013 and Renderman for lighting. Renderman has advantages over Mental Ray in a 3D animation pipeline: fast, high-quality motion blur, fast displacement rendering, and Renderman's Deep Shadow system. Mental Ray's raytracing capabilities are better, so under Renderman we used reflection mapping to fake glossy reflections.
We also used camera-projected textures in the 3D scene to better control the look and style. We rendered all frames in 32-bit/channel OpenEXR image format, which allowed us a lot more flexibility in color correction without worry of color banding. We rendered out many different passes per frame to allow us to adjust different lighting elements independently, such as diffuse, specular, reflectivity, and more, before combining them together.
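The pass-based workflow described above can be sketched in a few lines. This is a minimal illustration in plain Python, not the team's actual compositing setup: real passes would be full 32-bit float OpenEXR images rather than single pixels, and the gain values here are purely hypothetical. The idea is that each pass carries its own gain, so a lighting element can be dimmed or boosted in the composite without re-rendering the 3D scene.

```python
# Toy stand-ins for one pixel's RGB values from separate render passes.
# In production these would come from per-pass 32-bit OpenEXR images.
diffuse = [0.30, 0.30, 0.30]
specular = [0.10, 0.10, 0.10]
reflection = [0.05, 0.05, 0.05]

# Per-pass gains let each lighting element be tuned independently
# in the composite -- e.g. halving the specular contribution here.
gains = {"diffuse": 1.0, "specular": 0.5, "reflection": 1.0}

def combine(d, s, r, g):
    """Additively recombine lighting passes into a 'beauty' pixel."""
    return [g["diffuse"] * dc + g["specular"] * sc + g["reflection"] * rc
            for dc, sc, rc in zip(d, s, r)]

beauty = combine(diffuse, specular, reflection, gains)
print(beauty)  # each channel: 1.0*0.30 + 0.5*0.10 + 1.0*0.05 = 0.40
```

Because everything stays in floating point until the final output, adjustments like these avoid the color banding that 8-bit intermediate images would introduce.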
Unlike the two previous projects, where I worked with students who had taken a lighting class, here I was working with a team that had little or no prior lighting experience. Lighting and rendering took place over two semesters, including a lot of training at the beginning. Even after lighting was mostly complete, re-rendering of certain shots went on until the very end when changes were needed or a problem could not be fixed in compositing.
We used render presets and light rigs as a way to keep things consistent across the shots at different times of day. We had a pre-dawn and sunrise setup for Acts 1 and 3 and an afternoon setup for the flashback portion in Act 2. The light rigs were updated and improved as needed, and everyone would reference one into their scene to use as the primary light sources for moon, sun, and sky lighting. Additional lights for characters were added on a per-shot basis, and setups that lighters created that worked well were shared for others to use when appropriate.
For compositing, we used Eyeon Fusion 6. It is a powerful node-based compositing program that allowed us to quickly change or fix visual elements that would have taken much longer to address on the rendering side. Making certain parts of the composition modular and reusing them in each other's scenes reduced the amount of redundant work we would otherwise have needed to build up a composite from scratch.
Useful effects and techniques that individual compositors came up with were also made modular, such as color correction nodes for shots that had been approved, or a heat-distortion effect that worked well. All monitors used for compositing were color-calibrated to ensure the closest possible match when the image was viewed on any of those monitors. In addition to traditional 2D compositing techniques such as color correction, rotoscoping masks, and paint fixes, we also incorporated 3D techniques directly in Fusion.
To save on render-times for a lot of the vegetation in the environments, we pre-rendered various sprites, generated point clouds of their locations, and then imported 3D cameras and the point clouds from Maya into Fusion. The vegetation sprites would be attached to points on the point cloud and rendered from the 3D camera and placed over the 2D shot, all in Fusion.
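The sprite technique can be sketched with a simple pinhole-camera projection. This is a simplified, hypothetical stand-in for what Fusion's 3D workspace does with the imported camera and point cloud; the camera model, focal value, and scaling rule here are illustrative assumptions, not the production setup.

```python
# Minimal pinhole projection: camera at the origin looking down +Z.
def project_sprite(point, focal=35.0, base_size=1.0):
    """Map a 3D point-cloud position to a 2D sprite placement.

    Returns (screen_x, screen_y, sprite_scale): a pre-rendered
    vegetation card attached to this point would be drawn at that
    screen position, scaled down with distance from the camera.
    """
    x, y, z = point
    if z <= 0:
        return None  # behind the camera; the sprite is culled
    screen_x = focal * x / z
    screen_y = focal * y / z
    scale = base_size * focal / z  # perspective-correct sprite size
    return (screen_x, screen_y, scale)

# A tiny "point cloud" of bush locations exported from the 3D scene.
cloud = [(2.0, 0.0, 10.0), (4.0, 1.0, 20.0), (0.0, 0.0, -5.0)]
placements = [project_sprite(p) for p in cloud]
print(placements)
```

The payoff is that each bush costs one pre-rendered card composited in 2D instead of full geometry traced in every frame of the render.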
Compositing took about two semesters' worth of work with a few dedicated compositors and a few more who split their time between compositing and other responsibilities. An additional month could be counted for training, since none of the students had used Fusion before. We had a Digital Tutors account, and students studied many of its Fusion lessons. I also gave some lessons based on my experience using Fusion on previous projects.
For the first time on any project, we used our own in-house render management software instead of commercial software. It was customized to our needs, and the developers were very responsive to our suggestions for improvements and additional features. Commercial render management software we had used in the past was not reliable, and we couldn't get the type of support we needed when problems arose. It definitely helped us all maintain our sanity; without it, we would pretty much have had to take shifts around the clock to babysit each render job, especially at crunch time.
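The in-house tool itself isn't described, but the core service a render manager provides, queuing jobs and automatically retrying failed frames so nobody has to babysit renders overnight, can be sketched as a toy dispatcher. Every name and the one-retry policy below are hypothetical, not details of the actual software.

```python
from collections import deque

class RenderQueue:
    """Toy render-farm dispatcher: frames go in, work is pulled off
    the queue, and failed frames are requeued instead of requiring
    someone to restart them by hand."""

    def __init__(self, frames):
        self.pending = deque(frames)
        self.done = []

    def dispatch(self, render_fn):
        """Run every pending frame; requeue a frame once if it fails."""
        retried = set()
        while self.pending:
            frame = self.pending.popleft()
            try:
                self.done.append(render_fn(frame))
            except RuntimeError:
                if frame not in retried:  # one automatic retry per frame
                    retried.add(frame)
                    self.pending.append(frame)

# Simulated renderer: frame 2 crashes on its first attempt.
attempts = {}
def render(frame):
    attempts[frame] = attempts.get(frame, 0) + 1
    if frame == 2 and attempts[frame] == 1:
        raise RuntimeError("node crash")
    return f"frame_{frame:04d}.exr"

q = RenderQueue([1, 2, 3])
q.dispatch(render)
print(q.done)  # frame 2 finishes on its retry, no manual babysitting
```

A real farm would distribute frames across machines and track per-node health, but the queue-and-retry loop is the part that replaces round-the-clock shifts.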
Thinking back over the production of Driven, I admit I was concerned at the beginning about how everything was going to come together; however, the technology we used ended up working well enough, and seeing how far the initially inexperienced team had come by the end of the project was very satisfying. I'm very proud of all the students who sacrificed so much of their time and energy to make the film the very best they could.
From Steven Chitwood:
Steven handled the VFX on the short. "All effects were done in Maya 2011, specifically. I used Maya fluids, particles, and nParticles. The types of effects were fire, smoke, dust, explosions, and liquids. All effects were rendered with either Mental Ray or Renderman," he says. Other programs used in the making of Driven included "ZBrush for 3D sculpting of characters and some environments, Renderman for Maya (the rendering engine used for the film), Eyeon Fusion 6 for compositing, and finally MEL and Python for scripting."
To manage the team, a combination of verbal communication, email, and other means was used to provide both official and unofficial check-in updates. "We used Google Docs for documentation, including tasks for each department, deadlines, and milestones. We kept track of everyone's hours and their tasks so we could accurately predict where the project was going," says Steven.
On the project pipeline, Steven said the following: "I was not in PX (Project X) at the beginning; I jumped in almost mid-way through, but here's my take. We first started off with a pitch that Mike had, and we discussed what did and didn't work for the story. Concurrently, we started to create concepts for the film while the modelers and animators were developing the layout of the film, and the riggers were doing some RnD (research and development). Once some of the concepts started to be officially approved, modelers would begin making the final assets and creating textures for them. Once assets, textures, and animations were done, those shots would be handed off to the lighters.
Lighters then light the shots, render them, and bring them to the next stage: compositing. Compositing is where we bring all the images together to make the final shots, making final tweaks to get the shots the way we want them. Keep in mind, when the animators are done with shots and the assets are created, we also hand those shots off to the effects department (me).
There, we create the fire, smoke, dust, and so on, and then render those effects as separate images, just like the lighters do. We then bring those into the comp as well to finish the shots entirely. While we are making the film, we are also maintaining an ongoing edit. Toward the very last stages, we edit the film and see whatever other changes or fixes we need to make."
Lastly, in short, the pipeline is as follows: "story -> concept -> look development -> layout -> modeling -> rigging -> animation -> effects -> lighting & rendering -> compositing -> final edit." Steven adds, "We decided to create our own render farm. Our render farm was used to expedite the rendering process."
It's very clear that a lot of work went into making the short film; everyone who worked on the project had a part in making it all possible. Fantastic work, everyone!
3D Animation Student, Internal Public Relations, Industry News Coverage, Blog Administrator/Writer