Beginner's Guide to Visual Effects

A behind-the-scenes look at one of cinema's most valuable but esoteric fields.

“Once you’re in charge of a virtual shot, you are both the director and the DP for that world.”

Lights Film School recently had the opportunity to speak with NYC-based director and visual effects artist Perry Kroll, whose work has appeared in commercials, music videos, and short and feature films around the world.

In our interview, Perry pulls back the curtain on the world of VFX, granting a rare glimpse into its inner workings.

Read on for his perspective, but first check out his 2014 reel!

You can peruse Perry’s full portfolio here.

Hello, Perry! Thanks for taking the time to discuss your work. There’s a lot going on in your 2014 reel, but before we dive in, let’s wind back the clock – when and how did you first get into visual effects? Is it something you’ve always wanted to do? How did you pursue that passion?

Thanks! I started to dabble in VFX around the same time that I started to dabble in filmmaking. I was around 9 years old, and in the span of a year or so, I received a hand-me-down camcorder and what was at the time a state-of-the-art, souped-up PowerMac 75/100. It would be a laughable machine now, but it was loaded with basically every high-end graphics and film-related software package in existence. I scoured the local libraries and read every software manual I could find — mostly ancient, outdated versions from the 80s — and basically taught myself film editing, Photoshop, 3D modeling/lighting/shading, After Effects, etc.

Fast-forward to 2005: I attended NYU’s Tisch School of the Arts undergraduate film program with a focus on directing. I found myself drawing on my weird, cobbled-together VFX knowledge to give projects a little bit of an edge. I think I surprised at least one professor when I did an in-camera composite and transplanted an actor to the deck of a space station by double-exposing 16mm B&W film off of a graphic on my computer screen. We shot film and edited on a Steenbeck, so no one was expecting CGI.

[Image: 16mm in-camera composite]

That’s amazing! After graduating, how did you get the snowball rolling, so to speak? Many of Lights’ readers are working to “break in” to the industry, so we’re always curious to hear how an established professional found their way into the field and got their freelance business going.

I imagine you weren’t working with clients like Sony and Microsoft right off the bat?

I started freelancing in VFX during the last year of school. The boundaries between school and the professional world were always blurry. Students were making legit commercials and independent projects on the side, and it was a natural evolution. Friends kept asking me for small VFX fixes, and I gradually realized “I can probably get paid for this.”

I ramped up the quality of my work, and taught myself how to do much more advanced VFX jobs. As everyone graduated and began producing at a commercial level, I was still the go-to guy for a lot of them. The film world is all about your network, and your reliability. When you know someone, and you trust them, you refer people to them.

Well said. Would you say you’ve learned “on the job”? If so, how have you navigated the tension between developing your skills as a visual effects artist and delivering a project professionally, on time and within budget?

I’ve definitely learned on the job. Every job is different and I’ve probably made the biggest leaps in my work by saying “yes” to something I haven’t done before, and then going home and calling people, and researching the heck out of it.

I like to draw on my background as a director and bring a strong visual storytelling approach to VFX work. My approach is to fully understand the story we’re telling as well as the mood, tone, context, etc., and make sure that I’m strengthening that story wherever possible.

Delivering on time and on budget comes down to being very, very precise before the project starts. Consider every detail, plan for all possible moving parts, and make sure that no stone is left unturned when evaluating options. Know who else I’m going to need to bring in, and what they will need.

VFX tends to seem like some kind of voodoo black box, where no one understands what happens inside of it. Talking with the art department, the DP, the editor, the colorist, etc., and making sure everyone knows what I need and why I need it, can make or break the VFX. I like to walk onto a set knowing that no matter what changes come up during the shoot day, I can still do what it takes to keep the VFX attainable.

Looking at your portfolio, Perry, I see one of your early post-Tisch projects was The Sheol Express, a short film we discussed last year here at Lights.

The first few shots of your 2014 reel are from this film, and what shots they are! Seeing the “process” – i.e., seeing the images go from greenscreen to epic finished products – is truly eye-opening. How do you even begin to approach something as complex as this? What software do you use? What assets are involved? What are the steps?

For The Sheol Express, we spent at least a year just meeting with the directors and department heads as often as possible to try to catch every single hiccup before it was too late. Every department had to be friends to make that movie work.

We started with really detailed sketches for the big matte paintings. We worked with my brother, Bryce, who is an incredible artist, both in traditional mediums and digital. We identified which pieces of these worlds needed to be built, and what could be CG.

Generally, things the actors touch (floor, benches, railings, etc.) are better off existing in real life so the actors have something to interact with and also so that we can capture the interplay of light and shadow in the real world, instead of trying to fake that later.

Bryce has a history of designing these really incredible Photoshop matte paintings, so I roped him into working with me to build the CG worlds. We went out in NYC and shot thousands of reference photos of old brick buildings and bits of architecture. We needed to shoot from a height that roughly resembled the elevated train platform from the movie, so we ended up at the Highline park (built on an old elevated train track) shooting the buildings that surround it.

Bryce took these images and started mashing them together on top of his original sketch. He ended up with a Photoshop document of about 900 layers. Buildings in the final matte painting are made from dozens of individual images: maybe a roof from one place, a wall from another, a different wall from yet another photo, a tin smokestack from somewhere else, etc.

Meanwhile, I flew to Boulder, CO, and spent a week with one of the directors, Ryan Patch, shooting custom VFX assets. We spent one exceptionally cold morning shooting plates of steam rising from factory chimneys against an easy-to-key clear sky. (I will neither confirm nor deny that this eventually led to us sneaking on to the property of a large beverage manufacturer with a tripod and a RED camera.)

We built a home-made dry ice fogger (still wondering which watch lists you go on for buying a liter of baby oil, 15′ of plastic tubing, and a bunch of 5-gallon buckets at the same time), and set fires where perhaps we should not have set fires.

Eventually this all comes together in After Effects (which meshes fantastically with our huge Photoshop documents) and we bring it to life. Tons of atmospheric depth from haze and mist, stock assets for ocean waves crashing on cliffs, our home-made smoke and steam plates, some falling ash that we shot with flour, a color pass, and we’re done. The renders took upwards of 20 hours apiece on some shots.

It’s incredible to think that so many assets go into the creation of a single image! You mentioned that you spent a lot of time meeting with the directors for The Sheol Express, Perry – generally speaking, can you describe your relationship to a film’s director? What is that collaboration like? At what point are you looped into the creative process as a visual effects artist? How much input do you have?

The best scenario is that I am looped into the VFX process of a production very early. I can help narrow down options for VFX that will shape the entire production. Seemingly arbitrary details make the difference between easy and incredibly difficult VFX shots.

I work very closely with the director in pre-production to help refine the vision and communicate our needs with the other departments. On set, the VFX department tends to sort of break the hierarchy. Director, AD, Camera, etc., all know they need to let the VFX team do their thing or else the shot falls apart in post, so at that point we tend to take over a little bit, with the director’s blessing, and perform our dark arts.

I love the ending of your 2014 reel – the rocket launch and music cutout are shocking, and the car careening through the alleyway is very Cloverfield-esque. I discovered an extended version of this sequence in your portfolio. How did you create it…? If you don’t mind my asking, what was the budget for this? It reads like a Hollywood action film!

Cloverfield was definitely an inspiration. This was a fun project, because the director was very open to shaping things around what we could and could not achieve on a small scale. I doubt we spent more than a few thousand dollars on each piece.

For the car crash, we started with a still photo, and fabricated almost everything.

We had an alley location and we set up a DSLR on a tripod overlooking the alley. I wanted to re-light the scene later so we took about 50 or so different exposures, ranging from way under-exposed to way over, so we’d have material to light the dark places in the shot with.

The still image at normal exposure became our primary plate for this shot. No moving footage was used.

I measured the alley with a laser measure and took a 360° spherical environment map (just a ton of wide-angle shots that we stitched together) to give the 3d models something to reflect.

With those measurements we were able to build an approximation of the alley geometry in 3D, and match it up to our still image plate by viewing it through a 3D camera with the same lens specs as the DSLR.
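A quick aside on that lens-matching step: the horizontal field of view of a rectilinear lens follows directly from focal length and sensor width, and giving the virtual camera the same FOV is what makes CG geometry line up with the plate. A minimal sketch of the math (the 24mm focal length and full-frame sensor below are hypothetical examples, not the actual shoot specs):

```python
import math

def horizontal_fov_deg(focal_length_mm, sensor_width_mm=36.0):
    """Horizontal field of view of a simple rectilinear lens.

    Match this FOV on the 3D camera and the CG geometry will
    line up with the photographed plate.
    """
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# A hypothetical 24mm lens on a full-frame (36mm-wide) sensor:
fov = horizontal_fov_deg(24)  # roughly 73.7 degrees
```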

We purchased stock 3D models for the cars, and set up a couple of them down at the end of the 3D alley, as if they’d just crashed into each other. We lit them with flickering light to simulate fire.

We used a rigid body physics simulation for the taxi. If you look closely, it doesn’t break or bend or anything, which would have been really nice, but we just didn’t have the time or budget on this project to achieve that. We basically used a physics simulation to toss the car into the alley from off-screen, and let it interact with the geometry of the walls.

[Image: alley physics simulation]

I rigged the car with lights for the headlights, and rendered out a lighting only pass that just shows the light interacting with our placeholder alley walls.

By combining that lighting pass with some of the over-exposed photographs of the alley, we were able to actually re-light the alley and have our car headlights rake across the wall and ground. I used the same technique to cast flickering fire light from the burning cars at the end of the alley.
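The re-lighting trick described here — using a rendered light pass as a mask that reveals an over-exposed plate — can be sketched per pixel. This is a hypothetical single-channel illustration of the idea, not production code:

```python
def relight(base, overexposed, light_pass):
    """Blend from the normal exposure toward the over-exposed plate
    wherever the rendered CG light pass is bright (values 0.0-1.0)."""
    return [b + (o - b) * l for b, o, l in zip(base, overexposed, light_pass)]

# Where the headlight pass is 0.0 the normal plate shows; where it is
# 1.0 the over-exposed detail hiding in the dark wall is revealed.
row = relight([0.10, 0.10], [0.80, 0.80], [0.0, 1.0])
```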


We also lit the cars to match the streetlights and other incident lighting on location, and rendered out shadow passes so that the cars could cast realistic shadows on the environment.

[Image: alley shadow passes]

From this point, it was simply a matter of combining the plates and render passes, and adding generous amounts of stock dust, sparks, smoke, and falling debris to help sell the impacts. We used a bunch of the Cloverfield secret sauce of camera shake, grain and defocus to give the shot some motion and make it feel grittier. Add some fantastic sound design by Robin Arnott, and you have a pretty epic movie moment from, basically, a still image.

Incredible. I also discovered excerpts from Screen Trek. The whole time I was wondering: what’s “real” – i.e., shot during production – and what’s added in post?

Can you tell us a bit about how the director and you conceived and executed this project? What software did you use? How many and what kinds of layers and assets were involved? It looks like the sort of thing a big team with considerable resources would deliver!

For Screen Trek, we knew going in that we had to create a big feeling with a very small team. The director, Matt Lincoln, is also an accomplished VFX artist, and we split the work between us. He handled all of the INTs and EXT planet locations, and I handled the space battle.

Traditionally something like this is done in high-end 3D software such as Maya and everything is built from scratch. We couldn’t afford a team of specialists, and we only had one month. The solution came in the form of Element, from Video Copilot.

Element is an incredible achievement in software, created by Andrew Kramer (who does all of the title design for JJ Abrams, including the epic Star Trek end credits on Into Darkness). Element runs inside of Adobe After Effects, and gives us the ability to import 3D models, light them, animate with them, and render them in almost real time. Previews and most of the working process are real-time, and final renders usually take somewhere between 5 minutes and an hour.

Apart from blazing-fast previews and renders, the true genius of Element is that it lets us combine the 3D work with the composite pass and do both at once. Normally, it’s a bit of a back and forth. The 3D team does a first pass on animation and renders out a draft quality version. The compositors start layering that in with atmospheric effects and backgrounds and matte paintings, and then the 3D team does another pass. And so on. If anything changes (camera move, or the path that a spaceship takes, for example), both teams have to adjust their work. And then new renders are made, and the composite has to be updated. It’s a very slow and convoluted workflow.

Element is rendering from a 3D world to a 2D layer in our layer stack right in After Effects, and we can start building the composite right there around the 3D render. We can animate the 3D camera in After Effects and place 3D lights, and Element renders that right out into layers for us. We get depth passes, illumination passes, whatever we need to start building a very nice composite. We can do glow off of porthole windows, focus blur, motion blur, etc., and easily adjust in real time with no need to go back to a 3D program and render out again.

This lets us rapidly prototype shots and animation paths, and instantly see how these things look in context with the lights, the laser blasts, the stars, planets beneath, etc. We were able to easily get a sense of the true aesthetic of a shot, and say, “Hey, let’s move the camera over here so we get all of this gorgeous negative space from those shadows, and then these missile trails show up backlit against the stars”, and just sort of play around, and try things out to find the best looks and the best moments. It’s very organic and, I think, can lead to some very pleasant moments of visual creativity that might not happen in a more rigid workflow.

For the ships, we purchased stock 3D models, and then did a very detailed pass on enhancing and embellishing the texture maps. One of Element’s limitations is that it doesn’t do global illumination and you can’t have light bouncing off of surfaces, so we took the ship models into Blender and Cinema 4D and “baked” shadow maps, which means we basically rendered the light and shadows and baked them into the texture map for the ship, so that in Element the shading was already there on the surface of the model and wasn’t something Element had to calculate for us.

I also used photos of lights shining against flat surfaces to create a lot of little tiny lights on the ships, like landing bay lights and that kind of thing, to give it a sense of scale. These were also baked into the texture map, to save us from having to try to create them in Element, with its somewhat limited lighting/rendering package. Those are some of the ways you can work in a real-time system like Element and shift some of the more burdensome aspects of the rendering and look to another tool.

We used Trapcode Particular for the laser beams. I wrote some After Effects expressions that let me move a null layer around, and whenever it moved to a new spot, that automatically triggered our Particular system to send a pulse of particles toward that spot. So I could just go through the scene and target precise laser hits on the ship wherever I wanted them with a couple of key frames.
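The trigger logic in that expression setup can be sketched outside After Effects: sample the null's keyframed position each frame, and fire a pulse whenever it jumps. A hypothetical sketch of the idea (the real version is an After Effects expression driving Trapcode Particular):

```python
def pulse_frames(null_positions):
    """Given the null layer's (x, y) position sampled per frame,
    return (frame, position) pairs wherever the null moved --
    each one triggers a particle pulse aimed at the new spot."""
    pulses = []
    for frame in range(1, len(null_positions)):
        if null_positions[frame] != null_positions[frame - 1]:
            pulses.append((frame, null_positions[frame]))
    return pulses

# Two keyframed jumps -> two laser hits, each aimed where the null landed.
hits = pulse_frames([(0, 0), (0, 0), (40, 25), (40, 25), (10, 80)])
```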

We used another wonderful plugin from Video Copilot called Optical Flares for a lot of stuff, including laser impacts and missile glows, and to just sweeten and enhance explosions and stuff.

We generated debris clouds in Element with the particle mode, where you can load up a bunch of 3D models and tell it to make hundreds or thousands of duplicates and arrange them in space.

Explosions were made by combining a lot of stock plates and just playing with lighting and glow to make them feel like they were there in the scene and interacting with the ships.

We did a few poor, hapless ensigns being thrown out of the ship by dressing members of the film crew in uniforms, having them jump in front of a green screen in the director’s living room, and shooting it at 240 fps.

I am continually blown away (no pun intended) by the ingenuity, detail, and workflow involved.

When there is a budget for a big team, what roles are there to fill? How does a VFX team, large or small, share and manage files for a single project?

With a big team, you can start to spread out. One of the first luxuries is to centralize roto work and tracking work. If you have dedicated roto artists, you can send shots to them and know you’ll have gorgeous, high-quality outlines for cutting things out coming back to you in (x) days.

On a heavily 3D project, you’ll have 3D modelers who build the models, then you’ll have texture artists who create the surface maps and shaders, and you’ll have riggers to rig the bones or the pieces for animation, and animators, and even dedicated render people who manage the pipeline.

There are specialists for liquid simulations and smoke simulations. Matte paintings are often done by a small team unto themselves.

The compositors are the ones who bring all of these separate passes together and make sure the layers mesh and blend and feel natural and like they are in the same world as one another. That means colors and exposure match, and atmospheric depth, blurring, shadows, glows, etc. One of the most crucial aspects of this stage is light wrap – the quality of light bleeding around the edge of an object and slightly coloring the foreground.
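Light wrap is typically built by screen-blending a blurred copy of the background over the foreground, masked to the foreground's edges. A hypothetical single-channel sketch of that blend (a real compositor runs it in 2-D, per color channel, after blurring the background):

```python
def light_wrap(fg, bg_blurred, edge_mask, amount=0.5):
    """Let background light appear to spill around the subject's edge.

    fg, bg_blurred, edge_mask: per-pixel values in 0.0-1.0; the mask
    is nonzero only near the foreground object's silhouette."""
    out = []
    for f, b, m in zip(fg, bg_blurred, edge_mask):
        screened = 1 - (1 - f) * (1 - b)      # screen blend only brightens
        out.append(f + (screened - f) * m * amount)
    return out

# A bright sky (0.9) wraps into the edge pixels of a dark subject (0.2).
wrapped = light_wrap([0.2, 0.2, 0.2], [0.9, 0.9, 0.9], [0.0, 0.5, 1.0])
```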

It takes a lot of organization to keep track of all the assets and files on a small project. On a big project, a dedicated post supervisor or post producer will often help, and there is usually a lead VFX or VFX supervisor in charge of delegating to the various departments and overseeing their work.

Talk about teamwork!

I want to shift gears here and congratulate you on the success of your work for the Sony Bravia spot “The Kid” – India’s most watched TV commercial! Considering the client, I imagine a lot of planning happened before you fired up After Effects. Can you talk a bit about how you put this piece together? I’d also love to hear how you handled workflow for such a high profile client.

We planned this very meticulously, not only because of the scale of the shoot, but also because we needed sign offs and approvals from a lot of people. We were working as a small team with the director, Corydon Wagner, but several layers of creative agencies and companies in India, London, and Spain were involved, and ultimately Sony in Japan. We needed to be very sure of what we planned to make, and how it would look.

We commissioned my brother Bryce once again, and asked for his help mocking up the colored tree so we could make sure everyone was on the same page.

[Image: Sony tree mockup]

This mockup was used to facilitate the construction of an actual colored tree on the side of a lake in the mountains near Barcelona. It was really great to arrive in Spain and see Bryce’s mockup pinned up for reference next to the real tree.

[Image: mockup and real tree side by side]

The art team was able to take the tree about 60% of the way there. They had a great base, with a gorgeous trunk and tons of close-up detail, and they spent a few days airbrushing the colors onto the leaves.

We needed to make the tree bigger, more colorful and more vibrant, and that’s where VFX came in.

On set, we shot a variety of plates, including some incredible lighting schemes and some beautiful fog and light-ray versions.

We also shot a lot of colored branches both in Spain and back in NYC after the shoot.

Once it all came together, with some enhancements to the background and night ambiance, I think we managed to capture the magic and beauty of Sony’s brand and the vibrancy of a celebration in India.

We also did a lot of falling leaves in 3D, once again using Element.

And we spent a lot of time working on a natural organic way to make the leaves glow. We based it on bioluminescence in nature, and it came down to shooting painted leaves in a studio, and then manipulating the vein structure to create a sort of internal glow.

This was a small core crew in post, consisting of myself, Matt Lincoln and my brother operating out of a makeshift HQ, and the amazing Vladimir Kucherov handling color grading and online from his studio, so we had a lot of fun. Which may or may not have included a special late night VFX cut featuring Vlad’s cat Bonapart.

Haha. More generally, Perry, do you have a guiding philosophy when it comes to visual effects? In your opinion, what distinguishes “good” visual effects work from “bad”? Are there ever situations in which you feel visual effects should not be used?

Absolutely! The first thing I ask whenever people bring me on to talk about a VFX project is, “Can you do this without VFX? Is there some way to do this practically?”

I hate to use VFX when some sort of practical solution would look better, be it models, miniatures, rear projection, etc. Often the true answer is a melding of both. Wire work on set, with CG to paint out the wires, so you still have interaction between the actors and the set, instead of putting them in a green screen studio. Or using as much practical lighting as we can to sell an effect, so it feels motivated and like it’s part of the shot.

I like to try to base as much of my work in the real world as possible. I always try to source textures and elements from real world things. I’ll go out and shoot distressed walls, or real world light artifacts and I’ll incorporate as much of that as I can.

Some of my favorite VFX jobs in big movies come from teams who were keenly aware of the limitations they faced. The Matrix holds up insanely well for the most part, because the bullet-time and the stunts were done as practically as possible, with VFX serving only to accent and hide the seams. The Lord of the Rings has all of these phenomenal “bigiatures” and they used VFX sparingly to blend the live action into these massive miniature sets they built, and the result is seamless and utterly captivating.

For Oblivion they decided not to use green screen for the sky tower. They wanted to preserve all of the beautiful reflections and transparency you get with real glass and metal, and so they surrounded the set in a massive projection screen and they used high-res footage from mountaintops and just dialed in whatever sky and weather they wanted. The end result is stunning, and looks so much better than a fully VFX version.

What sort of equipment do you need in order to do the work you do?

It’s all about being able to move lots of data around.

You want to have a big internal RAID array to be able to store lots of media and stream it from the disk quickly. I use RAID 0 for pure speed, but you need to make sure you have backups, because RAID 0 has no data redundancy. There are systems that let you have redundancy and protection without much compromise in speed, but they tend to be more expensive. The holy grail is of course to RAID a bunch of SSDs together. I do run my apps and project files and smaller assets from an SSD.

You need a crapload of RAM. There is no such thing as enough RAM to get through renders on projects with big texture maps. I find I’m doubling my RAM every two years.

Pure CPU speed and number of cores are less important, because on complex VFX projects, you bottleneck way before you hit CPU limits. Disk read/write and RAM limits are the bigger culprits. The more cores you throw at a render, the more your drives will choke, and eventually it all slows down because the cores can’t get the frames they need from disk quickly enough.

And, with something like Element, it’s running entirely on the GPU. I think for the Star Trek Into Darkness end titles made in Element, they used a bunch of Titan GPUs together.

This all sounds awesome, albeit super involved! Any words of advice for beginning filmmakers interested in learning visual effects for the sake of their productions? How about for directors collaborating with VFX artists? And for aspiring VFX artists themselves?

To filmmakers looking to incorporate VFX: find a good VFX artist early on and talk about everything. You can save yourself a lot of wasted time and money, and end up with a much better product if you troubleshoot from the very beginning. Try to understand why some things are easier than others. Learn as much as you can about greenscreen work, and how 2D and 3D tracking works, why some things are better locked off, and what goes into tracking a moving camera shot, what requires roto and what doesn’t, etc. Ask your VFX artist to walk you through this stuff. The more well-versed you are, the better your decision-making abilities.

To those looking to learn VFX: study real life as much as you can. Pay attention to how light hits things, and what real world interactions look like. A person standing in front of a lake? Pretty boring until you’re trying to replicate that from a green screen shoot and it looks totally fake.

One of the best secret resources for a beginning VFX artist is the wealth of tutorials on VideoCopilot.net. If you watch even a handful of those, you will learn so much.

Learn as much as you can about the crafts of directing and cinematography because once you’re in charge of a virtual shot, you are both the director and the DP for that world.

This is truly amazing, Perry. You’ve given us an incredible glimpse into a complex world. I feel I understand visual effects much better now than before, and on behalf of our readers, I thank you for sharing your time and expertise!

For contact, or portfolio, visit perrykroll.com.

— Michael Koehler


Ready to learn more technical wizardry for the screen?

Then we cordially invite you to join our online film school, complete with a comprehensive filmmaking course. It’s everything you need to learn how to create professional narrative and documentary films using the equipment you already have, wherever you live, with guidance, community, and resources at a fraction of the cost of traditional film school.
