
Filmmaker's Guide to Visual Effects

In contemporary filmmaking and television production, visual effects are used extensively in a wide variety of genres and formats to contribute to visual storytelling, help deal with production limitations, and reduce budget costs. Yet for many directors, producers, editors, and cinematographers, visual effects remain an often-misunderstood aspect of media production.


Chapter 2: VFX as a Filmmaking Tool An overview of the benefits of using VFX in production, and a discussion of the most common types of visual effects, from fix-it shots to set extensions. Chapter 3: From 2D to 3D This chapter provides a study of camera movement, parallax and perspective shift, and explains the 2D, 3D, and 2.5D workflows. Chapter 4: Separation Separation is a vital process in VFX, and its success affects not only the end result but also time and cost.

This chapter provides a close look at the two methods of separation: extraction (keying) and rotoscoping. Chapter 6: Workflow Case Studies By analyzing the methodologies of four different shots, we see how the various VFX crafts discussed in the previous chapter are combined into different workflows, and how these workflows affect schedule and budget. Chapter 7: Pre-production Planning, budgeting and scheduling visual effects are vital steps during pre-production. This chapter offers advice on choosing a VFX supervisor and producer, creating breakdowns, budgeting, the importance of tech scouts, VFX meetings, and more.

Chapter 8: On Set This chapter covers essential practical on-set procedures and provides tips for successfully shooting VFX elements, setting up green screens, acquiring on-set data and reference, crowd tiling, and working with special effects. Chapter 10: In this concluding chapter, we look at some emerging technologies and trends that may affect the way we work with visual effects, like lightfield cameras, super-black materials, Virtual Reality, and real-time rendering.

Historically it made sense. In the pre-digital days and before visual effects were primarily computerized, most effects work was done in-camera and on location, using miniatures, practical techniques, and various camera and optical tricks.

But things are different today, and these two terms are used to describe two distinct and very different crafts. Special effects (SFX) are practical, real-life effects performed on the set and captured by the camera.

Visual effects (VFX) are digital manipulations and enhancements of the footage, and happen primarily during post-production. The knowledge, techniques, and skillsets used in each craft are widely different. Creating a practical explosion on set requires experience in explosives and pyrotechnics, while creating the VFX equivalent of that explosion calls for a mastery of computer graphics. Special effects are often shot as element plates for visual effects.

In this example from Boardwalk Empire, a burning tree stump is used to augment a World War I shot. Visual effects by Brainstorm Digital. The two crafts often complement and sometimes contradict each other. This relationship, and how it translates into practical decisions on set, will be discussed in Chapter 8. On the other hand, a ship (real or miniature) that was shot with a camera as an element for visual effects is not CG. A group of cheering spectators shot on green screen to be used for crowd tiling in a shot is not CG, but a group of animated digital characters is.

Despite the popular use of the term, not all visual effects are CGI. In fact, many types of VFX shots do not need any CG at all, and are done solely by manipulating the footage or combining it with additional footage or still photos. The distinction is therefore important because CG indicates a different (usually more complex and expensive) process than working with photographed elements. The decision on whether to use footage or CG depends on a variety of factors, and stands at the very foundation of VFX production.

It will therefore be discussed throughout the book, and in more than one context. An example of the mixed use of photographic and CG elements in one shot: first, an element of a rowboat with soldiers is shot on a lake.

This is of course real footage. Now the large warships are added. These are CG models—built and textured from scratch. There is certainly a lot of number crunching under the hood, probably more than in any other film-related craft.

But all this technology is useless without the creative, resourceful, and highly skilled craftspeople who possess the knowledge and expertise to drive it. The environment, the actors, the set, and the props are all three-dimensional entities that have depth and are spatially positioned at varying distances from the camera. But the moment the action is captured by the camera, this three-dimensional world is flattened into a two-dimensional image. This is a point of no return.

From now on and forever after the movie will be in 2D. All visual effects are essentially performed on a two-dimensional source. This is a fundamental notion that is so easily overlooked—there really is no depth to work with, as the footage is always two-dimensional.
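This flattening can be illustrated with a toy calculation. Below is a minimal sketch of an idealized pinhole-camera projection; the `project` function, the focal length, and the point coordinates are all hypothetical, chosen only to show how the depth value is divided away:

```python
# Idealized pinhole projection: a 3D point (x, y, z) in camera space
# collapses onto the 2D image plane at (f*x/z, f*y/z). The depth z is
# divided away, which is why it cannot be recovered from the footage alone.

def project(point3d, focal_length=35.0):
    """Project a 3D camera-space point onto the 2D image plane."""
    x, y, z = point3d
    if z <= 0:
        raise ValueError("point must be in front of the camera")
    return (focal_length * x / z, focal_length * y / z)

# Two points at very different depths can land on the exact same 2D spot:
near_point = (1.0, 2.0, 10.0)
far_point = (10.0, 20.0, 100.0)
print(project(near_point))  # (3.5, 7.0)
print(project(far_point))   # (3.5, 7.0) -- depth information is gone
```

The two prints are identical: once projected, a small nearby object and a large distant one are indistinguishable, which is exactly the information the camera throws away.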

Although a sizable portion of all VFX work is done in 2D, it is a limited workflow. It is very hard to realistically convey changes in perspective and distance solely in 2D. Much of the work we see in movies would not have been possible without the ability to create and animate three-dimensional CG elements within a virtual three-dimensional space. So when we talk about 3D, we really talk about the process; the end result is still two-dimensional. The differences between the 2D and 3D workflows, and the pluses and minuses of each, will be discussed in Chapter 3, along with an in-depth analysis of camera movement and how it affects the choice of workflow.

This process creates an illusion of depth by sending slightly different images to each eye, picked up separately with the help of special glasses. Stereoscopic imagery has been around, in one form or another, since the mid-19th century. Contemporary stereoscopic films are either shot with a stereoscopic camera rig, or converted to stereo in post. The latter (obviously inferior) option requires an entire army of roto artists (see Chapter 4) that painstakingly separate foreground, mid-ground, and background elements.

Four views showing different angles of a 3D cedar tree model. Model courtesy of Speedtree. Whenever 3D is mentioned in this book, it refers to the process of creating and animating CG elements in a virtual 3D space and rendering them as 2D images that are composited into the footage.

This should not be confused with stereoscopy. This is a vital part of the discourse surrounding VFX, and for a good reason: few filmmakers will be content with VFX that damage the believability and credibility of their movie.

Yet there is indeed something that distinguishes VFX in this context. That something needs a more accurate word. That word is photorealism. In that scene from Jurassic Park, the dinosaur is lying on the ground, sick, and breathing heavily. The filmmakers have two choices: use a practical animatronic puppet on set, or create the dinosaur in CG. Now, both the animatronic and the CG Triceratops are just fake props that pretend to be the real thing.

Yet there is a key difference: The camera captures the animatronic as an integral part of the environment, complete with all the intricate interaction between surfaces, lights, atmosphere, reflections and refractions, dust and smoke. This makes our animatronic, by definition, photoreal.

The fact that it is just a fake puppet does not matter in this regard. At the moment the picture was taken, it was there, in front of the lens. The CG dinosaur, on the other hand, will never be captured by the camera. It will not be part of that physical interaction of light, matter, and optics.

In other words, anything captured by the camera is, by default, photoreal. Visual effects are not. This is why VFX artists constantly strive to make their work as photoreal as possible. Integration is a paramount concept in VFX. The believability of both fake props depends on the choice of correct materials, accurate detail, and—most of all—realistic movement.

But the overall realism is fully achieved thanks to the meticulous construction and texturing of the animatronic.

Visual Effects

There is very little room for stylized animation in an environment that has real-life actors, which accounts for the prevailing use of motion capture in visual effects see Chapter 5. To sum it up, we can say that realism in VFX is three-leveled. On the first level, elements need to be built and surfaced with the right amount of accurate detail, just like any set piece or prop. On the second level, they need to move in a plausible fashion.

On the third level, they need to look photoreal: integrated into the footage as if they were physically captured by the camera. The success of these three levels of realism depends on the work of artists who handle a wide variety of tasks, from modeling and texturing through animation to lighting and compositing. In Chapter 5 we will take a close, detailed look at each one of these tasks (or crafts, as I prefer to call them), and see how they contribute to the overall process of visual effects.

The Danger of Over-indulgence The believability of visual effects does not depend just on the quality of animation, detail, and integration. Visual effects are not bound by any physical limitation. They are, in a sense, limitless—a magic playground filled with fascinating toys for the filmmaker to play with.

And herein lies the danger. When visual effects are used judiciously, with respect to real-world physics and optics (and to the aesthetic conventions of filmmaking), they become an integral and coherent part of the movie and its story.

Rather, we argued, it will cause the VFX shots to pop out in the context of the sequence, creating a dissonance that screams for attention. It is of course the responsibility of the VFX team to create the perfect magic, but it is also the responsibility of the filmmaker not to over-indulge in the use of this magic.

First, to clarify, animation is of course an integral part of the processes of visual effects and games production.

Yet the overall look is stylized, and the focus is on expression, mood, and aesthetics rather than realism. But the limitations of real-time playability still require compromises—noticeable here in the foreground rocks, the broken windshield, and the plants.

The convention in this genre is that everything is CG, from the characters to their surroundings, and no live footage is used. This allows for a certain unity and coherence of the entire environment, since everything is created on the computer and rendered together. Take, for example, a Pixar classic like Finding Nemo. Now take Nemo and his surroundings, drop them into real underwater footage, and they will look and feel noticeably artificial. In this respect, visual effects differ significantly from animation.

As previously mentioned, adding CG elements to live action footage dictates a strict adherence to photorealism and realistic movement. This shifts much of the focus from the design of enticing CG characters and environments and the use of stylized movement, to the integration of CG elements with live action footage and the heavy use of motion capture for realistic movement.

Video games are similar to animation in the sense that they are a complete virtual creation rather than a mix of live footage and CG. But the process of creating a game presents additional challenges. One is that games must be rendered in real time, and while the capabilities of graphics cards are constantly on the rise and the level of realism in games is improving tremendously, game creators still face strict limitations on the amount of detail they can put in.

Visual effects do not abide by this rule. With no real-time rendering limitations, shots can be extremely complex and detailed, which of course facilitates much higher levels of photorealism and visual quality than games (in Chapter 10, I look at the not-so-distant prospect of real-time rendering at full VFX quality).

Digital technology has truly affected almost every aspect of our lives, for good or for worse. VFX as we know it today could not have been possible without the ability to convert images into a series of numbers, and the development of computers that could handle and manipulate these numbers and spit out virtual imagery.

Before digital technologies took center stage, visual effects were achieved through a combination of on-set and in-camera practical work, animatronics, stop-motion, painted glass panels, optical tricks, and chemical film manipulations. Yet the dinosaurs that roamed Jurassic Park could not have been created, animated, and rendered without the emerging digital technologies and 3D applications and, of course, the brave and talented team at ILM.

Digital vs. Film Not so long ago, video cameras were practically taboo in cinema, and were used almost exclusively in the TV domain. But the past few years have seen giant leaps in the development of digital cameras, which led to a massive shift toward digital cinematography. Subsequently, the use of film media has seen a very sharp decline.

There are still some cinematographers and directors who will only shoot film, but they are facing an increasingly challenging situation, as many film stocks are becoming hard to find and fewer labs are now equipped to process film. There is indeed a certain quality and feel to film that is hard to replicate with even the best digital cameras.

Since visual effects are done digitally, the film stock must first be converted to digital. This is done by scanning the film, a delicate process that, if performed incorrectly or with badly calibrated scanners, may produce wobbly, inconsistent imagery. The fact that fewer labs now have the right equipment or experienced personnel to handle film scanning makes this issue even more problematic. So, if you want to shoot film and plan to use VFX, do make sure that the scanning and processing is done by a reputable lab and experienced technicians.

Otherwise, shooting with high quality digital cameras ensures a smoother VFX process down the line. Film vs. Television Film and television used to be miles apart in terms of visual quality. VHS was simply not comparable to film, and since the medium was inherently limited (as was the budget), fewer resources were spent on the visual side of TV productions. Consequently, visual effects in TV programs were used sparsely, and their quality was considerably lower than VFX created for film.

Those days are gone. Television is now on par with movies in terms of quality of content (some say even better, at least in the USA). And as far as visual quality goes, the gap between TV and film has narrowed considerably. When we worked on Boardwalk Empire, we were treating it as a feature film, applying the same high standards to the VFX work as if we were working on a movie (and interestingly, Boardwalk Empire was actually shot on film).

There are differences, however. Budgets for TV productions are still much lower, on average, than for film, and the schedule is usually much tighter. This forces VFX teams to come up with creative and technical solutions to compensate for the limited resources and time. This is not necessarily a bad thing: the experience gained on TV productions can be used for low budget features as well. But besides the budget and schedule differences, working with VFX on film and television today is very much the same.

These are much shorter formats than feature film or TV series, and as such have a much shorter turnaround time. They also differ in their approach. Yet despite these differences, commercials and music videos share the same VFX concepts and workflows as film and TV, and much of this book applies to these fields as well.

But nothing prepared us for the massive response the reel received the moment it went online.

We were truly taken by surprise. We were proud of our work, of course, but never thought it was particularly spectacular or groundbreaking. Only after reading through the comments did we realize that the reel had become so popular because it revealed visual effects in a movie where no one expected to see them, and in shots that no one suspected were VFX shots.

A wedding shot in New York was transformed into a Caribbean beach; a Brooklyn tennis court was converted into a desert prison; and actors shot on a green screen stage were transported to a pier in Italy. The visual effects were hidden, never trying to grab attention for themselves. They were merely used as a tool to achieve a simple goal. It was a clever decision by Scorsese and production VFX supervisor Robert Legato to harness the power of visual effects, not for creating the extraordinary but for helping out with the ordinary.

If you look at visual effects simply as a filmmaking tool, they can be used in a wide range of scenarios to help the filmmaking process. Budget is one area where VFX can make a big impact. The ability to change, modify, or enhance locations in post helps reduce costs, time, and bureaucratic complications. The ability to extend practical sets with VFX means that less time and less money are spent on constructing large sets.

And crowd-tiling techniques make it possible to populate big scenes with just a limited number of extras. Sure, visual effects cost money too. Sometimes a lot. But careful planning and a good understanding of the medium can keep VFX costs considerably lower compared to real-world practical solutions. Removing an unwanted street sign on location, for instance, can easily cost several thousand dollars for labor and municipal permits, while the VFX removal may be much cheaper, and without the legal and bureaucratic hassle.

But cost saving is certainly not the only reason to use VFX. Some ideas are simply not possible to achieve practically, no matter how big the budget is. A massive tsunami destroying Manhattan, a dinosaur stampede or an alien invasion—these are ubiquitous examples of how VFX are used in big-budget blockbusters to create the impossible and astound the audience with mega disasters and incredible stunts.

But there are also more mundane instances of production cul-de-sacs that can be elegantly solved with VFX. To have an actor participate as a team member in a real NBA game is something that no amount of money can facilitate. So to make this possible, production shot an actual Knicks game at Madison Square Garden, and we later replaced the face and hair of one of the Knicks players (Steve Novak) with a CG animated face replica of the actor (we also changed his jersey number from 16). Not a spectacular wow-inducing effect, but a crucial step in keeping the storyline and making the impossible happen.

Cost saving, storytelling, practical problem solving—all these are reasons important enough to use visual effects in almost any project.

But one cannot and should not ignore the essence of VFX as cinematic magic. Visual effects are, after all, a tool that can lift certain moments in the movie up high. Sometimes, a single, well-thought-out touch of visual effects can be more effective than sequence after sequence of VFX shots: a cinematic icing on the cake that can make a strong impact without draining the budget. There are as many types of visual effects as there are movies, directors, and visually creative minds.

In the old days, when visual effects were too limited or too expensive, solutions usually involved switching to a different take, or cutting around mistakes or problematic shots. The techniques of VFX allow for seamless removal, cleanup or modification of elements in the footage, and when done well, the removal is invisible, leaves no traces, and does not affect the rest of the shot.

There are of course many fix-it scenarios. The most common removal method is creating a seamless patch that covers the area of removal and is tightly tracked to it. That patch is made to look just like the original background, only without the element that needs to be removed (this process is often called clean-plating). In other words, VFX are always about adding something to the footage, even when the purpose is to remove something else.

Such work can be simple and straightforward, or complicated and time-consuming. In Fading Gigolo, we had to fix a shot where the two actors were riding wooden horses on an old merry-go-round. The actors were having a long conversation (the shot was a very long one-take) as the carousel was spinning (the camera was mounted on the carousel). Unfortunately, the boom was visible just above the actors throughout the entire shot.

Boom removals are usually fairly simple cleanup VFX, but in this case the background was constantly spinning and changing, so in order to remove the boom and add back the missing background behind it, we had to recreate and track-in long patches of background that seamlessly tie in to the footage.

The work on this shot took around two weeks to complete, an unusually long time for a boom removal shot. But perhaps the most complicated fix-it scenario I ever had to deal with was the prison shot in The Wolf of Wall Street. The first half of the shot was a big crane pull back move, starting from Leonardo DiCaprio picking up a tennis ball and then revealing the tennis court and players.

Originally, we were supposed to extend that move further back and up to reveal the prison surroundings around the tennis court. The original crane pull-back footage was shot in a small park in Brooklyn. Retiming the footage was not, of course, a viable solution, as it would also affect the action. To speed up the camera move without affecting the action, we had to roto out each actor (roto is discussed in Chapter 4) and re-project on cards in 3D space (see Chapter 3), then do the same for the actual environment.

This scene had to be cut out into separate pieces and then rebuilt. This example shows the cleaned-up tennis court. The nets and the background were added back. The CG prison was added to the footage to extend the environment. Because some of the tennis players came into frame as the crane pulled back, they could not be used with the faster virtual camera, so production had to separately shoot new tennis players on green that we then projected on cards in the scene.

This was a very difficult undertaking, all for the sake of speeding up a camera move, and although it does not represent your average fix-it shot, it does give an idea of the vast potential of VFX as a post-production fixing tool. Screen Inserts We are surrounded by screens in our everyday life (phones, tablets, computer monitors, TVs) and it is hard to find a contemporary movie or TV show that does not involve at least some of those. Screen content can be played practically on set during the shoot. This of course provides the most realistic results (for example, light and reflection interactions with the actors and environment), but there are quite a few disadvantages when going the practical way.

On one movie, we shot a scene that took place in a control room that had an entire wall of monitors. The original production plan was to do it all practically, and the monitors were connected to an elaborate video feed system.

But the timing of the feed could not be properly synced to the action, and after several unsuccessful takes the director decided to go the VFX route.

The feed was switched to uniform green, and we later added all the screen material in post. An initial decision to do it as VFX rather than practical would have no doubt saved the cost of setting up and operating an expensive video feed system. The advantages of VFX screen inserts are clear, and it is therefore no wonder that many films rely heavily on VFX for all the screen inserts; this is indeed one of the most ubiquitous types of VFX work.

Screen inserts are generally not a complicated type of visual effect, especially since screens have no depth and do not usually require 3D camera tracking. It is also important to note that unlike the old TV sets of yore, which typically cast a strong light and glow on their surroundings and had a distinctive curved glass screen, modern TVs, computer monitors and phone screens are flat and produce much less ambient light and glow.

They are thus easier to integrate. The difficulty of screen inserts is usually determined by the number of reflective surfaces around them (because that requires adding the reflections for proper integration), and by the complexity of separating the elements that go in front of the screen (see Chapter 4 for extraction and roto basics).

Rig Removal and Period Cleanup This category is in fact similar to fix-it removal and cleanup, but the important difference is that shots in this category are well planned in advance and are part of the original VFX breakdown and budget. Unlike fix-it scenarios, the elements that need to be removed can and should be identified and discussed on location during pre-production with the relevant department heads (camera, art, grips, lighting, stunts). That way, proper measures can be taken to minimize the extent and difficulty of the removal.

For example, period movies and TV series usually require the removal or modification of non-period elements such as satellite dishes, AC units, modern signs, lights, and cars (to name just a few). The complexity and cost of the VFX removal can be reduced by practically dressing at least some parts (usually the areas that have the most interaction with the actors, like the ground and immediate surroundings).

Wires and rigs are used for safety on a variety of stunts, and obviously need to be cleaned up in post. The amount and complexity of such VFX work depends on how the rigs are set up. As always, careful pre-production planning and tight cooperation between the VFX supervisor and other heads of departments can help reduce costs on VFX removal dramatically (this will be discussed in detail in Chapters 7 and 8). We took care of the upper levels, where we removed AC units, satellite dishes, modern lights, and graffiti (circled in red), while also enhancing the period look with additional fire escapes and hanging laundry.

Set Extensions This category covers everything from adding a small distant element in the background to completely replacing the entire surroundings. Set extensions go a long way in helping filmmakers achieve their dreams and visual aspirations without draining their budget. They are therefore used extensively, from low budget TV programs to tentpole blockbusters. Set extensions are created using matte painting techniques (see Chapter 5) or built in 3D, but, either way, they truly open vast opportunities for the filmmaker.

Successful set extensions and CG environments are most effective when they are well integrated with the real footage. Even the most spectacular or out-of-this-world matte painting will feel like a cheap backdrop if it is not seamlessly tied to the foreground elements. VFX artists use the original footage as their guide, looking for cues about time of day, lighting, and atmosphere.

The more extras you have on set, the more people you need to pay, feed, transport, dress, and manage.

There are two main VFX techniques to populate a shot with additional people: crowd tiling, in which a limited group of real extras is shot in multiple takes and tiled together to fill the frame, and fully CG crowds. The CG technique, on the other hand, uses a large number of virtual CG characters, which are often animated and controlled with the help of special crowd simulation software (see Chapter 5).

This technique is by far more expensive and involved, but it offers much more freedom in terms of camera movement and a lot more flexibility in designing the action (and of course does not require additional shooting time or a second unit). The first, non-CG option is more commonly used on medium and low budget films as it requires far fewer resources, but its success depends on careful planning and on-set practices, which will be discussed (alongside some examples) in Chapter 8.

A rather empty beach springs to life with crowd tiling. In this shot from Boardwalk Empire the camera panned from the beach to the boardwalk, where we also added people. Action Elements Gun shooting, explosions, bullet hits, debris, blood, gore—all can be done practically on set as special effects and have been done this way since the early days of cinema.

But filmmakers often choose to rely on visual effects for some or even all of the action elements. Some practical effects may be hampered by safety regulations, cost, and other limitations; others require a long time to reset, time that is not always available on tight shooting schedules.

Sometimes visual effects are used to replace a malfunctioning practical effect or to augment one. Muzzle flashes, for example, are so quick that they often happen in between frames, just when the shutter is closed.

Blood squibs sometimes fail, or are obstructed by quick movement. Advanced VFX Once we move beyond these common basic categories, a vast ocean of possibilities is revealed.

Advanced visual effects usually call for the creation and integration of CG elements—which can be anything from a small CG creature in the background to an entire army of charging Orcs, a little water splash or an epic flood, a helicopter crashing or an entire city being destroyed by an earthquake. Almost all of the more advanced VFX shots also include work from one or more of the categories listed earlier.

They become a compound and multi-faceted process that usually requires the work and expertise of several VFX artists. Beyond the basic work already described, there are so many different types of shots, so many options, and so many levels of complexity—it is practically impossible to list them here in some kind of orderly fashion, or to tuck them neatly into predefined categories.

But the following chapters will provide insight and examples that will help deepen your understanding of the VFX process. This should enable you to consider every idea, every option, and every type of potential VFX shot with a clear and practical understanding of what it takes to achieve it and how.

That third dimension, depth, is lost the moment the image is captured. It is therefore logical to assume that VFX work itself should be done in 2D, just like Photoshop image manipulation, but on a series of sequenced images rather than a single still.

This assumption is correct to a certain extent. Many VFX shots are worked on and completed fully within the 2D realm. On any given movie, you can expect a substantial chunk of 2D-only shots. Yet visual effects can, and often must be created in a virtual three-dimensional world.

How exactly can this be done? And why only on certain shots and not all of them? The decision whether to use a 2D or 3D approach or some combination thereof is influenced by many factors.

Clearly, any animated object or character that moves around in the shot needs to be three-dimensional, unless it is small and far in the distance. The cartoonish 2D look of traditional animation does not pair well with live footage. But at the most fundamental level, the choice between 2D and 3D is, first and foremost, dictated by the camera movement. To better understand this relationship, we need to examine camera movement from the VFX point of view. For the filmmaker, the first considerations are creative: How well does the move tell the story?

How does it fit in the dramatic flow? What is its emotional effect on the viewer? Next come the practical decisions about how to physically move the camera in space: Handheld or on a Steadicam rig? On a dolly or a crane? For the VFX team, though, the concern is different. The goal, after all, is to make the added VFX elements feel like they are native to the footage. They should move in frame exactly like the elements in the original footage. That relative movement and spatial relationship between objects in the footage can be narrowed down to two key factors: parallax and perspective shift.

Parallax

Imagine looking out the window of a moving train: as the train is traveling, nearby objects appear to move across the window faster than objects further away.

The trees right next to the tracks whoosh by quickly while the mountains in the background move very slowly. This apparent relative movement is parallax. It is so embedded in our perception of movement and depth that we hardly pay attention to it in real life, but the lack of parallax is immediately apparent to even the most uninitiated eye.

If the train window is a camera lens, then the train movement is essentially a track move. One thing to note here is that the effect of parallax diminishes with distance. Two trees standing 3 and 20 feet away from a moving lens will parallax strongly, but if those two trees were standing half a mile away, with the same 17-foot distance between them, they would move practically in unison, with no noticeable parallax between them.
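To put rough numbers on this, here is a toy pinhole-camera sketch (the focal length and distances are my own illustrative assumptions, not values from any tracking package). Apparent on-screen shift is proportional to 1/depth, so the two nearby trees drift apart strongly on a one-foot track move, while the same pair placed half a mile away barely separates at all.

```python
# Toy pinhole-camera model of parallax. A point at horizontal position x
# and depth z projects to F * (x - cam_x) / z for a camera at cam_x.
F = 35.0  # assumed focal length, arbitrary units

def screen_x(x, z, cam_x):
    """Horizontal image coordinate of a point, for a camera at cam_x."""
    return F * (x - cam_x) / z

def parallax(z_near, z_far, cam_move):
    """On-screen separation gained between two points (both at x=0)
    when the camera tracks sideways by cam_move."""
    shift_near = screen_x(0.0, z_near, cam_move) - screen_x(0.0, z_near, 0.0)
    shift_far = screen_x(0.0, z_far, cam_move) - screen_x(0.0, z_far, 0.0)
    return abs(shift_near - shift_far)

# Trees 3 ft and 20 ft away: strong parallax on a 1 ft track move.
near_pair = parallax(3.0, 20.0, 1.0)
# The same 17 ft gap half a mile (~2640 ft) away: essentially none.
far_pair = parallax(2640.0, 2657.0, 1.0)
print(near_pair, far_pair)
```

The ratio between the two results is on the order of one hundred thousand, which is why distant layers can be treated as flat while near ones cannot.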

This is a key point for VFX, as will soon become clear. So far we have looked at track, dolly, and crane camera moves. But what about pan and tilt? Do rotational moves also generate parallax? Essentially, no. To be precise, only a perfect nodal pan is totally parallax-free.

If you rotate the lens around its exact nodal point, objects will not shift in relation to each other, no matter how close or far they are. This is because there is zero travel on any axis, only a perfect rotation. Specialized nodal heads do exist, but they are mostly used for shooting panoramas and spherical images.

The effect of parallax in a short sideways track camera move: notice the minimal shift in the foreground, and the complete lack of parallax shift in the mid-ground and background.

If the camera were mounted on a nodal head, even the foreground would be parallax-free. The rotation axis on most film and video camera rigs is usually around the center of the camera body, farther away from the nodal point, so in most cases a tilt or pan is not truly nodal, and some minor parallax does happen. But as I mentioned before, parallax is reduced with distance, so that minor parallax on a pan or tilt is really only noticeable on objects that are very close to the lens.

In most cases, tilts and pans can be considered parallax-free.

Perspective Shift

In VFX terminology, a perspective shift means that the camera reveals different parts or areas of an object as the camera moves. Just like parallax, perspective shift is more pronounced in the foreground than the background. A camera that travels along a street will reveal the sides of nearby buildings as it travels, but distant buildings will still show only their front facades.

Consider an example. Building a full set of the iconic Hancock Mansion and the garden surrounding it was not a possibility, due to location and budget limitations. It became clear that this would be a VFX set extension, but to avoid using a green screen and to retain the physical interaction between the actors and the environment, it was decided that production would build a practical set that included only the gate, the walkway and stairs, and the front door and portico.

All the rest would be added as a VFX set extension. Let's first assume the shot is captured with a locked (static) camera. In that case, there is absolutely no reason to do anything in 3D, and we can very easily work the shot completely in 2D. As long as we make sure that the elements we combine match the foreground set and actors in terms of angle, perspective, and lighting, things should work out well.

We can, for example, introduce some movement in the trees, or some distant flying birds, simply by shooting relevant elements. One of the major advantages of the 2D workflow is that you can use real photographs and footage without having to create everything from scratch.

The feeling of depth, distance, and dimension in the 2D workflow is achieved in the same way as in traditional painting—by using perspective, size, and color. Objects become smaller with distance, far away objects have less contrast and saturation and take on a slight bluish tint, and proper perspective is achieved by adhering to plotted converging lines—all tried-and-tested art techniques that have been honed and perfected through centuries.
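As a rough numerical illustration of how one of these painterly depth cues can be faked, here is a sketch of atmospheric perspective: colors lose saturation and contrast and drift toward a bluish haze with distance. The haze color and falloff constant below are arbitrary assumptions for illustration, not values from any compositing package.

```python
import math

# Sketch of atmospheric perspective: mix a color toward a bluish haze
# based on distance. Haze color and falloff are illustrative assumptions.
def atmos(color, distance, haze=(0.62, 0.72, 0.85), falloff=0.002):
    """Return an (r, g, b) color faded toward the haze with distance."""
    k = 1.0 - math.exp(-falloff * distance)  # 0 when near, approaches 1 far away
    return tuple(c * (1.0 - k) + h * k for c, h in zip(color, haze))

tree_green = (0.15, 0.35, 0.10)
print(atmos(tree_green, 10))    # nearby: barely changed
print(atmos(tree_green, 2000))  # distant: washed out and bluish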

We can effectively trick the eye into seeing distance and dimension without ever creating a single element in 3D. So far so good. With a locked camera, the entire set extension could be done completely in 2D, without any need to build or render elements in 3D. But what happens when the camera moves—say, a crane up? To realistically convey depth and motion, every single object in the image, every leaf on every tree, and every blade of grass, needs to move separately, based on its distance from the camera.

But how can we do that? If our background is made of separate layers of photographic elements, we could conceivably move them across each other. But how can we tell how far from the camera an object in the footage really is, and how do we know how much it needs to move in relation to other objects and the actors? And there is yet another problem: how will our 2D elements withstand perspective shift?

Our 2D solution, which worked so well when the camera was static, is now turning out to be a dead end. It becomes evident that we must switch to a 3D solution, but where do we start?

The start and end frames of the crane up move: notice the extensive parallax between foreground and mid-ground, as well as the perspective shift in the foreground.

To accurately represent the correct parallax and perspective shift generated by a camera move, we must first recreate this move with absolute precision.

In other words, we need to generate a virtual camera that duplicates exactly not only the position, angle, and movement of the original camera but also its lens type and distortion characteristics. Without the ability to track the camera, visual effects would be truly limited, and the range of possibilities severely diminished.

Camera tracking (also called matchmove—see Chapter 5 for a detailed explanation of the tracking process) not only provides us with a virtual camera duplicate, but also with a virtual space.

It gives us an accurate indication of where objects in the footage actually are, as well as their relative distance from the camera and from other objects in the scene. For instance, in our Hancock Mansion shot, a camera track will give us the relative positions of the front gate, the stairs, and the front door.

This in turn will help us figure out the spatial relationship between objects in the footage and the virtual VFX elements—where the front wall and trees should be, the exact position of the house, and how far the background elements need to be in this virtual space.

A tracked camera replicates the move in a 3D environment and provides an indication of the position of key elements in the footage.

Working in 3D also solves the problem of perspective shift—objects are built as truly three-dimensional elements and therefore can be viewed from any angle.

Moreover, unlike the photos and footage used in the 2D workflow, 3D elements can be lit from any direction and in any fashion. And of course, animated objects and characters can move and behave in a truly three-dimensional fashion, inside a three-dimensional world. The 3D workflow is indeed tremendously powerful and flexible—but it comes with a price, literally. Just like in the real world, every 3D object needs to be built and textured, and then the entire scene needs to be lit and rendered.

It only takes a good photo to make a tree in 2D, but recreating the complex structure of branches, twigs, and leaves in 3D is a whole different thing. In Chapter 5 we will take a closer look at modeling, texturing, shading, rigging, animating, and rendering, but in a nutshell, the 3D workflow requires more artists per shot, more processing power, and more time than 2D. Building a car from scratch is much harder than taking a photo of one.

Achieving a photoreal look with 3D is also a considerable challenge. The table that follows shows a quick summary of the pros and cons of 2D and 3D workflows.

Enter the 2.5D

Back to our Hancock Mansion set extension: we have already concluded that because of the camera movement it cannot be done in 2D, so we must construct it in 3D. If we want to mimic reality as closely as possible, we need to create every tree leaf and every blade of grass as a separate 3D entity. When you think of the lawn extending around and behind the house or the trees in the far background, it is obvious that there is no way this can be done—such a scene, with billions of objects, is totally impractical to model and render.

It is also gigantic overkill—we cannot really see that much detail in the distance anyway. So is there really a need to build the entire scene in 3D? What if there was a way to use the 2D workflow within a 3D scene? Such a technique exists, and it is called 2.5D. The idea is simple: the scene is broken into separate layers of 2D imagery, based on distance from the camera. Then each layer is projected on a flat card, or a very simple model, that is placed in three-dimensional space, at the correct distance from the camera.

When the scene is rendered through the virtual camera, there will be accurate parallax between the different layers, since they are in fact placed at different distances from the lens.

Notice that the cards are not yet arranged in the correct order—the debris element should be just in front of the plane, and the electricity pole should go in the back.

Obviously, this technique is not suitable for animated characters or foreground objects, especially if there is a strong perspective shift.

But it works wonderfully well for backgrounds and distant objects, and even foreground elements that do not have much depth in them (a wall, for example). Rendering is extremely fast because there are no detailed 3D objects and no CG lighting (only images on cards or simple geometry), and you get the benefit of being able to use real footage or photos, and avoid having to build and light everything from scratch.

What is the difference between simply texturing a card with an image and projecting the image onto it? Think of a texture-mapped card as a printed photograph held in space: move the screen away from the camera, and the image will become smaller; bring it closer and the image will grow bigger; move it sideways and the image moves too. With projection, in contrast, the imagery is cast from a fixed virtual projector, so the geometry can be repositioned without disturbing the image. The projection method therefore allows the artists to create a 2D matte painting in a 2D software such as Photoshop, and then transfer it to a 3D environment.

The artist can move the projection geometry around without affecting the projected imagery, which enables proper setup of the scene for accurate parallax and depth without destroying the original look of the 2D matte painting. Matte painters (see Chapter 5) harness the 2.5D technique to create extremely detailed and believable surroundings that have depth and realistic parallax without ever getting into any 3D work.
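The geometry behind 2.5D cards can be sketched with toy pinhole numbers (the focal length and distances below are assumptions for illustration). Scale each card in proportion to its distance and its on-screen footprint is unchanged from the original camera position; but once the camera tracks sideways, near cards shift more than far ones, which is exactly the parallax we are after.

```python
# Toy pinhole sketch of 2.5D card placement (illustrative numbers only).
F = 50.0  # assumed focal length, arbitrary units

def on_screen_width(card_width, card_dist):
    """Apparent width of a card as seen by the camera."""
    return F * card_width / card_dist

def on_screen_shift(card_dist, cam_move):
    """How far a card's image slides when the camera tracks sideways."""
    return F * cam_move / card_dist

# A layer that covered 2 units at distance 10, pushed back to distance 100
# and scaled up 10x, still covers exactly the same screen area:
print(on_screen_width(2.0, 10.0), on_screen_width(20.0, 100.0))  # 10.0 10.0

# But track the camera 1 unit sideways and the two cards drift apart:
print(on_screen_shift(10.0, 1.0), on_screen_shift(100.0, 1.0))   # 5.0 0.5
```

This is why the cards must sit at the correct tracked distances: get a distance wrong and the card's parallax rate no longer matches the footage.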

The results can often be more convincing than a full 3D build, because what is lost in terms of fully accurate parallax and perspective shift is gained by the use of photographic material (incidentally, this technique is also prevalent in video games).

The 2D, 2.5D, and 3D workflows can also be combined within a single shot. There is no reason to stick rigidly to one workflow when in fact a combination can give the best results with the least amount of resources and time.

In our Hancock Mansion shot, the strong perspective shift necessitates 3D in the foreground, so the front wall and fence are built in 3D. However, this does not mean that the models need to be very elaborate, because a lot of detail can be achieved with proper textures (see Chapter 5 for further discussion of texturing and shading). It also makes sense to create the big foreground tree as a full 3D element, because this will generate some fine parallax between tree branches and leaves, which will boost the believability of the shot.

In an ideal world, the grass on the lawn (or at least the foreground part of it) would be created in 3D, but this would require a very dense model. We can get away with a 2.5D solution instead. The main element in the mid-ground is the house itself. Other elements such as bushes and nearby structures can be done as 2.5D cards.

The model of the house is very simple. Most of the detail will come from the textures.

The important thing here is to build the protruding parts, like the balcony, and the sunken areas, like the windows, in 3D, because this is where parallax and perspective shift will be most noticeable.

More texture detail is added to the house, while the adjacent structure is also built with simple geometry.

All the distant elements can be simple 2.5D cards. There is really no need to add any 3D detail, as long as the projection cards are placed at the correct distance from the camera, based on the camera tracking information.

Lawn grass, bushes, distant trees, and other elements are added as 2.5D cards.

Finally, the large foreground tree is added as a full 3D model. This allowed us to achieve subtle but necessary parallax within the tree itself, and to add some breeze animation to the leaves.

Compromises like these are absolutely necessary in order to ensure proper usage of resources, time, and money. In an ideal world, doing everything in 3D could be great (assuming all the 3D work, from modeling to lighting, is top notch, of course).

But in the realities of filmmaking, a pragmatic approach that takes into consideration practical limitations often leads to better end results.

Separation

Almost every VFX element is added into footage that already contains objects at different depths. This means that the element most likely needs to be inserted behind some parts of the footage (unless the element is in front of everything else).

For example, adding a sign to a storefront at the opposite side of a busy street means that all the people and cars that pass in front of the sign need to be separated and then put back on top of the newly added sign. This is a constant challenge because, as discussed in the previous chapter, the footage itself is two-dimensional and there is no way to separate objects within it based on their distance from the camera (3D camera tracking can solve the relative position of objects, but cannot accurately trace their outline).

Separation is not just required for depth sorting. In many instances, a specific area in the footage needs to be treated or modified. This area needs to be isolated, and any parts in the footage that stand or cross in front of it need to be separated too, otherwise they will become affected by the treatment.

For instance, if you want to change the color of a car that passes behind some trees, you need to isolate the car, as well as any branches and leaves that cross in front of it.

A simple cleanup of a sign on the bus requires separating the elements that are in front of the bus.

As you can tell, the task of separation is an essential and fundamental process in VFX.

It may therefore come as a surprise that there really are only two methods of doing it: rotoscoping and green (or blue) screen. Both methods, to be perfectly honest, are not exactly models of high-tech elegance, and both require a considerable amount of work on the part of the VFX team (and, in the case of green screens, also on the part of the film crew).

Rotoscoping

Rotoscoping existed long before computers. In the Disney animated film Sleeping Beauty, for example, real actors were filmed first, then the animators traced their contours and applied the resulting moving shapes to the animated characters—a sort of pre-digital motion capture technique (see Chapter 5).

Technically speaking, modern rotoscoping (roto) is very similar: the artist places a series of connected points around the outline of the subject. It is essentially the same as drawing a line around the subject with a pencil, but the difference is that the points can be animated over time, and thus the roto shape can accurately follow the movements and deformations of the subject.

Partial roto for the kid.

Notice that a separate roto shape is used for the head and the upper torso.

Roto for complex subjects like people is usually broken down into many pieces, which helps the process of animating the roto and makes it more efficient. Evidently, roto is a laborious and time-consuming process. But in the absence of a green screen, or when a green screen is not practically possible, roto is the only option for separating a subject from the background.
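The core mechanic that makes roto feasible at all—keyframing each control point on a few frames and interpolating in between, instead of placing every point on every frame—can be sketched in a few lines. This is a bare linear version for illustration; real roto tools use splines and far smarter interpolation.

```python
# Minimal sketch of an animated roto control point: keyframed positions,
# linearly interpolated on the frames in between.
def lerp(a, b, t):
    return a + (b - a) * t

def point_at_frame(keys, frame):
    """keys: sorted list of (frame, (x, y)) keyframes for one control point."""
    if frame <= keys[0][0]:
        return keys[0][1]
    if frame >= keys[-1][0]:
        return keys[-1][1]
    for (f0, p0), (f1, p1) in zip(keys, keys[1:]):
        if f0 <= frame <= f1:
            t = (frame - f0) / (f1 - f0)
            return (lerp(p0[0], p1[0], t), lerp(p0[1], p1[1], t))

# One point of a shape, keyed on frames 1 and 11 as the subject moves:
keys = [(1, (100.0, 200.0)), (11, (140.0, 220.0))]
print(point_at_frame(keys, 6))  # halfway between keys: (120.0, 210.0)
```

The artist's labor goes into choosing where keyframes are needed—wherever the interpolated shape drifts off the subject, another key must be set, which is why erratic motion makes roto so expensive.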

In small amounts, roto is an indispensable part of the basic compositing workflow. But shots that require massive amounts of roto can become expensive and time-consuming, and should be avoided unless a green screen option is absolutely not feasible.

Roto works well for subjects with hard, well-defined edges, but is not as successful for thin wispy parts like hair and fur, or when edges are very fuzzy (for example, when the subject is out of focus or when it has motion blur). The lion in The Wolf of Wall Street is a good example. The lion was real (held on a leash by a trainer), and so were the office workers. For obvious reasons, they were shot separately. A green screen was not practical in this setting, so we had to roto out the lion in order to put the office workers behind it and remove the trainer and the leash.

The body of the lion could be easily roto-ed, but not the mane—it is quite impossible to roto out thin, semi-transparent strands of hair.

The lion from The Wolf of Wall Street had to be cut out so that the office workers could be inserted behind it.

The outer parts of the mane were recreated, because this type of detail is too thin and wispy to be extracted with roto.

Basically, we created an artificial mane that looked close enough to the original one, but did not require a frame-by-frame roto. A green screen (see later) enables a much better separation of fine, semi-transparent detail, and provides a consistent rather than busy background.

It is therefore important to consider the type of subject before deciding on whether to place a green screen or resort to roto. A head covered with a hat will be much easier to roto out than one with long frizzy hair. At this point you might wonder why, with all that cutting-edge technology invested in visual effects, people are still painstakingly moving points frame by frame to trace the shape of an actor.

The answer is that computers do not see images the way humans do. Unlike those clunky face recognition algorithms found in many consumer cameras, VFX separation requires extremely accurate tracing of the outline of the subject. We distinguish between objects by recognition and association. The computer can only make a distinction based on numerical values. Color, for example, is something that the computer can evaluate much more accurately than humans.

But in any regular setting, the actors and the background behind them have too many similar colors to allow a computer-assisted, color-based separation—unless, of course, the background is made of one consistent color that does not appear in the foreground. And this, in fact, leads us to the second separation method.

Green Screen

Arguably no other subject in visual effects is discussed ad nauseam like green screens.

That said, green screens are indeed a vital tool that merits detailed discussion, because this is one of those areas in VFX that relies equally on the successful performance of the film crew and the VFX team.

Many green screen problems can be minimized, or avoided altogether, by properly setting up and lighting the screen and the subject on set. The practical issues of physically setting up the green screen (color consistency, coverage, shadows, spill, tracking markers, lighting, etc.) are covered in Chapter 8. Here, I would like to offer some insight into common issues of green screen extraction—problems that seem to prevail in post even when the screen was perfectly placed and lit on set.

A quick note on screen color: the use of blue screens has declined in recent years, for several reasons. A blue screen is obviously still needed when green is present in the scene (vegetation, for example), and is still sometimes preferable to green in low-light scenarios because it bounces less light back.

Generally, though, green screens work better, as they are easier to light and easier to extract. Still, the discussion of green screens in this chapter and in Chapter 8 applies equally to blue screens—the principles are the same.

The Challenges of Extraction

The idea of a green screen is simple: place a uniform, highly saturated color behind the subject, and let the software isolate everything that is not that color. Highly saturated green is far removed from human skin tones and does not frequently appear on clothes and most objects, so it is a good choice as a background color.
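The heart of a keyer can be sketched in a few lines: the more a pixel's green channel dominates its red and blue, the more transparent it becomes. This is a textbook-style simplification with an assumed gain value—not the algorithm of any particular keying tool, which would be vastly more sophisticated.

```python
# Bare-bones "greenness" keyer sketch. Alpha drops wherever green
# dominates red and blue. The gain of 4.0 is an arbitrary assumption.
def key_alpha(r, g, b):
    """0.0 = pure green screen, 1.0 = fully foreground (values in 0..1)."""
    greenness = g - max(r, b)
    return max(0.0, min(1.0, 1.0 - greenness * 4.0))

print(key_alpha(0.1, 0.9, 0.1))   # saturated screen green -> 0.0
print(key_alpha(0.8, 0.6, 0.5))   # warm skin tone -> 1.0
print(key_alpha(0.3, 0.45, 0.3))  # hair edge mixed with screen -> partial
```

Note how the third pixel lands between 0 and 1: that partial alpha is exactly what makes green screens so much better than roto at semi-transparent edges—and also the source of the edge problems discussed below, since those pixels still contain some of the screen's color.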

A perfectly uniform and well-lit green screen is obviously important for achieving a clean extraction, but it is not a guarantee of a successful shot.

Background Matching

The number one reason for unsuccessful green screen compositing is not the quality of the green screen, the difficulties of extraction, or the skills of the compositor; it is the choice of background that replaces the green screen.

Many filmmakers assume that a green screen gives them the freedom to place any kind of background behind their subject, but this assumption is wrong.

It is very difficult to marry successfully a background and a foreground that are very different in terms of lighting, perspective, luminance values, or hues. It is therefore necessary to think in advance about the background and the environment that the subject will eventually be in, and light both the subject and green screen with that environment in mind.

If, for example, the subject is supposed to end up in a dark interior, avoid shooting the green screen outside under a bright sky. Shoot indoors instead, or use black overhead screens to reduce the intensity of the natural sky light. Likewise, if you plan to place the subject in a bright, sunny environment and you must shoot on a stage or under a cloudy sky, light the subject with a strong warm key and bluish fill.

Once the shot has been captured, it is equally imperative not to try to force a mismatched background behind the subject, because it is very hard, often impossible, to change the lighting on 2D footage.

A green screen shot from a short movie that I used for my compositing course at fxphd. There is too much of a difference in contrast, brightness, and hue between the bright, hazy, and warm foreground and the rather dark and cool background.

Notice also how the background feels too sharp and in-focus compared to the foreground. Some color and focus adjustments improve the shot quite a bit, making everything feel more integrated and natural.

On another project, we had to deal in post with shots captured by two different cameras: a GoPro for the first-person POV shots, and the main camera for the wider shots of the actor. Both types of shots were done in a fairly bright, consistent ambient lighting environment because of the extensive use of green screens (in essence, a day-for-night scenario).

When we composited the shots, we darkened and graded the foreground footage. This worked very well with the GoPro first-person POV footage, because we could really take down the brightness and saturation of the crane to match it to the NYC nighttime background. The wider shots of the actor were trickier. Brightening the actor would make him feel detached from the environment, brightening just his face would make him look weird, and brightening the environment instead would create a mismatch with the first-person POV shots in the sequence.

We ended up doing a bit of everything to make the foreground and background work together, but it was certainly a difficult balancing act, not unlike walking on a crane at night.

Hanging from a crane over a NYC street.

Spill

Green spill (or blue, in the case of a blue screen) is an unavoidable side effect, although it can certainly be minimized on set by following a few simple rules (see Chapter 8).

Compositors have some excellent spill-suppression tools at their disposal, and these are used to kill areas of unwanted green hue on the subject without changing the overall color balance.
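One classic, simple spill-suppression trick—one of several approaches, and far cruder than the tools compositors actually use—is to clamp the green channel so it can never exceed the larger of the red and blue channels. Neutral and warm pixels pass through untouched; green-contaminated pixels lose only their excess green.

```python
# Simple spill suppression sketch: limit green to max(red, blue).
def suppress_spill(r, g, b):
    """Return the pixel with any excess green clamped away."""
    return (r, min(g, max(r, b)), b)

print(suppress_spill(0.55, 0.70, 0.45))  # spill on skin -> (0.55, 0.55, 0.45)
print(suppress_spill(0.55, 0.50, 0.45))  # clean pixel   -> unchanged
```

The failure mode described next falls out of the math: clamp too aggressively (or on pixels that were legitimately greenish) and the result skews red/magenta.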

But if you are reviewing a VFX shot and the subject still feels a bit odd and unnatural, it might be that spill is still present (look especially around shiny skin areas like the forehead, bright clothing, and wispy hair and fur).

Conversely, sometimes a heavy-handed spill suppression affects too much of the original color—overly red or magenta skin tones are usually a sign of excessive spill suppression.

After extraction and with background added: notice the green spill around the wispy edges of her hair, as well as on her white shirt.

Sometimes, spill occurs even where the green screen is not visible. In this frame, the camera is looking in the opposite direction of the green screen, but the green is still reflected in the glass panels.

Spill suppression helps bring the color back to a neutral place.

Non-solid Edges

If the green screen is clean, consistent, and well lit, one can assume that the computer can accurately extract the subject. This assumption is right as long as the edges of the subject are sharp and solid. But this is hardly ever the case, as edges are often soft and semi-transparent.

Hair, fur, motion blur, and out-of-focus areas all produce such edges. In all these cases, the soft, semi-transparent edges are harder to extract, because they partially contain the green screen. This often results in dark or bright edges, or areas that seem unnaturally sharp and cut-out. Since these issues are often very subtle and hard to spot, the best way to detect them is to compare the VFX version with the original green screen footage—an easy thing to do on the Avid or any other editing tool.

Experienced compositors have a whole arsenal of methods and tricks to extract tough areas like hair, bring back soft motion-blurred edges that get cut off in the extraction process, or even extract wispy smoke or fog.

That said, sometimes extractions are overly processed in an attempt to fix bad edges. A good example is light wrap—a compositing treatment that bleeds a little of the new background into the edges of the subject. Used subtly, it helps the integration; overdone, it gives the subject an unnatural glow. Green screen shots can and should look natural and convincing. But if a shot feels a little off, look first at the relationship between the subject and the background. Is the lighting similar? Are the colors and luminosity matching? Does the focus make sense? If all this feels good, look at the edges, and compare against the original footage.

This is a tricky extraction, because one actress is fully in focus while the other is completely out of focus.

This is especially apparent on the screen-left shoulder of the distant actress, as well as all around the hair of the closer one. Additional compositing work was done to alleviate the issues of soft edges and mismatched luminosity, and the integration between foreground and background now feels less disjointed.

Is an out-of-focus object too sharp at the edges? Does a fast-moving arm lack its motion blur trail? Are there any unnaturally dark or bright edges along the subject?

Both green screen and roto are essential for separation, but both are far from ideal solutions. Eventually they will be replaced by better, more efficient technologies (see the chapter The Future), but until then, roto and green screen remain a fundamental aspect of visual effects and require attention and diligence—when planning the shots, on set, and during post.

Walk onto a busy film set, and the division of labor is obvious at a glance: the set builders are busy constructing, the camera crew is practicing the next move, the wardrobe people are sorting through the costumes, the gaffers are putting up a lighting rig, the hair and makeup people are adding the last touches on the actors, and the stunt team is rigging safety equipment.

But step into a VFX facility, and pretty much everyone you see is sitting in front of a computer screen. With everyone using just a mouse and a keyboard, it is hard to tell what each artist is doing, and in what way the work of one artist is different from that of the artist sitting right next to them. In fact, as I mentioned in Chapter 1, there are many points of similarity. This chapter will take you on a guided tour through a typical VFX facility, presenting and explaining each craft and its contribution to the overall process.

For the filmmaker, being familiar with the different crafts of VFX is beneficial every step of the way—from initial breakdowns and bidding, through the shooting stage, and into post-production. Previous chapters have already made clear that the number of artists working on a single shot may vary substantially.

While many VFX shots need only one compositor to complete, highly elaborate shots may require ten or even more artists performing a variety of tasks, from camera tracking and roto through animation and lighting to dynamic simulations and matte painting.

The internal flow of work in a VFX facility, and the way a single shot may pass from one artist to another in a certain order, is usually referred to as the pipeline. Admittedly, the term has a bit of an industrial tinge to it, but this is far from a conveyor-belt scenario. Rather, it is a team collaboration, where different artists contribute to the final result by working in their own area of expertise, or craft.

The work of an animator is very different from that of a matte painter and requires a completely different skillset. The software tools they use are also different. As it is, all-round players in VFX are rare, but supervisors and lead artists, by nature of their job, usually have a broader understanding of the different crafts.

Previsualization (previs) and concept art take place long before post-production, often at the early stages of pre-production. They differ from the other VFX crafts in the sense that they precede all other work, and are used as planning and development tools rather than building blocks of the finished shots.

Previs

Previsualization allows the filmmaker to design shots both in terms of camera movement and the actual action. It is a way of blocking out how various players interact and how they are framed. More often than not, the live actors and elements are depicted as well (usually represented by animated dummies).

You can look at previs as animated 3D storyboarding, and the advantage is clear—the filmmakers can really play around with the camera, as well as the position and movement of various players, in 3D space.

A previs for Boardwalk Empire.

Notice the rather rough look of the elements and the use of a piece of an old photograph. As crude as it was, it helped visualize the boardwalk long before any VFX were actually done.

In fact, some directors go to the extreme and previs their entire movie from start to finish. That said, previs on this scale is a luxury reserved for high-budget productions. Although previs artists use basic models and rough out the animation to show only the important aspects, previs is still a delicate and time-consuming process that requires skilled artists with a keen sense of camera movement and timing.

Also, investing in previsualization only makes sense if the filmmakers adhere strictly to the design of the shots. Going out on set and shooting something completely different from the previs is a total waste of money. On small-budget films, it is wise to previs only those shots or sequences for which precise planning of layout and camera moves is crucial, or shots that rely heavily on CG.

Concept Art

While the previs process focuses on position and timing, concept art is all about the look.

If, for example, your film features some sort of CG character in a prominent role, you will need to start the concept art process early enough in pre-production to allow ample time for look development. Later on, when the CG artists start modeling, texturing, and shading the character, they will need to have clear and detailed concept art references as guides to the look that was established and decided upon by the filmmakers and VFX team.

It is certainly not a good idea to start making drastic changes at the CG stage, because of the technical difficulties of modifying models, rigs, textures, and UVs. It is much faster, and much more efficient, to play and experiment with the look and design during the concept stage, since the concept artist can quickly draw or paint new versions.

Concept artists are usually exceptionally versatile and have strong traditional art technique, though many work digitally, in Photoshop or even using 3D software. Concept art styles vary widely, from blocky sketches and mood boards to ultra-realistic detailed designs. But the principle is the same: the amount of concept work should match how unfamiliar the element is. A present-day action movie that requires mostly CG replicas of present-day vehicles, weapons, or environments, for example, can probably do just fine without spending on concept art—after all, these are known objects and reference material is abundant.

A period film where historical accuracy is important might benefit more from archival photos or paintings as reference sources.

But concept art is crucial for imaginary and fantastic elements that need to be conceived from scratch, and is therefore prevalent in the sci-fi, fantasy, and superhero genres.

Camera Tracking

As discussed in the previous chapter, camera tracking (also called matchmove or 3D tracking) is a crucial step in the VFX chain.

Whether the shot is 3D-heavy or uses 2.5D elements, an accurate track is essential. No matter how good the VFX work on a shot is, a sloppy camera track will cause objects to slide and float in relation to the footage, which is an instant shot-killer. This way, work is efficiently done only on the necessary areas. Camera tracking is essentially a reverse-engineering process. Lens distortion (a natural occurrence, especially with wider lenses) needs to be accounted for as well.

Usually the footage is analyzed for distortion, and then undistorted for the tracking process. Most of the CG work is done on the undistorted version, and then re-distorted in comp to bring everything back to the original look.

Each tracker shows its motion path for that frame.
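The undistort–track–redistort round trip can be sketched with a toy one-parameter radial distortion model. This is a simplification of what tracking software actually solves; the function names and the single `k1` coefficient are illustrative, not any particular package's API:

```python
import numpy as np

def distort(pts, k1):
    """Apply a simple one-parameter radial lens distortion.

    pts: Nx2 array of normalized image coordinates ((0, 0) = optical center).
    k1:  radial distortion coefficient (hypothetical single-term model).
    """
    r2 = np.sum(pts**2, axis=1, keepdims=True)
    return pts * (1.0 + k1 * r2)

def undistort(pts, k1, iterations=10):
    """Invert the distortion by fixed-point iteration (no closed form)."""
    undist = pts.copy()
    for _ in range(iterations):
        r2 = np.sum(undist**2, axis=1, keepdims=True)
        undist = pts / (1.0 + k1 * r2)
    return undist

# Round trip: undistort for tracking/CG work, re-distort in comp.
original = np.array([[0.5, 0.3], [-0.4, 0.2]])
straightened = undistort(original, k1=-0.1)
back = distort(straightened, k1=-0.1)
```

After the round trip, `back` matches `original` to well below a pixel, which is why CG rendered over the undistorted plate lines up again once it is re-distorted.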


The roto around the actress is used to mask her out of the tracking process. For obvious reasons, only static objects can be used to track the camera move. When the camera movement is solved, a basic mesh can be created by connecting the main tracking points. This is purely technical work, and requires virtually no creative input from the filmmakers.

Layout

Think of the layout stage as the virtual equivalent of blocking out a shot on set—figuring out the placement of actors and extras, rehearsing the camera move, timing the action. It is similar to previs, but is usually done at a later stage, after the footage has already been shot and a rough cut established.

When a VFX shot consists mostly of modifications to the plate or extensions to an existing set, a layout stage is not necessary. However, when the shot relies heavily on CG elements and animation, or when a shot is created fully in CG, layout is crucial. These basic decisions on timing and position have a decisive effect on additional work like destruction, fire, and smoke simulations, as well as water interaction (all described later in this chapter).

This example shows why it is important for the filmmaker to approve and lock the basic moves early on, and conversely, why it is not wise to change the layout decisions at a later stage.

Modeling

All 3D models are made of the same basic elements: points, edges, and polygons. Combine enough polys (short for polygons) and you can create any surface imaginable. Naturally, smooth surfaces need more polys (a cube is just 6 polys, but a sphere requires many more to appear smooth), and the more detailed and complex an object is, the more polys it requires.

As I mentioned in Chapter 1, unlike computer games, where real-time performance mandates a constant awareness of poly counts (and lots of trickery to keep them low), in visual effects the modeler has much more leeway with the amount of detail and complexity, because the rendering is never done in real time.
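To put rough numbers on the poly-count tradeoff, here is a back-of-the-envelope sketch. The quad count of a UV sphere is approximated as segments times rings (poles actually collapse to triangles); the resolutions are arbitrary:

```python
def uv_sphere_polys(segments, rings):
    """Approximate quad count of a UV sphere: one quad per grid cell.
    (The pole rows are really triangles, but this is the usual rough measure.)"""
    return segments * rings

# Doubling the resolution in both directions quadruples the polygon count:
for res in (8, 16, 32, 64):
    print(res, uv_sphere_polys(res, res))
```

An 8x8 sphere (64 quads) reads as visibly faceted; a 64x64 sphere (4,096 quads) has a smooth silhouette. Compare that with a cube, which needs only 6 polys no matter how close the camera gets to a flat face.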

That said, keeping poly counts reasonably low is still a consideration, as it makes scenes more manageable and faster to render. The two main categories of modeling are technical modeling and organic modeling—roughly the equivalents of construction and sculpting in real life.

This clearly shows how the number of polygons grows exponentially with more detail.

Technical Modeling

Technical models include cars, airplanes, ships, buildings, spaceships, furniture, robots, machines, weapons, and basically anything man-made or alien-made that requires precision construction techniques.

Technical modeling often referred to as hard-surface modeling is commonly based on blueprints, diagrams, or reference photos, and the model is usually constructed with many smaller parts, just like in real life. The exception, of course, is that only parts that will actually be visible to the camera are built.

Organic Modeling

Organic modeling is very similar to sculpting, and is therefore better suited for continuous-surface models of humans, animals, and creatures of all kinds. It is less about technical precision and more about a thorough knowledge of anatomy and physical proportions.

The best organic modelers often have traditional sculpting skills, and must also have a good understanding of how characters move in order to create muscles and joints that will bend and bulge realistically when rigged and animated.

A highly detailed organic model by David Eschrich. Such a level of detail does not come cheap in terms of polygons—this model contains over 5 million of them. But there are ways (like displacement, discussed later in this chapter) to reduce the polygon count and still retain much of the small detail. Most of the detail will come from the textures.


The important thing here is to build the protruding parts (like the balcony) and the sunken areas (like the windows) in 3D, because this is where parallax and perspective shift will be most noticeable. More texture detail is added to the house while the adjacent structure is also built with simple geometry. All the distant elements can be simple 2.5D cards. There is really no need to add any 3D detail, as long as the projection cards are placed at the correct distance from the camera, based on the camera tracking information.

Lawn grass, bushes, distant trees, and other elements are added as 2.5D cards. Finally, the large foreground tree is added as a full 3D model. This allowed us to achieve subtle but necessary parallax within the tree itself, and to add some breeze animation to the leaves. Decisions like these are absolutely necessary in order to ensure proper use of resources, time, and money. In an ideal world, doing everything in 3D could be great (assuming all the 3D work, from modeling to lighting, is top notch, of course).

But in the realities of filmmaking, a pragmatic approach that takes practical limitations into consideration often leads to better end results. When a new element is added to a shot, it most likely needs to be inserted behind some parts of the footage (unless the element is in front of everything else). For example, adding a sign to a storefront at the opposite side of a busy street means that all the people and cars that pass in front of the sign need to be separated and then put back on top of the newly added sign.

This is a constant challenge because, as discussed in the previous chapter, the footage itself is two-dimensional and there is no way to separate objects within it based on their distance from the camera (3D camera tracking can solve the relative position of objects, but cannot accurately trace their outline). Separation is not just required for depth sorting. In many instances, a specific area in the footage needs to be treated or modified.

This area needs to be isolated, and any parts of the footage that stand or cross in front of it need to be separated too, otherwise they will become affected by the treatment. For instance, if you want to change the color of a car that passes behind some trees, you need to isolate the car as well as the trees that cross in front of it.

A simple cleanup of a sign on the bus requires separating the elements that are in front of the bus.

As you can tell, the task of separation is an essential and fundamental process in VFX.

It may therefore come as a surprise that there really are only two methods of doing it: rotoscoping and green screen extraction. Both methods, to be perfectly honest, are not quite hi-tech elegance, and both require a considerable amount of work on the part of the VFX team (and in the case of green screens, also on the part of the film crew).

Rotoscoping

Rotoscoping existed long before computers. In the Disney animated film Sleeping Beauty, for example, real actors were filmed first, then the animators traced their contours and applied the resulting moving shapes to the animated characters as a sort of pre-digital motion capture technique (see Chapter 5).

Technically speaking, modern rotoscoping (roto) is very similar: the artist traces the subject with shapes made of dots (control points). It is essentially the same as drawing a line around the subject with a pencil, but the difference is that the dots can be animated over time, and thus the roto shape can accurately follow the movements and deformations of the subject.

Partial roto for the kid.

Notice that a separate roto shape is used for the head and the upper torso. Roto for complex subjects like people is usually broken down into many pieces, which helps the process of animating the roto and makes it more efficient. Evidently, roto is a laborious and time-consuming process. But in the absence of a green screen, or when a green screen is not practically possible, roto is the only option for separating a subject from the background.
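The mechanics of an animated roto shape can be sketched in a few lines: control points are keyframed on a handful of frames and interpolated on the in-betweens. This is a deliberately minimal stand-in (linear blending of a fixed point set) for the bezier splines real roto tools use; all names here are hypothetical:

```python
import numpy as np

# A "shape" is a fixed set of control points, keyed on two frames.
keyframes = {
    0:  np.array([[100.0, 100.0], [200.0, 100.0], [150.0, 200.0]]),
    10: np.array([[110.0, 105.0], [210.0, 108.0], [160.0, 215.0]]),
}

def roto_shape(frame, keys):
    """Return the interpolated point positions for an arbitrary frame."""
    frames = sorted(keys)
    if frame <= frames[0]:
        return keys[frames[0]]
    if frame >= frames[-1]:
        return keys[frames[-1]]
    # Find the surrounding keyframes and blend linearly between them.
    for a, b in zip(frames, frames[1:]):
        if a <= frame <= b:
            t = (frame - a) / (b - a)
            return (1 - t) * keys[a] + t * keys[b]

shape_at_5 = roto_shape(5, keyframes)  # halfway between the two keys
```

The artist only hand-places points on the keyframes; the software fills in every frame in between, which is what makes roto feasible at all (though still laborious).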

In small amounts, roto is an indispensable part of the basic compositing workflow. But shots that require massive amounts of roto can become expensive and time-consuming, and should be avoided unless a green screen option is absolutely not feasible. Roto works well for subjects with hard, well-defined edges, but is not as successful for thin, wispy parts like hair and fur, or when edges are very fuzzy (for example, when the subject is out of focus or has motion blur). In one shot from The Wolf of Wall Street, the lion was real (held on a leash by a trainer), and so were the office workers.

For obvious reasons, they were shot separately. A green screen was not practical in this setting, so we had to roto out the lion in order to put the office workers behind it and remove the trainer and the leash.

The body of the lion could be easily roto-ed; it is quite impossible, however, to roto out thin, semi-transparent strands of hair.

The lion from The Wolf of Wall Street had to be cut out so that the office workers could be inserted behind it. The outer parts of the mane were recreated, because this type of detail is too thin and wispy to be extracted with roto.

Basically, we created an artificial mane that looked close enough to the original one, but did not require frame-by-frame roto. A green screen (see later) enables a much better separation of fine, semi-transparent detail, and provides a consistent (rather than busy) background. It is therefore important to consider the type of subject before deciding whether to place a green screen or resort to roto. A head covered with a hat will be much easier to roto out than one with long frizzy hair.

At this point you might wonder why, with all that cutting-edge technology invested in visual effects, people are still painstakingly moving points frame by frame to trace the shape of an actor. Unlike the clunky face recognition algorithms found in many consumer cameras, VFX separation requires extremely accurate tracing of the outline of the subject. We humans distinguish between objects by recognition and association; the computer can only make a distinction based on numerical values.

Color, for example, is something that the computer can evaluate much more accurately than humans. But in any regular setting, the actors and the background behind them have too many similar colors to allow a computer-assisted, color-based separation—unless, of course, the background is made of one consistent color that does not appear in the foreground.

And this, in fact, leads us to the second separation method.

Green Screen

Arguably no other subject in visual effects is discussed as ad nauseam as green screens. That said, they are indeed a vital tool that merits detailed discussion, because this is one of those areas in VFX that relies equally on the successful performance of the film crew and the VFX team.

Many green screen problems can be minimized, or avoided altogether, by properly setting up and lighting the screen and the subject on set.

The practical issues of physically setting up the green screen (color consistency, coverage, shadows, spill, tracking markers, lighting, etc.) are covered in Chapter 8. Here, I would like to offer some insight into common issues of green screen extraction—problems that seem to prevail in post even when the screen was perfectly placed and lit on set.

The use of blue screens has declined in recent years, for several reasons. A blue screen is obviously still needed when green is present in the scene (vegetation, for example), and is still sometimes preferable to green in low-light scenarios because it bounces less light back. Generally, though, green screens work better, as they are easier to light and easier to extract. Still, the discussion of green screens in this chapter and in Chapter 8 applies equally to blue screens—the principles are the same.

The Challenges of Extraction

The idea of a green screen is simple: highly saturated green is far removed from human skin tones and does not frequently appear on clothes or most objects, so it is a good choice as a background color.
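That color-based logic can be illustrated with a deliberately crude keyer: the matte (alpha) is derived from how much green dominates the other two channels. Real keyers are far more sophisticated; the function and the pixel values here are purely illustrative:

```python
import numpy as np

def green_key_alpha(rgb):
    """Crude keyer: foreground alpha from how much green exceeds the
    other channels. Illustrative only; production keyers do far more."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    screen = np.clip(g - np.maximum(r, b), 0.0, 1.0)
    return 1.0 - screen  # 1.0 = fully foreground, 0.0 = fully screen

pixels = np.array([
    [0.1, 0.9, 0.1],  # saturated green screen -> mostly transparent
    [0.8, 0.6, 0.5],  # skin tone -> fully opaque
    [0.3, 0.5, 0.2],  # greenish edge pixel -> partial alpha
])
alpha = green_key_alpha(pixels)
```

Notice the third pixel: a semi-transparent edge contaminated by the screen gets a partial alpha, which is exactly where all the trouble described below (edges, spill) comes from.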

A perfectly uniform and well-lit green screen is obviously important for achieving a clean extraction, but is not a guarantee of a successful shot.

Background Matching

The number one reason for unsuccessful green screen compositing is not the quality of the green screen, the difficulties of extraction, or the skills of the compositor; it is the choice of background that replaces the green screen.

Many filmmakers assume that a green screen gives them the freedom to place any kind of background behind their subject, but this assumption is wrong. It is very difficult to successfully marry a background and a foreground that are very different in terms of lighting, perspective, luminance values, or hues. It is therefore necessary to think in advance about the background and the environment that the subject will eventually be in, and to light both the subject and the green screen with that environment in mind.
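What "matching" means can be made concrete with a blunt numerical sketch: shift the foreground plate's per-channel mean and spread toward the background's. This is an illustrative toy (hypothetical function, tiny made-up plates), not a substitute for a compositor's grade:

```python
import numpy as np

def match_stats(fg, bg):
    """Nudge the foreground's per-channel mean and standard deviation
    toward the background's -- a blunt first pass at marrying two plates."""
    out = fg.astype(float).copy()
    for c in range(3):
        f_mean, f_std = out[..., c].mean(), out[..., c].std()
        b_mean, b_std = bg[..., c].mean(), bg[..., c].std()
        if f_std > 0:
            out[..., c] = (out[..., c] - f_mean) / f_std * b_std + b_mean
    return np.clip(out, 0.0, 1.0)

# A bright, warm foreground vs. a dark, cool background (1x2-pixel "plates").
fg = np.array([[[0.9, 0.7, 0.5], [0.8, 0.6, 0.4]]])
bg = np.array([[[0.2, 0.3, 0.5], [0.3, 0.4, 0.6]]])
matched = match_stats(fg, bg)
```

After matching, the foreground's average color sits where the background's does; a real comp would of course grade selectively rather than globally.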

If the subject is meant to end up in a dark or dimly lit environment, avoid shooting it under a bright open sky; shoot indoors instead, or use black overhead screens to reduce the intensity of the natural sky light. Likewise, if you plan to place the subject in a bright, sunny environment and you must shoot on a stage or under a cloudy sky, light the subject with a strong warm key and bluish fill.

Once the shot has been captured, it is equally imperative not to try to force a mismatched background behind the subject, because it is very hard, often impossible, to change the lighting on 2D footage.

A green screen shot from a short movie that I used for my compositing course at fxphd. There is too much of a difference in contrast, brightness, and hue between the bright, hazy, and warm foreground and the rather dark and cool background.

Notice also how the background feels too sharp and in-focus compared to the foreground. Some color and focus adjustments improve the shot quite a bit, making everything feel more integrated and natural.

In post, we had to deal with shots captured by two different cameras: the main camera, and a GoPro used for the first-person POV shots. Both types of shots were done in fairly bright, consistent ambient lighting because of the extensive use of green screens (in essence a day-for-night scenario).

When we composited the shots, we darkened and graded the foreground footage. This worked very well with the GoPro first-person POV footage, because we could really take down the brightness and saturation of the crane to match it to the NYC nighttime background.

The shots of the actor were trickier. Brightening the actor would make him feel detached from the environment, brightening just his face would make him look weird, and brightening the environment instead would create a mismatch with the first-person POV shots in the sequence.

We ended up doing a bit of everything to make the foreground and background work together, but it was certainly a difficult balancing act, not unlike walking on a crane at night.

Hanging from a crane over a NYC street.

Spill

Green spill (or blue, in the case of a blue screen) is an unavoidable side effect, although it can certainly be minimized on set by following a few simple rules (see Chapter 8). Compositors have some excellent spill-suppression tools at their disposal, and these are used to kill areas of unwanted green hue on the subject without changing the overall color balance.

But if you are reviewing a VFX shot and the subject still feels a bit odd and unnatural, it might be that spill is still present (look especially around shiny skin areas like the forehead, bright clothing, and wispy hair and fur). Conversely, sometimes heavy-handed spill suppression affects too much of the original color—overly red or magenta skin tones are usually a sign of excessive spill suppression.
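One classic despill approach (often called a green limit) simply clamps the green channel so it never exceeds the average of red and blue. The sketch below is illustrative; production despill tools offer far finer control, and this blunt version is exactly the kind of heavy hand that can push skin toward magenta:

```python
import numpy as np

def suppress_green_spill(rgb):
    """Green-limit despill: clamp green to the average of red and blue.
    Blunt by design -- genuinely green subjects get desaturated too."""
    out = rgb.astype(float).copy()
    limit = (out[..., 0] + out[..., 2]) / 2.0
    out[..., 1] = np.minimum(out[..., 1], limit)
    return out

# A spill-contaminated skin pixel: green pushed above the red/blue average.
spilled = np.array([[0.8, 0.75, 0.6]])
clean = suppress_green_spill(spilled)
```

Only the excess green is removed; red and blue are left untouched, which is why the overall color balance survives.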

After extraction and with the background added. Notice the green spill around the wispy edges of her hair, as well as on her white shirt.

Sometimes, spill occurs even where the green screen is not visible. In this frame, the camera is looking in the opposite direction from the green screen, but the green is still reflected in the glass panels.

Spill suppression helps bring the color back to a neutral place.

Non-solid Edges

If the green screen is clean, consistent, and well lit, one can assume that the computer can accurately extract the subject. This assumption is right as long as the edges of the subject are sharp and solid. But this is hardly ever the case, as edges are often soft and semi-transparent.

In all these cases, the soft, semi-transparent edges are harder to extract, because they partially contain the green screen. This often results in dark or bright edges, or areas that seem unnaturally sharp and cut-out. Since these issues are often very subtle and hard to spot, the best way to detect them is to compare the VFX version with the original green screen footage—an easy thing to do on the Avid or any other editing tool.

Experienced compositors have a whole arsenal of methods and tricks to extract tough areas like hair, bring back soft motion-blurred edges that get cut off in the extraction process, or even extract wispy smoke or fog.

That said, sometimes extractions are overly processed in an attempt to fix bad edges. A good example is light wrap, a technique that bleeds a bit of the background's light onto the edges of the foreground; used with restraint it aids integration, but overdone it makes the subject look like it is glowing. Green screen shots can and should look natural and convincing. But if a shot feels a little off, look first at the relationship between the subject and the background.
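The core of light wrap can be sketched as: composite normally, then add a little blurred background onto the pixels where the matte transitions. Everything here (the tiny box blur, the edge weighting, the `amount` knob, a single scanline instead of a full image) is a hypothetical simplification of what comp packages provide:

```python
import numpy as np

def blur1d(x, radius=1):
    """Tiny box blur along the first axis (wraps at the ends -- demo only)."""
    acc = x.astype(float).copy()
    for r in range(1, radius + 1):
        acc += np.roll(x, r, axis=0) + np.roll(x, -r, axis=0)
    return acc / (2 * radius + 1)

def light_wrap(fg, bg, alpha, amount=0.5):
    """Composite fg over bg, then bleed blurred background onto the
    matte's transition pixels: wrap = blurred holdout * alpha, so it is
    zero deep inside the subject and zero in the open background."""
    comp = fg * alpha[:, None] + bg * (1.0 - alpha[:, None])
    wrap = blur1d(1.0 - alpha) * alpha
    return comp + amount * wrap[:, None] * blur1d(bg)

alpha = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])  # one scanline of matte
fg = np.full((6, 3), 0.2)                          # dark subject
bg = np.full((6, 3), 0.9)                          # bright background
out = light_wrap(fg, bg, alpha)
```

Deep inside the subject and out in the background nothing changes; only the boundary pixel brightens, which is the soft "wrapped" edge the technique is named for.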

Is the lighting similar? Are the colors and luminosity matching? Does the focus make sense? If all this feels good, look at the edges and compare them with the original footage.

This is a tricky extraction, because one actress is fully in focus while the other is completely out of focus.

This is especially apparent on the screen-left shoulder of the distant actress, as well as all around the hair of the closer one. Additional compositing work was done to alleviate the issues of soft edges and mismatched luminosity, and the integration between foreground and background now feels less disjointed.

Is an out-of-focus object too sharp at the edges?

Does a fast-moving arm lack its motion blur trail? Are there any unnaturally dark or bright edges along the subject? Both green screen and roto are essential for separation, but both are far from ideal solutions. Eventually they will be replaced by better, more efficient technologies (see the chapter The Future), but until then, roto and green screen remain a fundamental aspect of visual effects and require attention and diligence—when planning the shots, on set, and during post.

Step onto a film set and you can immediately tell who does what: the set builders are busy constructing, the camera crew is practicing the next move, the wardrobe people are sorting through the costumes, the gaffers are putting up a lighting rig, the hair and makeup people are adding the last touches on the actors, and the stunt team is fixing the safety rig.

But step into a VFX facility, and pretty much everyone you see is sitting in front of a computer screen. With everyone using just a mouse and a keyboard, it is hard to tell what each artist is doing, and in what way the work of one artist differs from that of the artist sitting right next to them. In fact, as I mentioned in Chapter 1, there are many points of similarity between the two worlds. This chapter will take you on a guided tour through a typical VFX facility, presenting and explaining each craft and its contribution to the overall process.

For the filmmaker, being familiar with the different crafts of VFX is beneficial every step of the way—from initial breakdowns and bidding, through the shooting stage, and into post-production. Previous chapters have already made clear that the number of artists working on a single shot may vary substantially. While many VFX shots need only one compositor to complete, highly elaborate shots may require ten or even more artists performing a variety of tasks, from camera tracking and roto through animation and lighting to dynamic simulations and matte painting.

The internal flow of work in a VFX facility, and the way a single shot may pass from one artist to another in a certain order, is usually referred to as the pipeline. Admittedly, the term has a bit of an industrial tinge to it, but this is far from a conveyor-belt scenario.

Rather, it is a team collaboration, where different artists contribute to the final result by working in their own area of expertise, or craft. The work of an animator is very different from that of a matte painter and requires a completely different skillset. The software tools they use are also different.

As it is, all-round players in VFX are rare, but supervisors and lead artists, by nature of their job, usually have a broader understanding of the different crafts. Previsualization (previs) and concept art take place long before the shots themselves are worked on, often at the early stages of pre-production. They are unique among the VFX crafts in the sense that they precede all other work, and are used as planning and development tools rather than building blocks of the finished shots.

Previs

Previsualization allows the filmmaker to design shots both in terms of camera movement and the actual action. It is a way of blocking out how various players interact and how they are framed. More often than not, the live actors and elements are depicted as well, usually represented by animated dummies.

If you just need some standard cars in the background or some humans in the far distance, pre-made stock models are a good alternative. Evidently, the higher-quality models are usually the most expensive, but even those could cost much less than paying a modeler to build a model from scratch.

You should, however, always consult with the VFX team—purchased models might have faulty topology or bad UV maps, which may require a substantial amount of cleanup and repair work. When working on the mini-series Sons of Liberty, we had to create several different 18th-century ships.

Since this ship is well known and well documented, there are quite a few decent licensed CG models of it, and we found a good one at TurboSquid. Although it still needed a substantial amount of texturing and some modeling refinements, using a pre-built model saved us days of modeling work and allowed us to spend more time building other ship models from scratch.

Photogrammetry

Yet for some modeling tasks, photogrammetry can be indispensable. It calls for nothing more than a decent stills camera, and is based on multiple photographs of the subject, taken from different angles. Special software then analyzes these photos and creates a 3D model replica of the original object. Since the photos include all the surface detail and colors, the 3D model comes fully textured.

Texturing and Shading

Modeling a 3D object is half the work; shading and texturing that model is the other half. A simple sphere with no texture could potentially be a hundred different things. It could be a basketball, an eyeball, a bubble, a cannonball, or a giant planet, to name just a few possibilities. Shaders are virtual materials. A shader describes the general properties of the material, or more specifically, how it reacts to light.

Is it shiny or dull? Transparent or opaque? Matte or reflective? Textures add the surface and color detail. Together, textures and shaders generate all the visual cues that turn the model into a believable real-world object.
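The "how it reacts to light" idea reduces to a few numbers. Below is a minimal diffuse-plus-specular response in the spirit of Lambert/Blinn-Phong shading; the function and parameter names are generic illustrations, not any renderer's actual shader interface:

```python
import numpy as np

def shade(normal, light_dir, view_dir, diffuse=0.8, specular=0.5, shininess=32):
    """Scalar light response at one surface point: a Lambert diffuse term
    plus a Blinn-style specular highlight. Toy model for illustration."""
    n = normal / np.linalg.norm(normal)
    l = light_dir / np.linalg.norm(light_dir)
    v = view_dir / np.linalg.norm(view_dir)
    h = (l + v) / np.linalg.norm(l + v)          # half vector
    lambert = max(np.dot(n, l), 0.0)             # dull, angle-dependent term
    highlight = max(np.dot(n, h), 0.0) ** shininess  # tight shiny hotspot
    return diffuse * lambert + specular * highlight

# Same geometry and light, different material parameters:
up = np.array([0.0, 0.0, 1.0])
matte = shade(up, up, up, specular=0.0)   # dull surface
shiny = shade(up, up, up, specular=0.9)   # glossy surface
```

The geometry never changes; only the shader parameters do, which is exactly why the same sphere can read as rubber or chrome.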

Notice the small detail, dents and imperfections.

Shaders

Think of a shader as a master container for a single material. It defines all the properties of that material, and contains all the textures that are used to control and vary those properties. The most common shader properties include diffuse color, specular, reflection, refraction, and transparency. Two additional shader properties are bump mapping and displacement. Bump mapping simulates small surface relief through shading alone, without altering the geometry. Displacement, on the other hand, actually changes the geometry of the model, and is a very effective way to add small detail without the need to physically model it.
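To make the distinction concrete, here is an illustrative Python sketch (made-up data, not from the book) of what displacement actually does: it moves real vertices along their normals by the sampled height, which is why the silhouette changes, whereas a bump map leaves the vertices untouched and fakes the relief in shading only.

```python
def displace(vertices, normals, height_map, scale):
    """Displacement mapping: physically move each vertex along its
    normal by the height sampled for it. The geometry really changes."""
    return [
        tuple(p + n * h * scale for p, n in zip(vert, nrm))
        for vert, nrm, h in zip(vertices, normals, height_map)
    ]

# A flat strip of three vertices, normals pointing up (+y):
verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (2.0, 0.0, 0.0)]
normals = [(0.0, 1.0, 0.0)] * 3
heights = [0.0, 1.0, 0.0]  # a bump in the middle of the height map

displaced = displace(verts, normals, heights, scale=0.5)
# The middle vertex rises; with a bump map all three would stay put
# and only the shading would pretend there is a bump.
```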

Unlike regular bump maps, which use black and white values to describe height, normal maps use red, green, and blue to describe all three axes, essentially enabling three-dimensional features and concave displacement. Normal maps are good, for example, for creating realistic ocean surfaces or rock features.
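As a small illustration of that red-green-blue encoding (this is the common convention, though some packages flip the green channel), decoding one normal-map pixel back into a vector looks like this:

```python
def decode_normal(r, g, b):
    """Unpack an 8-bit normal-map pixel into a direction vector.
    Each channel stores one axis, remapped from [-1, 1] to [0, 255]."""
    def to_axis(channel):
        return (channel / 255.0) * 2.0 - 1.0
    return (to_axis(r), to_axis(g), to_axis(b))

# The classic "flat" normal-map colour (128, 128, 255) decodes to a
# normal pointing almost exactly straight out of the surface (+z):
flat = decode_normal(128, 128, 255)
```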

They are used extensively in computer games to add detail to low-poly models. Some materials require additional, though less common, properties such as subsurface scattering.

A simple black and white texture (1) is applied to a sphere (2), first as a bump map (3), and then as a displacement map (4).

Clearly, the bump map version, which is just a shading trick-of-the-eye, fails around the edge, which looks totally smooth. On the other hand, displacement generates a much more realistic look since it actually changes the shape of the model. Shaders can also be used to create volumetric effects like smoke or fog, and even large-scale animated surfaces like a field of wind-blown grass. By changing just a few shader parameters like diffuse, specular, refraction, and subsurface scattering, a wide variety of base materials can be created, such as iron (1), gold (2), chrome (3), plastic (4), jade (5), glossy paint (6), wax (7), glass (8), and tinted glass (9).

These basic attributes can then be further enhanced with textures to add detail and break up uniformity. Even a shiny chrome ball has some spots of dust, smudges, or other small imperfections on the surface. Textures are therefore used not only for color detail, but also to control and vary other properties of the material such as shininess or transparency and of course bumps and displacement. Texturing is an intricate art, and great texture artists know how to create surfaces that feel naturally imperfect.

A brick wall, for example, is essentially a tiled, repetitious surface, but in the real world no two bricks are exactly alike: each has its own slight variations in color, texture, and wear. Those subtle inconsistencies go a long way toward making a model feel believable and not synthetic, and texture artists achieve them by layering different images and hand-painting additional detail to create a rich, naturally detailed surface.
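The layering idea can be sketched in a few lines of Python (purely illustrative; the colors and the 30% "grime" blend are made-up values):

```python
import random

def layered_texture(width, height, base_color, grime_color, seed=7):
    """Break up a perfectly uniform surface by layering a random
    amount of 'grime' over a flat base color, texel by texel."""
    rng = random.Random(seed)
    texture = []
    for _ in range(height):
        row = []
        for _ in range(width):
            amount = rng.random() * 0.3  # up to 30% grime per texel
            row.append(tuple(
                round(b * (1 - amount) + g * amount)
                for b, g in zip(base_color, grime_color)))
        texture.append(row)
    return texture

# A 4x4 brick-red surface with subtle darker variation:
tex = layered_texture(4, 4, base_color=(170, 74, 68), grime_color=(40, 35, 30))
# No two texels are forced to match: the synthetic uniformity is broken.
```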

However, UV mapping is a step in the process that needs to be accounted for in terms of schedule and budget. UV mapping a complex model can take several days, as the model needs to be broken down and flattened into many small pieces, and those pieces then laid out and organized.

A full UV layout of a ship model.

Every surface needs to be flattened out and organized in this layout to be accurately textured. Notice the fine detail such as rust marks and discolorations.

Rigging

The movement of a CG character such as a human or animal is complex.

A simple human walk cycle, for example, involves the translation and rotation of numerous joints, from the toes through the hips and spine to the arms and head. Since most organic models are built from a single continuous mesh, an underlying skeleton of bones and joints must be created in order to push and pull the muscles and skin. Each joint in the rig is given restrictions that match the anatomy it mimics. For example, the knee joint on a human can only rotate on one axis, about 70–80 degrees backwards from a straight leg to a folded position.

These restrictions are essential because limbs like legs and arms are rigged in a way that enables the animator to move the whole chain from the end point (for example, move the entire leg from the heel or the arm from the hand), rather than rotate each joint separately. This is called IK, short for Inverse Kinematics. Without the proper restrictions, the joints will bend awkwardly in all directions, like a simple wire puppet.
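The two-bone leg case has a well-known analytic solution, sketched below in Python (a generic textbook version, not the rig of any particular package; the 80-degree clamp mirrors the knee restriction described above):

```python
import math

def two_bone_ik(tx, ty, l1, l2, max_knee_deg=80.0):
    """Analytic 2D inverse kinematics for a hip-knee-heel chain:
    given where the heel should go, solve the hip and knee angles.
    The knee is restricted to bend 0..max_knee_deg, like a real leg."""
    dist = min(math.hypot(tx, ty), l1 + l2)  # can't reach past a straight leg
    # Law of cosines: the knee's interior angle for this reach (180 = straight).
    cos_k = max(-1.0, min(1.0, (l1 * l1 + l2 * l2 - dist * dist) / (2 * l1 * l2)))
    knee_bend = 180.0 - math.degrees(math.acos(cos_k))  # 0 = straight leg
    knee_bend = min(knee_bend, max_knee_deg)            # the rig's restriction
    # Law of cosines again: hip offset from the straight line to the target.
    cos_h = max(-1.0, min(1.0, (l1 * l1 + dist * dist - l2 * l2) / (2 * l1 * dist)))
    hip = math.degrees(math.atan2(ty, tx)) + math.degrees(math.acos(cos_h))
    return hip, knee_bend

# Heel placed a full leg-length below the hip: the leg is straight (knee = 0).
hip, knee = two_bone_ik(0.0, -2.0, 1.0, 1.0)
# Heel pulled closer than the clamp allows: the knee stops at 80 degrees.
_, knee_clamped = two_bone_ik(0.0, -1.2, 1.0, 1.0)
```

This is why the animator can simply drag the heel: the solver, not the animator, works out every joint rotation along the chain.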

When the rig is completed, it is attached to the model in a process called skinning. During this process, the rigger weights each bone to adjust how much it pushes or pulls the vertices around it. Riggers often skin characters in a way that simulates bulging muscles as joints are flexed. Hard-surface technical rigging does not require an underlying bone structure or skinning process, but complex machines can have hundreds of moving parts, and the rigger must set up all the joints and restrictions in a way that allows the animator to control the object efficiently.
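The blend at the heart of skinning is commonly called linear blend skinning: each bone moves the vertex by its own transform, and the results are mixed by the painted weights. A minimal Python sketch (the transforms and weights here are made-up examples):

```python
def skin_vertex(vertex, bone_transforms, weights):
    """Linear blend skinning: each bone transforms the vertex, and the
    results are blended according to the rigger's painted weights."""
    assert abs(sum(weights) - 1.0) < 1e-6, "weights should be normalized"
    blended = [0.0, 0.0, 0.0]
    for transform, w in zip(bone_transforms, weights):
        moved = transform(vertex)
        for i in range(3):
            blended[i] += w * moved[i]
    return tuple(blended)

# A vertex on the knee, influenced half by the thigh bone (which stays
# put) and half by the shin bone (which translates 1 unit along x):
thigh = lambda v: v
shin = lambda v: (v[0] + 1.0, v[1], v[2])
knee_vertex = skin_vertex((0.0, 0.0, 0.0), [thigh, shin], [0.5, 0.5])
# The vertex moves halfway with the shin, so the mesh bends smoothly.
```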

The robots in the Transformers franchise are a great example of highly elaborate hard-surface rigs that allow not only mind-boggling mechanical trickery but also humanoid motion. Like UV mapping, rigging is largely invisible to the filmmaker. It works under the hood, is never rendered, and has no direct effect on the visuals. But it is nonetheless a crucial step in the animation process and directly affects the success of a shot.

Animation

Of all the different crafts of VFX, animation is arguably the one most people are familiar with. We all know what animation is; it has been with us for more than a hundred years in the forms of traditional cel animation, stop motion animation, and, more recently, 3D animation.

Most VFX crafts are newcomers, evolving over the past thirty years or so. But animation, like filmmaking itself, is backed by a vast and venerable tradition of time-tested techniques and artistic conventions.

It is true that animating a CG character in 3D space is different in many ways from animating a hand-drawn 2D character, but the underlying essentials are exactly the same. As I noted in Chapter 1, however, there is a fundamental difference in animation style between a fully animated movie such as Toy Story or Frozen and a live-action film. The marriage of real actors and CG characters necessitates a very strict adherence to realistic motion and the laws of physics, and leaves very little room for stylized movement.

If we look at the bigger picture for a moment, animation in VFX is not only about characters. In fact, anything that changes over time, anything that interpolates between one keyframe and another, is animated. Dynamic simulations and particle systems are examples of higher-level animations—complex, interactive events that happen over time, usually built around action rules and parameters.
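At its simplest, that interpolation between keyframes looks like the Python sketch below (a linear version for illustration; real animation packages interpolate along spline curves with tangent controls, but the idea is the same):

```python
def interpolate(keyframes, t):
    """Linear in-betweening: given (time, value) keyframes set by the
    animator, compute the value at any time t between them."""
    keyframes = sorted(keyframes)
    if t <= keyframes[0][0]:
        return keyframes[0][1]   # hold the first key before it starts
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]  # hold the last key after it ends
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            fraction = (t - t0) / (t1 - t0)
            return v0 + fraction * (v1 - v0)

# A door swings from 0 to 90 degrees between frames 1 and 25; the
# software fills in every frame in between automatically:
keys = [(1, 0.0), (25, 90.0)]
angle_mid = interpolate(keys, 13)  # halfway through the swing
```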

But the highest level of animation is indeed character animation. Breathing life into a human, animal or any creature and doing it successfully within the restrictive limits of live action realism requires highly developed animation skills, and a deep understanding of anatomy, timing, emotion, and physical expression. Successful character animations start with good modeling and rigging—the animator will usually work with the rigger to adjust and improve the rig by testing various motions and scenarios.

Motion Capture

The need for realistic, non-stylized character motion in live-action VFX inevitably led to the development of ways to record the movements of real-life actors and transfer them to CG characters. Modern mocap techniques were only developed in the 1990s, though, and the film Final Fantasy: The Spirits Within (2001) is generally considered a mocap milestone.

However, it is the character of Gollum in The Lord of the Rings: The Two Towers that really pushed mocap forward as a mainstay cinematic technique and made actor Andy Serkis the first mocap star ever.

There are several different techniques for motion capture, but the most common nowadays is the optical method. An array of special cameras is placed around the capture area. The mocap actor wears a tight dark suit (like a diving suit), and multiple bright or self-illuminated markers are attached to it at specific points.

Eight frames from a motion-captured cartwheel animation.

As the actor moves, the camera array calculates the relative position of every single marker in 3D space.

That information is then transferred to a fully rigged CG character.