3D applications are highly effective in composing models and animation, capable of rendering startlingly realistic scenes. It would be great to export the output of your favorite 3D application into Adobe After Effects so you could combine it with other footage and use the tool's many functions to manipulate this data as video. But how should you go about it?

In this two-part series I tackle three aspects of this issue:

  • Exporting UV data from 3D applications for use in After Effects
  • Generating mattes in your favorite 3D application for use in After Effects
  • Taking advantage of the OpenEXR format using After Effects plug-ins from fnord

I tackle the first of these aspects, UV data, in Part 1. I address the other two aspects, mattes and OpenEXR plug-ins, in Part 2.

What is UV space?

UV space is a parametric coordinate system used in computer graphics to describe how data is mapped onto an object's surface. Its main purpose is to provide a means of correlating the geometry of an object with other data generated within the program. This can include not only textures and rendering data, but also parameters of the object itself; for surfaces generated from curves, for instance, this could include tension and continuity. Another common use is the placement of one object on another's surface.

Because it is fully parametric, UV space always ranges from 0 to 1 in both directions, where the start and end are either defined by the parameters of the object itself or based on user definition. The goal is to spread out the UV coordinates as evenly as possible in order to cover areas of differing geometry density with an equal number of pixels in the texture map and avoid distortions. This relationship is best visualized by using a numbered checkerboard image (see Figures 1 and 2) as a reference.
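If you want to build such a reference image yourself, the following is a minimal sketch in Python, assuming the Pillow imaging library is installed; the resolution, tile count, and colors are arbitrary choices, not values taken from the figures.

    from PIL import Image, ImageDraw

    size, tiles = 1024, 8                 # 1024-pixel square, 8 x 8 tiles
    tile = size // tiles
    img = Image.new("RGB", (size, size))
    draw = ImageDraw.Draw(img)

    for row in range(tiles):
        for col in range(tiles):
            # Alternate light and dark squares
            color = (220, 220, 220) if (row + col) % 2 == 0 else (60, 60, 60)
            x0, y0 = col * tile, row * tile
            draw.rectangle([x0, y0, x0 + tile - 1, y0 + tile - 1], fill=color)
            # Number each tile so stretching, flipping, and seams are easy to spot
            draw.text((x0 + 8, y0 + 8), str(row * tiles + col), fill=(255, 0, 0))

    img.save("uv_checker.png")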

This method is commonly referred to as UV mapping in 3D programs, though that terminology is not entirely correct. Even an object without an explicitly defined UV map will be assigned default UV coordinates so the renderer knows where each texture pixel is in relation to the object and the camera view.

For information on how to create UV maps and use them for 3D rendering, consult the manual of your software and visit respective user-to-user forums.

How can I transfer UV data for compositing in After Effects?

Since After Effects does not natively import external 3D objects and thus is only able to use rendered pixels, you need to have a way of transforming vector-based UV information into discrete pixel values. This is done by encoding the values into the red and green channels of an image. Figure 3 illustrates the relationship between the U and V directions and the color channels.
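To illustrate the principle (this is only a conceptual sketch, not the code any particular renderer uses), the following Python snippet, assuming NumPy and the tifffile package are available, fills a frame with a U ramp in the red channel and a V ramp in the green channel at 16 bpc. A real renderer would rasterize the per-pixel UV values from the object's UV map, and the vertical orientation of V depends on the program's convention.

    import numpy as np
    import tifffile  # assumed to be installed; writes 16-bit TIFFs

    width, height = 640, 480
    # Hypothetical per-pixel UV values: a real renderer rasterizes these from the
    # object's UV map; here the frame is simply filled with a ramp in each direction.
    u, v = np.meshgrid(np.linspace(0.0, 1.0, width), np.linspace(0.0, 1.0, height))

    uv_pass = np.zeros((height, width, 3), dtype=np.uint16)
    uv_pass[..., 0] = np.round(u * 65535)  # red   = U
    uv_pass[..., 1] = np.round(v * 65535)  # green = V
    # The blue channel stays 0; some pipelines store coverage or other data there instead.

    tifffile.imwrite("uv_pass.tif", uv_pass)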

Depending on which 3D software you use, the image formats and rendering procedures differ. Basic guidelines for Cinema 4D, Lightwave 3D, and modo are provided in the workflow sections later in this document.

How do I use UV images in After Effects?

After Effects itself is unable to use UV data natively in the way that would be most useful. You can only visualize and extract the channel from multichannel formats like RPF and RLA using the 3D Channel Extract effect, or import the image files. You have no native way of applying that data to distort other layers correctly in 3D space.

So, where does this leave you? Obviously, you need a little outside help to achieve your objective. This help comes in the form of the RE:Map plug-in suite from RE:Vision Effects. The plug-in you need is called RE:Map UV; see Using RE:Map UV later in this document for details on how to apply it.

Outputting a UV pass from Maxon Cinema 4D (ver. 8.5 and later)

Maxon Cinema 4D offers by far the most convenient way of generating multipass output, especially when it comes to generating all of the passes in one go without multiple renders. The UV pass is implemented natively but has a few flaws that need to be taken into account and worked around.

Preparing your objects

Luckily, most object types in Cinema 4D always have a default UV parameterization without the user needing to do anything. This is particularly true for geometric primitives, spline-based objects, and most standard polygonal modeling operations. The only requirement is that the object has a material assigned to it (see Figure 4). This material does not even need to have any textures; it just needs to be applied to the object.

Adding a render pass

Outputting the UV data is done by adding the respective pass to the Multi-Pass output on the Render Settings. To do that, select it from the Channels popup and it will be added to the list (see Figure 5). There are no specific options to adjust.

Now you only need to select a suitable bit depth and image format with at least 16 bits per channel (bpc) (see Figure 6). This can be PSD, TIF, or OpenEXR. After that you are basically set to render.

The transparency problem

One issue that you may encounter is Cinema 4D's handling of transparencies. Unfortunately, it always applies transparencies to the extended passes as well, which is not technically incorrect but produces unusable results nonetheless. As shown in Figure 7, the film material uses transparency, and when you switch the Picture Manager to show the Material UVW pass (see Figure 8), you can immediately see that the transparency carries over.

It may seem unimportant at first, but look at Figure 9. When you apply a texture using RE:Map UV in After Effects, the plug-in is unable to discern areas behind the filmstrip correctly, which will result in the filmstrip looking like it is intersecting itself.

This is even more critical, as other passes such as the object buffers are affected by this incorrect behavior as well, and you thus cannot rely on any trickery to correct it. The simplest way to circumvent this shortcoming is to turn off all transparency rendering in the render settings (see Figure 10) and then render out the pass again. You just need to take care not to overwrite your existing correct shading passes, so turn them off or remove them from the multipass list.

After rendering, you end up with a more suitable UV pass. The downside to this is that you now completely lose any information on the overlapping areas (see Figure 11). There are ways of fixing this as well using more convoluted techniques, but that's a topic for another time.

Outputting a UV pass from NewTek Lightwave 3D (ver. 9 and later)

Since NewTek Lightwave 3D does not support outputting a UV pass natively, getting it to work the way you want can be quite complicated. Generating such a pass can only be done at the shader level by creating custom materials and rendering an RGB pass, and the process is further complicated by the lack of acceptable support for higher-bit-depth image formats and multipass rendering.

Basic UV mapping procedures

All elementary UV functionality has been available in the classic material editor since version 6. For illustration purposes, you will map the template grid to the Luminosity channel. In the Surface Editor, click the T button next to the Luminosity property (see Figure 12). This opens the Texture Editor.

Whenever you add a new texture to a previously untextured channel, its type will be set to a fully opaque, single-color gradient as a safety precaution. This will completely overwrite all other information on the channel. In order to apply an image texture, you need to switch the layer to the Image Map type (see Figure 13).

As a next step, you need to set the texture projection to UV (see Figure 14) as opposed to the default Planar. After that, you only need to select the correct UV set (see Figure 15). Figure 16 shows a test rendering pass.

Building the UV colors

To generate a usable pass and retain the UV vertex map data, you will make use of the node-based shading system that is built into Lightwave alongside the classical material system. To access it, go to the Surface Editor and click the Edit Nodes button (see Figure 17). The check box next to it enables actual render output for the nodes and needs to be selected as well.

The setup itself is very simple. The material is made "flat" by dialing down all other shading effects to 0% and setting Luminosity to 100%. The shading network uses two Color Layer nodes and a Mixer node, and connects the Mixer node to the material's Color input to represent the UV data (see Figure 18). Since you are working with separate Green and Red inputs, the Mixer node needs to be set to Additive (see Figure 19) to give the correct result.

Building the UV information is very straightforward. All you are doing is using some gradients that will be tinted accordingly. Using the Color Layer nodes may seem odd at first, as they are mostly meant for legacy compatibility with older procedural texture types, but I have found this to be the easiest and least confusing way.

Due to the way the whole setup needs to be evaluated, you will always have to use an image texture somewhere or define the UV space by other means; just throwing together two gradient nodes won't work. In the nodes view, double-click the Color Layer node to open it. It will look like the normal Texture Editor. The bottommost layer is a grayscale gradient image, which you can generate in your favorite graphics program, as shown in the sketch below. Just make sure you generate it at least at 16 bpc and save it in a suitable format. Apply it the same way as the original UV texture by setting the texture projection and UV map accordingly (see Figure 20).
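If you prefer to generate the ramp programmatically rather than in a paint program, here is a minimal sketch assuming Python with NumPy and the tifffile package; the file names and the 1024-pixel resolution are arbitrary.

    import numpy as np
    import tifffile  # assumed to be installed; writes 16-bit TIFFs

    size = 1024
    # A horizontal black-to-white ramp covering the full 16-bit range
    ramp = np.tile(np.linspace(0, 65535, size, dtype=np.uint16), (size, 1))
    tifffile.imwrite("ramp_u_16bit.tif", ramp)            # ramp for the U direction
    tifffile.imwrite("ramp_v_16bit.tif", ramp.T.copy())   # transposed copy for the V direction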

On top of the gradient image, you'll employ a remapping gradient set to Previous Layer (see Figure 21). This is similar to using the Tint effect in After Effects.

To get a correct result, it is imperative for the gradient to be interpolated in a linear fashion (see Figure 22). By default, the Smoothing will be set to Spline, which is visually more pleasing because it prevents the midtones from being too weak—but it will skew the output and, in this case, that is not desirable.

During all your work, you will notice that the little material preview spheres will stay black. This is a limitation of using the Color Layer node and might be fixed at some point. For the time being, you always need to do a test render (see Figure 23) to check the result.

Rendering

Since you are rendering to a beauty/color pass instead of a dedicated extra pass, you need to take a few additional steps to get the expected output. First and most importantly, you need to turn off any and all forms of dithering. There are two levels of dithering in Lightwave: one applied globally to the overall shading when saving to low-bit-depth formats, and a second one if, in addition, you employ the classical form of antialiasing and motion blur calculations. Both need to be turned off to get proper results. While you might think the global control would be more logically placed on the Render Globals panel, it is hidden on the Processing tab of the Compositing Window (see Figure 24). Simply switch it from its default, Normal, to Off.

The second level can be accessed in two places. Historically, it is found on the Camera panel (see Figure 25), but there are also redundant controls on the Render Globals panel (see Figure 26), added in response to users' wishes for simplified control. Basically, when using the classical motion blur or an antialiasing method such as PLD, you simply need to make sure that Antialiasing and Motion Blur are set to None and Off, respectively.

Choosing a suitable image format

As I pointed out earlier, getting decent results out of Lightwave is a complicated matter, and the small selection of suitable image formats is a strong contributing factor. In theory, the program saves a ton of formats; too bad it saves most of them only at 8 bpc. Hence, it is necessary to elaborate on suitable ones separately.

You can save rendered images from two places: directly from the Render Globals' Output tab (see Figure 27) or from the Compositing Window by means of dedicated buffer savers. For the first method, you are pretty much limited to using SGI 64 and HDR images if you need 16 bpc or higher output. There are TIF and PSD savers there as well, but don't use them! They either suffer from the limitation I mentioned or have bugs that will produce files that cannot be read in programs other than Lightwave itself.

Instead, use the Compositing Window. Buffer savers are added as image filters in the respective slot. In addition to the normal Buffer Saver, the Photoshop PSD Export is located there. Enabling the 16-bit component output option (see Figure 28) will give correct 16 bpc PSD files that can be used for our purposes.

In addition to various free attempts at better buffer savers, such as ASA Buffer Saver, there is a commercial product that you might want to consider: EXRTrader (see Figure 29). It gives you a decent multichannel OpenEXR saver with full control over the various compressions and per-channel bit depths. In addition, when you download and install the demo version, standard OpenEXR image loaders and savers are installed that can be used free of charge, adding to your output capabilities (see Figure 30).

Outputting a UV pass from Luxology modo 301

Luxology modo 301 is able to output UV data natively. However, a few additional steps are required to make it show up on render.

Preparing your objects

In order to generate UV data in the render, your objects need to have a proper set of UVs. Luxology modo will generate these automatically for geometric primitives such as a cube or cylinder. In addition, it will take care to add any required geometry to the UV set during successive modeling operations such as bevels and extrusions.

For this to function properly, it is imperative that the UV map you intend to use is active in the Vertex Map List panel. By default, it will simply be called Texture (see Figure 31). If the UV set is not selected, operations such as bevels and extrusions will not be represented in it and, thus, parts of your object will have no UVs attached. To fix this, or if you choose not to create a map from the start, you can always do so at a later point by using tools such as the Create UV Tool or the Unwrap Tool. For ways to tweak and optimize UV maps and which tools to use, consult the manual or online help.

Assigning materials and textures

Each object that is supposed to show up in the UV pass must have a proper material and texture assigned to it. The texture is especially important. Valid textures need to be based on image maps for the evaluation to be successful. To assign them, select the material of the object in the Shadertree. Using the Add Layer popup menu (see Figure 32), you can add an Image Map (see Figure 33). This will be inserted on top of the object's initial material.

When the new layer is in place, select it to set the texture mapping method. If the object has a UV set, modo will always try to use it automatically as the default mapping type. If for some reason you need to use a different UV set, or you created the set later, after assigning and tweaking the material, you can set it manually. Simply set the Projection Type to UV Map (see Figure 34) and, in the next popup menu (see Figure 35), select the map with the correct name.

Adding a render output

In its default state, modo will only render the RGB data and an alpha channel. All other outputs must be selected explicitly and added individually. In the Shadertree, again select the Add Layer popup menu (see Figure 36) and add a Render Output. It will default to Final Color Output.

To switch it to UV data, select it and right-click it. From the menu, select UV Coordinates (see Figure 37).

Two settings that are very important for getting correct results are the color clamping and dithering options. When you add a new render output, these are always active by default. Just like the Gamma settings, this is to make sure that the representation on screen matches the rendered file as closely as possible. For many extended outputs, however, this is not desirable because it clips the value ranges and produces incorrect results later in compositing. Therefore, you need to turn it off for the UV pass (see Figure 38).

Luxology modo allows you to render to multilayered files or separate image sequences per pass. The latter method has the advantage that you can selectively re-render everything in case you need to make changes that only affect one of the passes—and if that pass happens to be only the UV pass in an otherwise complete scenario, you can save massive amounts of rendering time by bypassing ambient occlusion, reflections, and other render-intensive effects.

If you want to use this function, you have to specify an output path for each pass (see Figure 39). Select the output in the Shadertree and, in its properties under the Output Filename, select Browse to specify a filename and directory. In the Save Image File dialog box that comes up, make sure to select OpenEXR, PSD, or 16-bit TIF as the image format (see Figure 40) to retain all the information that you need.

Rendering

When rendering, it may be beneficial to turn off antialiasing—in particular when you plan to use RGB mattes to isolate parts. In modo this is done by setting the Antialiasing to 1 Sample/Pixel (see Figure 41).

To initiate the actual rendering process, you choose Render > Render Animation in the menu. You will be presented with a series of dialog boxes to define the parameters for render range and outputs. First, you need to define a start and end frame (see Figure 42).

The next dialog box asks whether you want to use the dedicated per-pass outputs you specified earlier (see Figure 43). If you choose Yes, they are used and rendering starts; if you choose No, the next panel appears.

During and after rendering, you can monitor progress and check channels in the Render View (see Figure 46).

Using RE:Map UV

As mentioned earlier, After Effects itself is unable to natively use UV data in the way that would be most useful. You can only visualize and extract the channel from multichannel formats like RPF and RLA using the 3D Channel Extract effect, or import the image files. You have no native way of applying that data to distort other layers correctly in 3D space.

So, where does this leave you? Obviously, you need a little outside help to achieve your objective. This help is provided in the form of the RE:Map plug-in suite from RE:Vision Effects. The plug-in that you need is called RE:Map UV.

Things you should and shouldn't do

In order to work with the plug-in, you should consider several things. The first and most important rule is to work at least in 16 bpc all the way. Your project must be set to 16 bpc or 32 bpc to get correct results. Why?

If you compare Figures 47 and 48, you will notice that at 8 bpc your texture will look blocky or as if cut into small stripes. This is called quantization and happens because each color channel contains only 256 discrete values that need to be spread out across an area that is much bigger. Adjacent regions end up covered with the same pixel values, a behavior that can also be observed as banding when, for example, creating gradients at 8 bpc. When you switch to 16 bpc, there are 65,536 possible values, making the output result much smoother.
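You can verify the numbers with a quick sketch, assuming Python and NumPy; the 2048-pixel width is an arbitrary example.

    import numpy as np

    width = 2048
    ramp = np.linspace(0.0, 1.0, width)

    ramp_8  = np.round(ramp * 255).astype(np.uint8)      #    256 possible levels
    ramp_16 = np.round(ramp * 65535).astype(np.uint16)   # 65,536 possible levels

    print(len(np.unique(ramp_8)))    # 256  -> each level repeats over roughly 8 adjacent pixels
    print(len(np.unique(ramp_16)))   # 2048 -> every pixel gets its own value, no visible steps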

Obviously, just switching to 16 bpc won't do any good if your footage is only 8 bpc. Therefore you have to take care to select an image format in your 3D program that can actually hold all pixel information at 16 bpc or full float precision. These days pretty much all 3D programs are able to output OpenEXR files, so I recommend using this format. It can contain all the necessary information and color fidelity and has small file sizes. If that is not an option, choose an alternate file format. The 16 bpc bit depth is supported in PSD, TIFF, SGI, and IFF; 32 bpc is supported in PSD, TIFF, IFF, and HDR (Radiance).

Another thing worth noting is how extended passes relate to any color adjustments you make. Usually, you will want to work with the footage in a project that does not affect its color interpretation. This means you may need to match the 3D program's Gamma settings and turn off all color management functions. Otherwise, After Effects may clamp or skew the red and green channel, resulting in an incorrect presentation of the mapping topology.

The second most important rule while working with the plug-in is to never use transparencies. In Figures 49 and 50, a filmstrip was rendered in Cinema 4D. To simulate the transparency of film stock, the strip was given a transparent material. Because the transparency is also applied to the UV pass, without compensating for the overlap in some areas, the result is useless for compositing, as you can see in the encircled area. An extra render pass without transparencies was required to get a more acceptable result.

In addition to transparent materials, this warning also extends to any other operation that introduces transparent areas into your image. This includes motion blur and depth-of-field effects as well as any post-processing inside your 3D program. As a last potential pitfall, you should take a look at how your program handles antialiasing and dithering. Some programs use multiple passes or random scattering to determine geometry antialiasing (around your object contours), which will also influence the result of the UV pass and introduce transparencies due to the way the values are averaged. It can also lead to noticeable flickering, as each frame may produce a different result. In such cases you should turn off the respective settings in the 3D program.

Where does my UV pass go?

In general, there is no ultimate rule set for what is right and what is wrong in multipass compositing. Basically, it will always come down to layering all separate effects together in such a fashion that they match the rendered RGB output from the 3D program—but with the desired tweaks and modifications added to them, of course. This requires you to use blending modes and, therefore, a certain stacking order makes sense; but there is no textbook rule on how they all go together. Different programs will generate different output, requiring different handling procedures. Take another look at the filmstrip example from Cinema 4D and I'll show you how it all goes together.

In order to keep things simple, only use the passes that are really necessary. This means that you rely on your initial RGB and alpha image generated correctly by your rendering application and assume it represents the lighting of the scene as you set it up. In a more elaborate environment, you would even split up this pass into its separate shading components for diffuse, ambient, and specular shading to gain more control.

As you can see in Figure 51, your RGB image does not contain any film frames on the strip, so you have to insert a texture. Here our UV pass comes into play. By combining the original UV output with a matte to isolate only the relevant parts, you can add another image or subcomposition on top of the existing pixels with correct perspective distortion according to the rest of the scene.

One thing you need to consider is that film stock has certain tranparent and reflective properties that make viewers recognize it as such. You can easily handle the transparencies by playing with opacity and alpha channel adjustments, but what about reflections? Luckily for you, you can save those as a separate pass and superimpose them onto the rest of the imagery. For this you can use the Screen blending mode to retain distinct bright areas.

Now that you have an idea where everything goes and how you can achieve the look you want, it's time to take a look at the plug-in itself and its controls. Start by applying a new texture to a simple cube on a plane rendered in Luxology modo.

Standard setup using the plug-in

Whenever you want to work with RE:Map UV, you need at least two layers in your composition: the UV pass and the layer acting as the replacement texture, marked with red outlines in Figure 52. Since the result would be harder to visualize without it, I have added a third layer based on the ambient occlusion pass.

The effect is added to the UV pass image directly without any additional steps required (see Figure 53).

Once you have applied the plug-in, all you need to do is to select the texture layer in the reference combo box (see Figure 54) and you are good to go.

Now, let's recap those basic steps:

  1. Set your project to 16 bpc or 32 bpc.
  2. Import the images or image sequences.
  3. Add the footage to the composition.
  4. Select the UV pass.
  5. Choose Effect > RE:Vision Plugins > RE:Map UV from the menu.
  6. Select the texture in the plug-in interface.

That was easy. With that knowledge in mind, it's time to move on to more complex scenarios.

Plug-in parameters

If you have ever worked with the Motion Tile effect, many of the plug-in's controls will look familiar, but of course they have been extended to cater for the specific requirements associated with manipulating UV-based imagery. They are also arranged in a different order, so take a quick look at the layout so you can find all the knobs and levers.

The top section provides a combo box for selecting the texture. In addition, you have another flyout where you can switch to different modes. More on that later. For the time being, the default Render mode is exactly what you need.

Next are the controls that allow you to crop and tile your footage. The Crop...TexMap controls are similar to Motion Tile's Output Width and Output Height, with the main difference that the crop uses the corner points for reference and is based on absolute pixels rather than percentages. The Tile Repeat Pattern X and Y controls take care of actually repeating your texture. Since placement within the UV set depends solely on its parameterization, there are no extra controls for Tile Height and Tile Width as there are in Motion Tile: repeating a pattern four times within the confines of the predefined range automatically scales it to 25% of its original size, as the sketch below illustrates. As a comfort feature, the plug-in has controls to switch the repeats to an alternating pattern. This can be done separately for both directions using the Tile Repeat Pattern options (see Figure 55). While it may not make much sense for this filmstrip example, it has its uses when you are working with more abstract, pattern-like footage.
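The following is a hedged sketch of the tiling idea in Python, not RE:Map UV's actual implementation: when a texture is repeated N times inside the fixed 0-1 UV range, the lookup coordinate is scaled and wrapped, which is why four repeats shrink the pattern, and why an alternating pattern simply flips every other cell.

    def tile_uv(u, repeats, mirror=False):
        scaled = u * repeats
        cell = int(scaled)            # index of the repetition this sample falls into
        frac = scaled - cell          # position inside that repetition (0..1)
        if mirror and cell % 2 == 1:  # an alternating pattern flips every other cell
            frac = 1.0 - frac
        return frac

    print(tile_uv(0.4, 4))               # 0.6: 40% along the surface lands 60% into the second tile
    print(tile_uv(0.4, 4, mirror=True))  # 0.4: in the alternating pattern the second tile is reversed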

Because UVs are tied to the geometry, it can happen that the texture placement looks correct from one side, but as the camera swings around your object, the image is reversed. Technically, this is correct, as the texture is pinned to the object; but it may not be desirable if the texture contains text that must be readable or a logo artwork that would be inappropriate to present in reverse. To remedy this, you can use the Flip options.

To position your texture or make it move across the imaginary surface presented in the UV pass, you can use the Position Offset controls. These mimic the Tile Center found in the Motion Tile effect.

Understanding mipmapping

A problem common to all texturing in 3D space is the quality of the textures or, more to the point, how they are treated by the renderer and thereafter perceived by the viewer. This is similar to how, for example, thin lines or text are perceived on interlaced footage. While they are 100% correct on an abstract technical level, their sharpness is often found disturbing and of poor quality. In such cases, small amounts of blur are often applied to make the whole matter more pleasing to the human eye, even though on a technical level this represents a mistreatment.

In the 3D world, for the most part it is simply a matter of two different principles colliding. On one side you have the infinite precision of the geometry, defined by the coordinates of its points, polygons, edges, and curves; on the other side you have the image texture with its "baked" pixels, which is therefore limited in resolution.

You can verify this on your own by using 3D layers in After Effects. When you zoom in on a 3D layer that is an image or a structure effect like Fractal Noise, you will see less detail and more softening because one pixel on the layer now occupies more than one pixel on the screen. The other way around, a large texture will look too sharp and blocky if you zoom out and the neighboring pixels are not combined into new color values. To minimize those visual artifacts, filtering is employed.

There are many different methods, ranging from simple linear filtering to complex multilevel anisotropic filtering, each with different results. Some of them work better when zooming in because they retain sharpness; others work better when zooming out, acting more like a blur. Most 3D programs use a compromise between the two, a method known as mipmapping. This is an adaptive method that factors in the proximity of a texture pixel to the camera and adjusts the filtering based on that.
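As a rough illustration of the principle, assuming Python with NumPy and without claiming this is what any particular renderer or the plug-in does internally: a mipmap chain is simply the texture pre-averaged into successively half-sized levels, and the renderer then picks (or blends between) levels based on how large one texture pixel appears on screen.

    import numpy as np

    def build_mip_chain(texture):
        """texture: float array of shape (H, W, C), with H and W powers of two."""
        levels = [texture]
        while levels[-1].shape[0] > 1 and levels[-1].shape[1] > 1:
            t = levels[-1]
            # Average each 2x2 block into one pixel of the next, half-sized level
            t = (t[0::2, 0::2] + t[1::2, 0::2] + t[0::2, 1::2] + t[1::2, 1::2]) / 4.0
            levels.append(t)
        return levels

    tex = np.random.rand(256, 256, 3)               # placeholder texture
    print([lvl.shape[:2] for lvl in build_mip_chain(tex)])
    # (256, 256), (128, 128), (64, 64), ... down to (1, 1)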

In RE:Map UV, this method is implemented as well. When you look at the comparative image montage in Figure 56, you can clearly see the effect of mipmapping. The left cube does not use any filtering. As a result, the texture values of the underlying UV pass are not smoothed. Not only does the image look über-sharp and aliased, but there are noticeable seams where non-contiguous UV values meet and there is stairstepping along straight lines in the texture. With filtering on, as evidenced in the image to the right, these disturbances are less noticeable because neighboring pixels are smoothed out.

Turning on the filtering is done with a simple check box. Then the second, more important control (the Mip Amount %) becomes active (see Figure 57). After that, it is up to you to find the right values. But why should you even have to worry about it? Of course there is a reason why the control is in place.

First and foremost, it is required because the plug-in has no way of knowing how far from the camera your pixels actually were when you rendered the image, and how they relate to the overall perception of the scene. Second, it could well be that the footage you intend to use has a different physical resolution than the one you used as a placeholder for rendering. Again, different sizes mean different amounts of filtering, and you need a control to compensate to bring everything in line with the rest of the scene.

Mipmapping is based on a cubic formula; therefore, the actual amount results in a curved falloff rather than a linear one. In practice, that means even small values will have a visible impact. A good comparison is a Box Blur with three or four iterations, which nicely mimics this behavior: even small values will noticeably blur your image, and the further you go up, the stronger the effect becomes, but not in a linear fashion. Therefore, you need to be careful when adjusting the values.

Analyzing your footage using the modes

By default, the plug-in is set to Render mode, which is what you need most of the time (see Figure 58). However, for purposes of debugging and finding potential problem areas, there are three other modes.

Show Texture Map will reveal the image used for the texture—without any of the extended processing for tiling, offset, and mipmapping (see Figure 59). This can be a quick way of checking whether everything is right in a complex composition, where going back to the texture layer might involve long scrolling and panning around in the timeline. It can also help you detect any inconsistencies in the rendering of the base layer that might require precomposing it.

The next two modes are where it gets really interesting. Using Show UV with Edges (see Figure 60), you get to see the original UV pass in red and green tones, but with additional information presented in blue. These are the edges that the plug-in determines based on the threshold settings in the lower part of the interface. An edge in a UV pass is a break in continuity. This can either be the area where the UV data is undefined (empty) because there is no geometry, or areas where UVs in the back are covered up by others in the front. This can happen, for example, on curvy surfaces, waving flags, or similar geometry. In such cases, the plug-in needs to know where the parts are behind each other.
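As a hedged illustration of what such a discontinuity test might look like (the plug-in's actual edge detection is not documented here), the sketch below, assuming Python with NumPy, flags pixels where U or V jumps by more than a threshold relative to a neighboring pixel; the threshold value and array layout are arbitrary assumptions.

    import numpy as np

    def uv_edges(uv, threshold=0.05):
        """uv: float array of shape (H, W, 2) holding U and V in the 0..1 range."""
        edges = np.zeros(uv.shape[:2], dtype=bool)
        # A jump larger than the threshold in either channel, horizontally or vertically
        edges[:, 1:] |= np.abs(np.diff(uv, axis=1)).max(axis=-1) > threshold
        edges[1:, :] |= np.abs(np.diff(uv, axis=0)).max(axis=-1) > threshold
        return edges

    # Example: a smooth UV ramp with one artificial break in the U channel
    uv = np.dstack(np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64)))
    uv[:, 32:, 0] -= 0.5
    print(uv_edges(uv).sum())   # 64: only the column where U jumps is flagged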

The last mode, Check UVs (see Figure 61), will also mostly show you edges, but the ones that will not be used by the plug-in. As I explained earlier, the UV pass itself should not be antialiased or otherwise softened in any way. Since the user cannot always influence that, the pass may get blurry in some areas, and the plug-in will simply discard those areas, again based on the edge threshold you define. This may not always be a bad thing, though, because many times you will have textures that won't cover the whole surface anyway.

Where to go from here

Now that you've seen how to prepare UV data for use in After Effects, check out Part 2 where I show you how to work with mattes and channels in order to isolate different UV areas or gain extended control over shading by separating visible items in your image based on their pixel area.

Requirements

Prerequisite knowledge

To benefit from this article, you should have a basic understanding of where UV data is used in your 3D program and how to generate and manipulate it if necessary. You should have a firm knowledge of After Effects and understand the principles of pre-composing and ordering effects to achieve your desired result.

User level

Intermediate