Fighting Sequence Previs: Week 04

This week I discussed the current storyboard with Jay and Layne. We are a little behind schedule on it, but we have clarified the basic issues: the main ones are the rhythm of the fight and the continuity between shots.

This is the first version of our storyboard. As you can see, the content of many shots is not yet clear enough, and there is still some disagreement about the ending and about a few shot changes.

Environment:

I talked with Evanna about the production details and style of the buildings. We still want to reference the style of Overwatch and make buildings with clear silhouettes and simple structures.

This is a building I made.

To put it more plainly, the approach is to add geometric recesses or protrusions to the building and attach some pipes. Note also that all geometric turns have rounded corners.

This is the new scene made by Evanna:

Houdini: Week 05

Node:

Bend: This node lets you define a capture region around (or partially intersecting) a model and apply deformations to the geometry inside the capture region.
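As a concrete example, here is a minimal sketch of creating a Bend SOP through Houdini's Python API (`hou`). It only runs inside Houdini, and the parameter name "bend" is an assumption to verify against the node's parameter pane in your version:

```python
import hou

# Build a small test network: a tube fed into a Bend SOP.
obj = hou.node("/obj")
geo = obj.createNode("geo", "bend_demo")
tube = geo.createNode("tube")      # something long enough to bend
bend = geo.createNode("bend")
bend.setInput(0, tube)

# Bend the geometry inside the capture region by 90 degrees;
# geometry outside the region is left untouched.
bend.parm("bend").set(90)          # assumed parameter name -- check your build
bend.setDisplayFlag(True)
bend.setRenderFlag(True)
geo.layoutChildren()
```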

Volumetric light:

When an occluding object is lit by a light source, the light scattered in the air around it becomes visible as shafts of light; this is called volumetric light. Compared with the lighting in older games, this effect gives the image a much stronger sense of space and can provide more realistic, distinctive visuals.

Volumetric light environment material:

Render:

Smoke & Flame & Explosion

Node:

Volume Rasterize Attributes: The Volume Rasterize Attributes SOP takes a cloud of points as input and creates VDBs for its float or vector attributes.

In Houdini, naming is very important. Fire and explosion effects rely on several standard field names, such as density, temperature, and fuel, each of which controls a different variable.
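As a concrete illustration of the naming, here is a hedged sketch that scatters points, gives them a density attribute, and rasterizes it into a VDB with a Volume Rasterize Attributes SOP, using Houdini's Python API. The node type name "volumerasterizeattributes" and its "attributes" parameter name are assumptions to confirm in your Houdini build:

```python
import hou

geo = hou.node("/obj").createNode("geo", "rasterize_demo")
sphere = geo.createNode("sphere")
scatter = geo.createNode("scatter")
scatter.setInput(0, sphere)

# Give every point a float "density" attribute via a wrangle snippet.
wrangle = geo.createNode("attribwrangle")
wrangle.setInput(0, scatter)
wrangle.parm("snippet").set("@density = 1.0;")

# Rasterize the named point attribute into a float VDB called "density".
rasterize = geo.createNode("volumerasterizeattributes")  # assumed type name
rasterize.setInput(0, wrangle)
rasterize.parm("attributes").set("density")              # assumed parm name

rasterize.setDisplayFlag(True)
geo.layoutChildren()
```

Because the resulting VDB is named density, downstream smoke and pyro nodes can pick it up as the smoke density field.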

Distance field: Creates a signed distance field (SDF). An SDF stores, in each voxel, the distance to the surface (negative if the voxel is inside). Once the distance field is computed, the closest distance between any point in the field and the reference objects can be looked up.

Density field: Creates a density (fog) field. Voxels in the band around the surface store 1 and voxels outside store 0. The width of the filled band is controlled with Interior Band Voxels. Turn on Fill Interior to create a solid VDB from an airtight surface instead of a narrow band.
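To make the two conventions concrete, here is a tiny standalone sketch (plain Python, no Houdini needed) of an analytic sphere treated first as an SDF and then as a density/fog field:

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 0.0), radius=1.0):
    """Signed distance: negative inside the surface, positive outside."""
    return math.dist(p, center) - radius

def sphere_fog(p, center=(0.0, 0.0, 0.0), radius=1.0):
    """Density/fog convention: 1 inside the surface, 0 outside."""
    return 1.0 if sphere_sdf(p, center, radius) <= 0.0 else 0.0

print(sphere_sdf((0.0, 0.0, 0.0)))   # -1.0: one unit inside the surface
print(sphere_sdf((2.0, 0.0, 0.0)))   #  1.0: one unit outside the surface
print(sphere_fog((0.5, 0.0, 0.0)))   #  1.0: inside, so full density
```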

Smokesolver

The smoke solver provides the basics of smoke simulation. If you just want to generate smoke, the smoke solver is useful since it is simpler and expert users can build their own extensions on it. However, the Pyro Solver is more flexible.

smokesolver_sparse

The Smoke Solver is able to perform the basic steps required for a smoke simulation. Pyro Solver (Sparse) extends the functionality of this solver by adding flame simulation along with extra shaping controls.

Pyrosolver_sparse

This node is an extension of the Smoke Solver (Sparse). It considers an extra simulation field (flame, which captures the presence of flames) and adds some extra shaping parameters to allow for more control over the emergent look.
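Here is a hedged sketch of wiring the minimal sparse network described above with Houdini's Python API. The solver type name matches the node name above, but "smokeobject_sparse" is my assumption for the matching object node, and in practice the Pyro shelf tools build much more than this for you:

```python
import hou

dopnet = hou.node("/obj").createNode("dopnet", "pyro_sim")

# The object holds the simulation fields (density, temperature, flame...).
smoke_obj = dopnet.createNode("smokeobject_sparse")   # assumed type name

# The sparse pyro solver advances those fields every timestep.
solver = dopnet.createNode("pyrosolver_sparse")
solver.setInput(0, smoke_obj)
solver.setDisplayFlag(True)
dopnet.layoutChildren()
```

A real setup would also need a source (for example, the rasterized density VDB from earlier) before anything visible appears.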

Density: the smoke density.

Temperature: drives how fast the smoke rises and diffuses; the hotter, the faster.

The node network for the flame:
The node network for the explosion:

These are my final effects.

Houdini: Week 04

Linear workflow

A linear workflow means that all the data involved in the software's calculations is linear. Almost all the image-processing software we use computes linearly: 1+1=2, 2*2=4, 128+128=256. For these results to be correct, all the materials, conditions, and lights fed into the calculation must be linear data.
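A small standalone demonstration of that premise: averaging two pixel values in sRGB display space gives a different (and physically wrong) answer than converting to linear light, averaging, and converting back. The transfer functions below are the standard sRGB piecewise formulas (IEC 61966-2-1):

```python
def srgb_to_linear(c):
    # sRGB decoding for a channel value c in [0, 1]
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def linear_to_srgb(c):
    # sRGB encoding for a linear channel value c in [0, 1]
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1 / 2.4) - 0.055

black, white = 0.0, 1.0
naive = (black + white) / 2                    # averaged in display space
correct = linear_to_srgb((srgb_to_linear(black) + srgb_to_linear(white)) / 2)
print(naive)                 # 0.5
print(round(correct, 3))     # ~0.735 -- the true linear average is brighter
```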

sRGB (standard Red Green Blue) is a color standard developed jointly by Microsoft and HP, together with imaging companies such as Epson. It provides a standard way to define color, giving computer peripherals such as displays, printers, and scanners a common color language with application software.

Gamma correction

Gamma correction edits the gamma curve of an image to apply non-linear tonal editing: it separates the dark and light parts of the image signal and increases the ratio between them, improving image contrast. In computer graphics, the curve relating a screen's output voltage to the resulting brightness is called the gamma curve.

For a traditional CRT (Cathode Ray Tube) screen, this curve is typically a power function, Y = (X + e)^γ, where Y is the brightness, X is the output voltage, e is a compensation coefficient, and the exponent γ is the gamma value. Changing γ changes the gamma curve of the CRT. A typical correction gamma is 0.45, which makes the brightness of the CRT image appear linear. For display devices such as CRT television sets, the luminous grayscale response to the input signal is a power function rather than a linear one, so the signal must be corrected.

ACES

The Academy Color Encoding System is an open color management and interchange system developed by the Academy of Motion Picture Arts and Sciences (AMPAS) and industry partners.

Color depth

In the field of computer graphics, color depth is the number of bits used to store the color of one pixel in a bitmap or video frame buffer, also expressed as bits per pixel (bpp). The higher the color depth, the more colors are available.

Color depth is described as "n-bit color": with a depth of n bits, each pixel is stored in n bits and there are 2^n possible colors.
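As a quick check of the formula:

```python
# n bits per pixel -> 2**n available colors
for bits in (1, 8, 16, 24):
    print(f"{bits}-bit color: {2 ** bits:,} colors")
# 1-bit color: 2 colors
# 8-bit color: 256 colors
# 16-bit color: 65,536 colors
# 24-bit color: 16,777,216 colors
```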

Light type

Mantra&Arnold

From my experience in this lesson, Arnold renders simple scenes much faster than Mantra. The working experience, however, was not as good, especially Arnold's handling of textures: a problem where textures failed to display kept recurring during my work, forcing repeated re-renders. I also spent a lot of time on a UV problem, since Houdini did not create planar UVs for me automatically. Lesson remembered: UVs matter for texturing.

Some people online have compared the two renderers on caustics, and Mantra is significantly better than Arnold in that respect. In general, the two renderers each have their own strengths and weaknesses, but I lean towards Arnold and hope HtoA keeps being updated.

Mantra Arnold

Here are some of my renderings:

Arnold

Mantra

Motion blur:

Acting Feedback

This is the clip I chose:

Notice that there is no heavily exaggerated acting in the clip, which is a big difference between live-action film and animation. In a live-action movie, the actor and the director create the role together, and there are many uncontrollable factors, such as micro-expressions or small incidental actions. In animation, every element on screen is controlled by the creators, so as animators we have to consider the information conveyed by everything in the frame.

Exaggerated poses and ranges of motion are a common technique in animation. From this perspective, I divided the entire performance into three parts.

  • Passionate speech

We are going to win this war, because we have the best man!

In this line, the most important thing is the change in the character's mood, from impassioned at the beginning to dazed. I added a wave of the hand and a raised head, and to better guide the audience's eye, the character always performs with only one hand. I chose the word "best" as the emotional turning point: although the voice changes on the word "man", the general needs to spot the thin soldier slightly in advance, because the emotional change takes some time to transition.

  • Daze

After the general's mood changes, there is a period of daze, and in this part I decided to deepen his disappointment. No dialogue means I can put more of the performance into the character's body and expression. So I had the thin soldier give the general an embarrassed smile; the soldier tries to relieve the awkwardness, but it only makes the general's disappointment worse. Here the general performs a very recognizable gesture of disappointment, wiping his face, with exaggerated body language supporting the exaggerated expression.

  • Cheer up

And because they are going to get better, much better.

The general cheers himself up again after the disappointment, and between the two there is a beat where he gathers his breath. The general, who had been bent over, straightens up again, but no longer looks up; he lowers his head as he speaks and waves his hands, as if encouraging himself. At the end, I want the general to look at the thin soldier, or at the camera, to make the line "much better" feel more pointed and more firm.

This is my current Acting record:

Lighting: Week 02

HDRI

An HDRI stores the brightness of each pixel directly as floating-point values: bright areas can be very bright and dark areas very dark, so the image can be used directly as a light source.

LDR images are mostly quantized to 8 bits, with brightness compressed into 0-255, so the true brightness at a given pixel position cannot be known. Forcibly boosting the brightness produces obvious quantization artifacts, so LDR images cannot be used directly for lighting.
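A standalone sketch of that quantization problem, using a hypothetical pixel ten times brighter than white: it survives in floating point but clips to 255 in 8-bit storage, so its true brightness can never be recovered for lighting:

```python
import numpy as np

hdr = np.array([0.02, 0.5, 1.0, 10.0], dtype=np.float32)  # linear brightness

# 8-bit LDR storage: clamp to [0, 1], then quantize to 0-255.
ldr8 = np.round(np.clip(hdr, 0.0, 1.0) * 255).astype(np.uint8)
print(ldr8)        # [  5 128 255 255] -- 1.0 and 10.0 have collapsed together

# Boosting the LDR values afterwards only amplifies the quantized steps;
# the 10x highlight that the HDRI preserved is gone.
print(ldr8.astype(np.float32) / 255 * 10)
```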

During the whole process, the only problem was that the projection material could not be used, which I was unable to solve, so in the end I used Arnold to build another projection material.

Self-made Projection material:

This is my final effect:

Fighting Sequence Previs: Week 03

This week I mainly worked on the modeling of the female character. For efficiency, I decided to modify and extend a basic human body model to reach the target look quickly.

I found a D.Va model on Sketchfab that is very suitable, since our concept design mainly references Overwatch. Starting from such a base female model makes it quick to capture the overall shape, and even more conveniently, the topology of this base model is already very good, which saves time on retopology.

First, I split the model into several parts: the face only needs refinement and its topology does not need to change; the hands can be used directly; the main parts to modify are the body and legs. I also separated the facial features to make them easier to modify later.

I did most of the modeling and adjustment in ZBrush. This is the model after modifying the body shape and topology.

For her hair, I found a very suitable ZBrush brush. In ZBrush, all brushes with "Curve" in the name can create geometry attached to the surface of the model, which let me quickly achieve a good hair effect.

For the leg armor and underwear, I used Extract in ZBrush. By painting a mask on the surface of the model, I can extract a mesh of the same shape; after a simple cleanup of the edges and edge flow, I get the rough form.

The mask and the metal details on the face were made in the same way.

Next came the jacket, which is very important. Considering the fold detail this jacket needs, I decided to learn Marvelous Designer. This software is really amazing: simple and fast. This is the effect I finally achieved in MD.

Then I refined some fold details in ZBrush, retopologized, and finally added simple base materials in Maya.

This is the final effect.

Motion capture, Matchmove, and Rotomation

Mocap: trackers are placed on the key parts of the moving subject; the motion capture system records each tracker's position and, after computer processing, produces three-dimensional coordinate data. The main technologies at present are optical motion capture and inertial motion capture. In production, optical capture is more common because it has higher accuracy.

Matchmove: after the live-action shoot, computer graphics are matched to the filmed motion so that they can be composited onto, or interact with, elements in the frame. There are two main types of matchmove tracking: camera tracking (CameraTrack) and object tracking (ObjectTrack).

Camera tracking converts the camera information of the live-action shot into digital data for the downstream software, so that the camera position and motion of the CG elements match the plate, just like we did in 3DEqualizer.

Then there is object tracking. These objects are relatively simple: basically they only change in distance and perspective, without much deformation of their own, such as the license plate on a moving car or a billboard in the street. But to replace more complex, deformable subjects, such as characters, Rotomation is needed.

Rotomation: mainly used for digital character replacement, including clean-up (removing elements such as wires and the original character) and tracking the character's performance.

VFX vs Film vs Game Animation

VFX animation: mostly used in live-action movies, where the goal is usually to be as realistic as possible, for example with natural disasters. Even when making unreal creatures, such as supermen or monsters, realism should remain the goal in order to convince the audience. VFX is also used in games, but with a different production mindset: besides an appropriate style and dynamics, performance also has to be considered.

Film animation: fully animated films from Disney or Pixar may have realistic lighting, shadows, and hair, but the characters and scenes are more stylized, and the character performance is more dynamic.

Game animation: games reuse many animations, most of which are loops; continuity between actions matters more, and the animation must give timely, clear feedback to the player's input. As a result, game animation may not carry as much stylized performance.

Previs, Postvis and Techvis

Previs is short for previsualization; some people also write PreViz, which means the same thing. So what exactly is previs? Put plainly, the shots to be filmed are roughly built in 3D software before shooting starts. The main purpose is to check the camera angle, camera position, composition, actor positions, and so on.

Postvis: during filming, many green-screen and other on-set shots are captured. Before these shots enter full effects production, it is necessary to preview roughly how the shot will look with the effects added. For example, after the shoot, the character is rotoscoped out of the green screen, follows the camera set in previs, and is composited into a rough 3D environment. The purpose is to save manpower and money: effects production is very expensive, and going straight into it without this step makes mistakes very costly, so the effects are previewed in advance. Postvis also checks the match between the actors and the digital characters, textures, scenes, and so on from the shoot.

Almost every current blockbuster depends on these steps, and Avengers: Infinity War is no exception: around 3,000 previs and postvis shots were produced for it, by editor Jeff Ford, The Third Floor's senior previs/postvis supervisor Gerardo Ramirez, and Marvel's visual effects supervisor Dan DeLeeuw.

The Third Floor was responsible for the entire previs production of Avengers 3 and 4, which were shot at the same time, so a lot of the work had to overlap and all the production time had to be scheduled carefully. In the early stage, the previs director led the team to spend most of their time on creative aspects such as layout, cameras, timing, and animation. The artists first made rough, untextured versions of the shots, which the previs editor then cut together. During editing, cuts and rhythm were tested, and occasionally rough indications of the beats of an event were created. Once the artists saw the edit, they focused on the animation, lighting, and effects of those shots.

The role of Previs

Previs was made to help develop the action and tell the story, including visualizing large set pieces and fight scenes, so the previs team relied heavily on the story, script, and rhythm specified by the director. First the previs footage is edited for review; then everyone sits down and brainstorms ideas about the story, to see whether a story point can be lifted to a better level. Starting from previs, the production team begins to enumerate and sketch the technical points and components needed in each shot, as well as what will be involved in the live-action shoot. The workflow also includes more specific character-development tasks tied to the look of the environment, including powers, fighting style, and the gadgets used.

Previs can also help determine which characters are required for each shot. There are more than 60 characters in the movie, and part of previs is deciding which character should be in which shot, what they should do, and when they appear. The hardest part is not the number and sheer scale of the characters, but how to use them throughout the story. Previs and postvis are also closely integrated with the live-action work.

Techvis

In addition, the production team produced hundreds of techvis diagrams for this film, mainly to provide technical solutions: how to actually realize the content shown in previs. Examples include dolly track positions, camera angles, green-screen distances, the relationship between the previs camera and the characters, the path of the sun, and how to shoot complex character action. Take the scene with Thor, Rocket, and the dwarf: the director wanted the actors to play off each other, but the actor who plays the dwarf is short while the movie portrays him as a giant, so how to set up the camera was the main problem techvis solved. Techvis was also used to model and predict the best light and shadow positions for each shooting date, and so on.

Fighting Sequence Previs: Week 02

Last week everyone's work went fairly smoothly: some drafts were made for the storyboard, and a design drawing for one character was completed.

Based on the storyboard, a basic layout of the scene has also been produced.

A visual effects test also produced a good result.

This week I regrouped the members so that everyone can switch and try other tasks.

The environment has been basically finalized, and Layout will use the current scene file to create the camera animation.

In addition, regarding the 2D motion blur solution: after consulting Mehdi, I now have a preliminary plan in Houdini. Analyzing the whole effect, and combining it with the week 2 Houdini tutorial, I came up with the idea of deforming the model to leave a trail.

First, there should be a step where you can select the vertices you want to deform; then you can get an offset direction from the positions of those vertices in the previous frame; finally, a control value blends each vertex between its current position and its previous-frame position (like a blendshape). A rough sketch of this idea is below.
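Here is a minimal sketch of that blend as a Houdini Python SOP, under a few stated assumptions: input 0 is the current frame's geometry; input 1 is the same geometry run through a Time Shift SOP set to $F-1, so the point counts match; and the selected vertices carry a float "mask" point attribute (1 where they should deform). The fixed blend value stands in for a control parameter:

```python
import hou

node = hou.pwd()
geo = node.geometry()                    # editable copy of input 0 (current frame)
prev_geo = node.inputs()[1].geometry()   # previous frame via a Time Shift SOP

blend = 0.5  # stand-in for a slider controlling the trail strength

has_mask = geo.findPointAttrib("mask") is not None
for pt, prev_pt in zip(geo.points(), prev_geo.points()):
    # Only drag back the points selected by the mask (e.g. chosen by normal).
    mask = pt.attribValue("mask") if has_mask else 1.0
    p = pt.position()
    q = prev_pt.position()
    # Lerp between current and previous-frame position, like a blendshape.
    pt.setPosition(p + (q - p) * (blend * mask))
```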

This is the current motion blur effect:

As you can see, the way of selecting vertices by normal still needs improvement. I have learned that there is a node in Houdini for drawing the mask by hand, so I will further improve this 2D motion blur solution.