Acting Feedback

This is the clip I chose:

Notice that there is no heavily exaggerated performance in this film, which is a big difference between live-action film and animation. In a live-action movie, the actor and the director create the role together, and many factors are not fully controllable, such as micro-expressions or small incidental movements. In animation, however, every element on screen is controlled by the creators, so as animators we have to consider the information conveyed by everything in the frame.

Exaggerated poses and broad ranges of motion are common techniques in animation. From this perspective, I divided the entire performance into three parts.

  • Passionate speech

We are going to win this war, because we have the best man!

In this line, the most important thing is the change in the character's mood, from impassioned fervor at the beginning to a daze. I added the gestures of waving and raising the head. To better guide the audience's eye, the character always performs with only one hand. I chose the word "best" as the emotional turning point: although the voice changes on the word "man", the general needs to spot the thin soldier slightly in advance, because the emotional change takes some time to transition.

  • Daze

After the general's mood changed, there is a period of sluggishness, and in this part I decided to deepen his disappointment. Having no dialogue means I can put more of the performance into the character's body and expression. So I had the thin soldier give the general an embarrassed smile: the soldier tries to relieve the awkwardness, but it only makes the general's disappointment worse. Here the general performs a familiar gesture of disappointment, wiping his face with an exaggerated slump of the body. The absence of dialogue leaves room for this kind of exaggerated physical expression.

  • Cheer up

And because they are going to get better, much better.

The general cheers himself up again after the disappointment. Moving from disappointment to encouragement involves a rising breath. The general, who had been bent over, stands up again here, but no longer looks up; he lowers his head to speak and waves his hand, as if encouraging himself. At the end, I want the general to look at the thin soldier, or at the camera, to make the line "much better" feel more pointed and firm.

This is my current Acting record:

Lighting: Week 02

HDRI

An HDRI uses floating-point values to store the actual brightness at each pixel: bright areas can be extremely bright, dark areas extremely dark, and the image can be used directly as a light source.

LDR images are mostly quantized to 8 bits, with brightness compressed into 0-255, so the true brightness at a given pixel cannot be recovered. Forcibly boosting the brightness produces obvious quantization artifacts, so LDR images cannot be used directly for lighting.
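To make the clipping problem concrete, here is a minimal Python sketch. The helper names and the sample luminance values are illustrative assumptions, not part of any real pipeline: once an HDR value above 1.0 is clipped into the 0-255 range, the original brightness is gone.

```python
def to_ldr(luminance, exposure=1.0):
    """Quantize a linear HDR luminance value into an 8-bit LDR code (0-255)."""
    v = min(max(luminance * exposure, 0.0), 1.0)  # clamp: everything above 1.0 clips to white
    return round(v * 255)

def from_ldr(code):
    """Best-guess reconstruction: any true value above 1.0 is unrecoverable."""
    return code / 255.0

sun = 50.0   # an HDR pixel can store a light source 50x brighter than white
wall = 0.8
print(to_ldr(sun), to_ldr(wall))   # both land in 0-255, but the sun clips to 255
print(from_ldr(to_ldr(sun)))       # reconstructs only 1.0; the 50.0 is lost
```

This is why an HDRI can drive lighting directly while an LDR image cannot: the float pixel keeps the 50.0, the 8-bit code does not.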

During the whole process, the only problem was that the projection material could not be used, which I could not fix, so I finally built my own projection material in Arnold.

Self-made Projection material:

This is my final effect:

Fighting Sequence Previs: Week 03

This week I mainly worked on modeling the female character. For efficiency, I decided to modify and extend a basic human body model to achieve the effect quickly.

I found a D.Va model on Sketchfab that is very suitable, because our concept design mainly references Overwatch. Using such a basic female model makes it quick to capture the overall shape. Even more conveniently, the topology of this base model is already very good, which saves time on retopology.

First, I split the model into several parts: the face only needs refinement and its topology does not need to change; the hands can be used directly; the main parts to modify are the body and legs. In addition, I also separated the facial features to make them easier to modify later.

I did most of the modeling and adjustment in ZBrush. This is the model after modifying the body shape and topology.

For her hair, I found a very suitable ZBrush brush. In ZBrush, all the brushes with "Curve" in the name can create geometry attached to the surface of the model, which let me quickly achieve a good hair effect.

For the leg armor and underwear, I used Extract in ZBrush. By painting a mask on the surface of the model, I can extract a mesh of the same shape; after simply adjusting the edges and edge flow, I get the rough shape.

The mask and the facial metal details were made in the same way.

Next, the jacket is very important. Considering the fold details of this jacket, I decided to learn Marvelous Designer. This software is really amazing: simple and fast. This is the effect I finally achieved in MD.

Then I modified some fold details in ZBrush, retopologized the mesh, and finally added basic materials in Maya.

This is the final effect.

Motion capture, Matchmove, and Rotomation

Mocap: trackers are set up on the key parts of the moving object, the motion capture system records each tracker's position, and computer processing yields three-dimensional coordinate data. The main technologies today are optical and inertial motion capture; in production, optical motion capture is more common because of its higher accuracy.

Matchmove: after the live shoot, computer graphics are matched to the filmed motion so they can be composited into, or interact with, elements in the frame. There are two main types of Matchmove tracking: camera tracking (CameraTrack) and object tracking (ObjectTrack).

Camera tracking converts the camera information of the live-action shot into digital information for the downstream software, so that the camera position and motion of the live footage and of the CG elements match. This is just what we did in 3DE.

In addition, there is object tracking. These objects are relatively simple: basically only their distance and perspective change, without much deformation of the object itself, such as license plates on moving vehicles or billboards in streets. But to replace more complex, deformable objects, such as characters, Rotomation is needed.

Rotomation: mainly used for digital character replacement, including erasure (removing elements such as wires and the original actor) and tracking the character's performance.

VFX vs Film vs Game Animation

VFX animation: mostly used in live-action movies, where the goal is usually to look as real as possible, for example natural disasters. Even when making non-realistic creatures, such as a superhuman or a monster, realism should still be the goal, to convince the audience. VFX is also used in games, but with a different production mindset: besides an appropriate style and dynamics, the performance also needs to be considered.

Film animation: purely animated films from Disney or Pixar may have realistic light, shadow, and hair, but the characters and scenes are more stylized, and the performance animation is more dynamic.

Game animation: games reuse many animations, most of which are loops; continuity between actions matters more, and the animation must give timely, clear feedback to the player's inputs. As a result, game animation may not carry as much stylized performance.

Previs, Postvis and Techvis

Previs is short for previsualization; some write it PreViz, which means the same thing. So what exactly is Previs? Put simply, the shots to be filmed are roughly built in 3D software before filming starts. The main purpose is to work out the camera angle, camera position, composition, actor positions, and so on.

Postvis: during filming, many green-screen and other on-set shots are captured. Before these shots enter full effects production, it is necessary to see roughly what they will look like once the effects are added. For example, after a live shoot, the character is rotoscoped out of the green screen, follows the camera set in Previs, and is composited into a rough 3D environment. The purpose is to save manpower and money: effects production is expensive, and going straight into the effects stage without planning, then making mistakes, causes great losses, so the effects are previewed in advance. Postvis also checks the match between the actors and the digital characters, textures, scenes, and so on from the earlier shoot.

Almost all current blockbusters depend on these steps, and Avengers: Infinity War is no exception. A total of 3,000 Previs and Postvis shots were produced for Infinity War, led by editor Jeff Ford, The Third Floor's senior Previs/Postvis supervisor Gerardo Ramirez, and Marvel's visual effects supervisor Dan DeLeeuw.

The Third Floor was responsible for the entire Previs production of Infinity War and Endgame, which were shot at the same time, so a lot of the work had to overlap and all production time needed to be scheduled carefully. In the early stage, the Previs supervisor led the team to spend most of its time on creative aspects such as layout, camera, timing, and animation. The artists first made rough, untextured versions of the shots, and the Previs editor then cut these shots together. During editing, cuts and rhythm were tested, and occasionally indications of event pacing were added. Once the artists saw the edited result, they focused on the animation, lighting, and effects of those shots.

The role of Previs

Previs was made to help develop action and tell the story, including visualizing large set pieces and fight scenes, so the Previs team relied largely on the story, script, and rhythm specified by the director. First the Previs footage is edited for review; then everyone sits down and brainstorms ideas about the story, to see whether a story point can be lifted to a better level. Starting from Previs, the production team began to enumerate and draw up the technical points and components needed in each shot, what would be involved in the live shoot, and so on. The workflow also included more specific character-development tasks tied to the look of the environment, including powers, fighting style, and the gadgets used.

Previs can also help determine which characters are required for each shot. There are more than 60 characters in the movie, and part of Previs is deciding which character should be in which shot, what they should do, and when they appear. The hardest part is not the sheer number and scale of the characters, but how to use them throughout the story. Previs and Postvis are also tightly integrated with the live-action footage.

Techvis

In addition, the production team produced hundreds of Techvis charts for this film, mainly to provide technical solutions: how to actually shoot what Previs shows. Examples include dolly-track positions, camera angles, green-screen distances, the relationship between the Previs camera and the character or the path of the sun, and how to shoot complex character action. For example, in the scene with Thor, Rocket, and the dwarf, the director wanted the actors to play off each other, but the actor playing the dwarf is short while the film portrays him as a giant; how to set up the camera was exactly the problem Techvis solved. Techvis was also used to model and predict the best light and shadow positions for each shooting date.

Fighting Sequence Previs: Week 02

Last week everyone's work went fairly smoothly: some drafts were made for the storyboard, and a design drawing for one character was completed.

Based on the storyboard, the basic layout of the scene has also been produced.

A visual-effects test also gave a good result.

This week I regrouped the members so that everyone can switch and try other tasks.

The environment has been basically finalized, and the layout will use the current scene file to make the camera animation.

In addition, regarding the 2D motion-blur solution: after consulting Mehdi, I have a preliminary plan in Houdini. Analyzing the whole effect, and combining it with the second week's Houdini tutorial, I came up with the idea of deforming the model to leave a trail.

First, there should be a way to select the vertices you want to deform. Then an offset direction can be obtained from the positions of those vertices in the previous frame. Finally, a control value blends each vertex between its current position and its previous-frame position (like a blendshape).
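The blending step above can be sketched in plain Python. The function name and the sample vertex positions are hypothetical; in Houdini this would be a VEX snippet operating on @P and the previous frame's positions.

```python
def blend_trail(p_curr, p_prev, amount):
    """Linearly blend a vertex from its current position toward its
    previous-frame position (a blendshape-style lerp).
    amount = 0.0 keeps the current pose; 1.0 snaps to the previous frame."""
    return tuple(c + (p - c) * amount for c, p in zip(p_curr, p_prev))

# a selected vertex: current frame vs previous frame (made-up coordinates)
curr = (1.0, 2.0, 0.0)
prev = (0.0, 2.0, 0.0)
print(blend_trail(curr, prev, 0.5))  # halfway back along its motion: (0.5, 2.0, 0.0)
```

Sweeping `amount` per vertex (stronger on trailing vertices, zero elsewhere) is what stretches the selected region into a trail.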

This is the current motion blur effect:

It can be seen that selecting vertices by normal still needs improvement. I have learned that there is a node called Drawthemask in Houdini; I will use it to further improve this 2D motion-blur solution.

Houdini: Week 03

Node:

Primitive: In Houdini, primitives refer to a unit of geometry, lower-level than an object but above points.

Voronoi: a division of the plane into cells. Its defining property is that any position inside a polygon is closer to that polygon's sample point (for example, a settlement point) than to the sample points of neighboring polygons, and each polygon contains exactly one sample point. Because Thiessen polygons partition space evenly, they can be used to solve problems such as the nearest point and the smallest enclosing circle, and many spatial analyses such as adjacency, proximity, and accessibility.
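The defining nearest-sample property can be demonstrated in a few lines of Python. This is a brute-force illustration with made-up sample points, not how Houdini's Voronoi Fracture node is actually implemented.

```python
def voronoi_cell(point, samples):
    """Return the index of the sample whose Voronoi cell contains `point`,
    i.e. simply the nearest sample: the defining property of the diagram."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(samples)), key=lambda i: dist2(point, samples[i]))

samples = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]  # assumed 2D sample points
print(voronoi_cell((0.5, 0.2), samples))  # nearest to sample 0
print(voronoi_cell((3.5, 1.0), samples))  # nearest to sample 1
```

Fracturing works the same way in 3D: every bit of the mesh is assigned to its nearest scattered point, and the cell boundaries become the crack surfaces.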

Point from volume: It is used to generate a regular set of points that fill a given volume.

VDB from polygons: This node can create a distance field (signed (SDF) or unsigned) and/or a density (fog) field.

VDB: compared with the usual mesh data of vertices, edges, and faces (essentially vertices plus indices) together with normals, VDB is better suited to objects with complex shapes such as smoke and fluids. Its recording method divides the space occupied by an object into many small cubic cells and records the density, velocity, and various custom attribute values for each cell.
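As a rough illustration of that recording method, here is a toy sparse volume in Python. The class and attribute names are invented for the example; a real VDB stores cells in a hierarchical tree of tiles rather than a flat dictionary.

```python
class SparseVolume:
    """Toy volume: per-cell attribute records instead of vertices and faces."""
    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.cells = {}  # (i, j, k) -> {"density": ..., "velocity": ...}

    def cell_index(self, pos):
        # which cubic cell a world-space position falls into
        return tuple(int(c // self.cell_size) for c in pos)

    def deposit(self, pos, density, velocity=(0.0, 0.0, 0.0)):
        self.cells[self.cell_index(pos)] = {"density": density, "velocity": velocity}

    def density_at(self, pos):
        # empty space costs nothing: unstored cells simply read as 0
        cell = self.cells.get(self.cell_index(pos))
        return cell["density"] if cell else 0.0

vol = SparseVolume(cell_size=0.5)
vol.deposit((0.1, 0.2, 0.0), density=0.8)
print(vol.density_at((0.3, 0.4, 0.1)))  # same cell: 0.8
print(vol.density_at((2.0, 2.0, 2.0)))  # empty cell: 0.0
```

The key property this sketch shares with VDB is sparsity: only occupied cells are stored, so huge mostly-empty smoke volumes stay cheap.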

Voxel: short for volume pixel. A volume made of voxels can be displayed with volume rendering or by extracting a polygonal isosurface at a given threshold.

Remesh: this node tries to maximize the smallest angle in each triangle. We can also "harden" certain edges so the remesher preserves them, which is useful for keeping sharp corners and seams.

Rest: This node creates an attribute which causes material textures to stick to surfaces deformed using other operations.

Attribrandomize: This node generates random values to create or modify an attribute.

Create Reference Copy: creates a copy of the node whose parameters all reference those of the original node.

Destruction animation divides into two parts: making the fractured model, and the simulation. Mehdi introduced three ways to make a fractured model:

  • Voronoi Fracture
  • Boolean (like cutting something)
  • RBD Material Fracture

Quickly make a wood-shattering effect:

Simulation:

It mainly uses the rigid-body solver in a DOP network. There are two important solvers, rbdsolver and bulletrbdsolver; here we can just use bulletrbdsolver.

Assemble: The assemble node cleans up the geometry. The following screen captures show what the geometry looks like before using the assemble tool, and after.

Create packed geometry: If the output geometry contains a primitive attribute called “name”, a packed fragment will be created for each unique value of the name attribute. Additionally, a point attribute called “name” will be created to identify the piece that packed fragment contains.

Using a proxy during the simulation speeds it up.

Ways to establish a proxy:

Transform Pieces: This node can be used in combination with a DOP Import node in Create Points to Represent Objects mode to transform the results of a multi-piece RBD simulation.

This applies the results of the proxy simulation to the original, more complex model.

Make a wood-cabin destruction effect:

Apply the earlier method for quickly making a wood-shattering effect to the wood cabin.

This is my final result:

Lighting: Introduction

Pick 3 frames from one of your favorite films and try to identify the lighting techniques we learned.

The Prestige: low-key lighting, hard light, strong contrast. Robert is greeting the audience; the light in front of him contrasts sharply with the darkness behind him.

The Dark Knight: color contrast, hard light. Blue sky against yellow fire; Batman's silhouette is outlined by the light, clearly expressing his gloom.

Memento: color, three-point lighting. The lighting focuses the audience's attention on the character's expression. At the same time, as the film progresses, the audience gradually realizes the difference between the timelines shown in black-and-white and in color.

Houdini: Week 02

This week's Houdini class covered a lot of content, mainly the usage of some nodes and the basics of building a simulation. I will organize it into two parts.

Part 1: Node

Blast: Blast is designed to remove geometry that you select interactively in the viewport, as opposed to the Delete node, which is a more procedural tool.

Mountain: this operator uses the normal attribute of the input geometry; if a normal has zero length, the point is not displaced along it. The node can make a plane produce the effect of undulating mountains.

Attribwrangle: This is a very powerful, low-level node that lets experts who are familiar with VEX tweak attributes using code.

Using this node requires some understanding of the VEX programming language; conveniently, the node also provides some simple ready-made snippets.

AttribVOP: double-click this node to build a VOP network inside. It modifies geometry attributes. Compared with the Attribwrangle node it is easier to learn, because it uses a VOP network instead of code.

The P attribute can be understood as the coordinate of each point; adding some noise to it changes the shape of the model. The Bind node binds an attribute of the input geometry into the VEX function: for example, if you bind force, you can pick up the attribute called force from your input geometry.

Part 2: Simulation

After I finish the simulation, I think the whole process can be divided into three parts.

The first part is selection: we select the region where we want particles to appear. This can be done with a Delete node, keeping the desired part by restricting the normal range.

Of course, we can also perform other operations before this, such as a Scatter node, which generates random points on the surface of the model and controls how many points are selected. You can also add a point velocity attribute.
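The idea of selecting points by their normal direction can be sketched in Python. The function name, threshold, and sample normals are assumptions for illustration; in Houdini this is what the Delete node's normal-range option does procedurally.

```python
def select_by_normal(normals, direction, threshold=0.5):
    """Keep indices of points whose normal points roughly along `direction`
    (dot product above the threshold), mimicking selection by normal range."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    return [i for i, n in enumerate(normals) if dot(n, direction) > threshold]

# one up-facing, one down-facing, one tilted unit normal (assumed data)
normals = [(0.0, 1.0, 0.0), (0.0, -1.0, 0.0), (0.8, 0.6, 0.0)]
print(select_by_normal(normals, (0.0, 1.0, 0.0)))  # upward-ish points: [0, 2]
```

Lowering the threshold widens the cone of accepted normals, which is exactly the "normal range" control in the node.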

Once this is done, you can create a Null node as the output, which also makes this point in the network easy to find.

The second part of the simulation is mainly focused on a node called Popnet, which belongs to the DOP Network node. The DOP Network Object contains a DOP simulation. Simulations can consist of any number of simulation objects (which do not correspond to Objects that appear in /obj). These simulation objects are built and controlled by the DOP nodes contained in this node.

The POP Source refers to the points we just selected, and the POP Solver computes the motion of the generated particles. Before them, we can add many different forces to change the particles' motion. POP Object converts a regular particle system into a dynamic object capable of interacting correctly with other objects in the DOP environment.

The calculated result can be cached by the filecache node for easy viewing.

But it isn't finished yet; nothing shows up in the render. So the third step is to give the particles visible geometry.

Here you need the Copy to Points node, which copies the geometry in the first input onto the points of the second input. Before that, we can also add size and color variation to the particles, using the AttribVOP node mentioned earlier.

The Fit Range node takes a value in the source range (srcmin, srcmax) and shifts it to the corresponding value in the destination range (destmin, destmax); values outside the source range are clamped to it.

Here, age is the value and life defines the source range, which lets the particle attributes change with age without exceeding each particle's lifetime. Then, by adjusting the destination min and max, the particle size can be changed over time. The same works for color: just add a Ramp node to make the particles change color over their lifetime.
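The fit behavior described above can be modeled with a small Python function; the remap of age to particle size uses assumed example numbers.

```python
def fit(value, srcmin, srcmax, destmin, destmax):
    """Fit-Range-style remap: clamp value to the source range, then map it
    linearly to the destination range."""
    v = min(max(value, srcmin), srcmax)
    t = (v - srcmin) / (srcmax - srcmin)
    return destmin + t * (destmax - destmin)

# particle age remapped to a size that shrinks from 1.0 to 0.2 over its life
life = 2.0
print(fit(0.0, 0.0, life, 1.0, 0.2))  # newborn particle: full size 1.0
print(fit(2.0, 0.0, life, 1.0, 0.2))  # end of life: ~0.2
print(fit(5.0, 0.0, life, 1.0, 0.2))  # past its life: clamped, still ~0.2
```

Feeding age as the value and life as srcmax is exactly why the attribute never overshoots, no matter how long the particle survives.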