Solo Project: Week1

Full-process animation creation based on UE4.

I started thinking about my personal project in the second semester. While working on that semester's collaborative project, I considered reusing some of its assets in future work. A project that always starts from zero spends a lot of time on preliminary work, yet my technical skills barely improve (something I learned from my undergraduate experience).

My inspiration actually came from this video:

In this video, the speaker introduces the animation pipeline of Overwatch in detail, and I am very interested in that process.

During this period I gradually settled on my future research and work direction: I will learn technologies built around real-time rendering engines, with TA or TD as my career goal. Therefore, this semester I will create a personal project that supports that direction. I will use a variety of DCC software to make assets, then bring them into UE4, where I will build the shaders and materials and render the final result. The final output should be a series of animations similar to the in-game play animations or victory pose animations in Overwatch.

In the first week, I mainly gathered references and drew a simple concept picture of the character:

Collaborative Project: Week1

This semester I am working with Karl and Sean, and we all want to learn more about Unreal Engine 4. Making an animated short film in UE4 is a good way to learn it. In the beginning, our inspiration came from a small eyeball.

Karl showed us the eyeball he made by following a tutorial, and the result is striking. The eye comes from an official UE4 livestream:

Our thinking expanded from how to use this eyeball. The first idea was to make realistic characters, which is what Karl is good at and which also interests me. The next idea was a Cthulhu element. I happened to be reading Tanabe’s comic “The Call of Cthulhu” recently. His work is exquisitely drawn and realistic, and the double-page spreads are amazing, using extreme compositions to show all kinds of indescribable things to the reader.

So we imagined a short story plot:

An explorer discovers a village that has clearly been abandoned for a long time. The interiors and buildings carry elements of Cthulhu culture. There is a church in the village which has obviously been remodeled: the statues of the gods once worshipped there are shattered, replaced by a statue of a terrifying Cthulhu monster, and the bodies of worshippers lie inside. The explorer walks around to the back of the church, where a strange sound comes from the dense forest. When he pushes aside the bushes, a huge eye stares back at him.

Based on this story, I made a few mood images in Photoshop.

Afterwards, I discussed the production direction with Karl and Sean. Sean will mainly model scenes and props, Karl will focus on building the scenes in UE4, and I will be responsible for the characters.

Houdini: week05

Node:

Bend: This node lets you define a capture region around (or partially intersecting) a model and apply deformations to the geometry inside the capture region.
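As a quick experiment, here is a minimal sketch of wiring up a Bend SOP through Houdini’s Python API (hou). The node type names should match recent Houdini builds, but the parameter name set at the end is an assumption, so check the node’s parameter interface if it differs:

```python
import hou

# Build a small test network: a stand-in model feeding a Bend SOP.
geo = hou.node("/obj").createNode("geo", "bend_demo")
pig = geo.createNode("testgeometry_pighead")   # placeholder model to deform
bend = geo.createNode("bend")
bend.setFirstInput(pig)

# Assumed parameter name for the bend angle; verify on the node's interface.
bend.parm("bend").set(90)
bend.setDisplayFlag(True)
```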

Volumetric light:

When an occluding object is lit by a light source, the light scattering in the air around it leaks past the edges as visible radial shafts; this is called volumetric light. Compared with the flat lighting in older games, this effect gives a strong visual sense of space and provides more realistic and distinctive visuals.

Volumetric light environment material:

Render:

Smoke&Flame&Explosion

Node:

Volume Rasterize Attributes: The Volume Rasterize Attributes SOP takes a cloud of points as input and creates VDBs for its float or vector attributes.
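To make the idea of rasterizing point attributes into a volume concrete, here is a toy Python/NumPy sketch (not the actual SOP, which also filters and splats smoothly): each point simply deposits its density attribute into the voxel cell its position falls in.

```python
import numpy as np

# Toy rasterization of a point cloud attribute into a voxel grid.
res, size = 16, 1.0
grid = np.zeros((res, res, res), dtype=np.float32)

points = np.random.default_rng(1).random((200, 3)) * size   # positions in 0..1
density = np.full(200, 0.5, dtype=np.float32)                # per-point attribute

# Which cell each point lands in; accumulate its density there.
ijk = np.minimum((points / size * res).astype(int), res - 1)
np.add.at(grid, (ijk[:, 0], ijk[:, 1], ijk[:, 2]), density)

print(grid.sum())   # 100.0 -> total density carried over from the points
```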

In Houdini, naming is very important. Several field names matter when making fire and explosion effects: density, temperature, fuel, etc., each of which controls a different variable.

Distance field: Creates a signed distance field (SDF). An SDF stores, in each voxel, the distance to the surface (negative if the voxel is inside). Once the distance field is computed, the closest distance from any point in the field to the reference geometry can be looked up.
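A small NumPy sketch of the same idea, sampling the signed distance to a sphere on a voxel grid, with negative values inside:

```python
import numpy as np

# Signed distance field for a sphere sampled on a voxel grid.
res = 32                      # voxels per axis (illustrative resolution)
radius = 0.4
xs = np.linspace(-0.5, 0.5, res)
x, y, z = np.meshgrid(xs, xs, xs, indexing="ij")

sdf = np.sqrt(x**2 + y**2 + z**2) - radius   # distance to the sphere surface

print(sdf[res // 2, res // 2, res // 2])     # centre voxel: negative (inside)
print(sdf[0, 0, 0])                          # corner voxel: positive (outside)
```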

Density field: Creates a density (fog) field. Voxels in the band around the surface store 1 and voxels outside store 0. The width of the filled band is controlled with Interior Band Voxels. Turn on Fill Interior to create a solid VDB from a watertight surface instead of a narrow band.

Smokesolver

The smoke solver provides the basics of smoke simulation. If you just want to generate smoke, the smoke solver is useful since it is simpler and expert users can build their own extensions on it. However, the Pyro Solver is more flexible.

smokesolver_sparse

The Smoke Solver is able to perform the basic steps required for a smoke simulation. Pyro Solver (Sparse) extends the functionality of this solver by adding flame simulation along with extra shaping controls.

Pyrosolver_sparse

This node is an extension of the Smoke Solver (Sparse). It considers an extra simulation field (flame, which captures the presence of flames) and adds some extra shaping parameters to allow for more control over the emergent look.

Density: smoke density.

Temperature: it controls how fast the smoke rises and spreads; the hotter it is, the faster it moves.
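The “hotter is faster” behaviour usually comes from a buoyancy force proportional to temperature. A toy Python sketch of one solver step under that assumption (the constant names are illustrative, not Houdini parameters):

```python
import numpy as np

# Hotter voxels pick up more upward velocity per step.
buoyancy = 2.0          # strength of the buoyant force
ambient = 0.0           # ambient temperature
dt = 1.0 / 24.0         # one frame at 24 fps

temperature = np.array([0.1, 0.5, 1.0])   # three sample voxels, cold to hot
vel_y = np.zeros(3)                       # vertical velocity per voxel

vel_y += buoyancy * (temperature - ambient) * dt
print(vel_y)   # [0.0083, 0.0417, 0.0833] -> the hottest voxel rises fastest
```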

The nodes of flame
The nodes of explosion

These are my final effects.

Houdini: week04

Linear workflow

A linear workflow means that when the software performs its calculations, all of the data involved is linear. Almost all of the image-processing software we use computes linearly, i.e. 1+1=2, 2*2=4, 128+128=256. For those results to be correct, all of the textures, inputs, and lights fed into the calculation must already be linear data.

sRGB (standard Red Green Blue) is a color standard jointly developed by Microsoft and HP, together with imaging companies such as Epson. It provides a standard way to define colors, so that displays, printers, scanners, and application software all share a common language for color.

Gamma correction

Gamma correction edits the gamma curve of an image to apply non-linear tone adjustment: it identifies the dark and bright parts of the image signal and changes the ratio between them, improving contrast. In computer graphics, the curve relating the screen’s input voltage to the resulting brightness is called the gamma curve.

For a traditional CRT (Cathode Ray Tube) screen, this curve is usually a power function, Y = (X + e)^γ, where Y is the brightness, X is the signal voltage, e is a compensation coefficient, and the exponent γ is the gamma value. Changing γ changes the gamma curve of the CRT. A typical correction gamma is 0.45, which compensates for the CRT’s native response so that the displayed brightness appears linear. For displays such as CRT television sets, the luminance response to the input signal is not linear but follows a power law, so it must be corrected.
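To make the linear-workflow point concrete, here is a small Python sketch using the common 2.2 approximation of the display curve (rather than the exact piecewise sRGB formula): arithmetic done directly on gamma-encoded values gives a different, darker answer than arithmetic done in linear light.

```python
# Convert sRGB values to linear light, do the math there, convert back.
GAMMA = 2.2

def srgb_to_linear(v):           # v is a normalized 0..1 sRGB value
    return v ** GAMMA

def linear_to_srgb(v):
    return v ** (1.0 / GAMMA)    # encoding gamma of roughly 0.45

# Averaging a black pixel and a white pixel:
naive = (0.0 + 1.0) / 2.0                                      # math done in sRGB
linear = linear_to_srgb((srgb_to_linear(0.0) + srgb_to_linear(1.0)) / 2.0)

print(naive)    # 0.5   -> too dark
print(linear)   # ~0.73 -> the physically correct mid-point, re-encoded to sRGB
```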

ACES

ACES (Academy Color Encoding System) is an open color management and interchange system developed by the Academy of Motion Picture Arts and Sciences (AMPAS) and industry partners.

Color depth

In computer graphics, color depth is the number of bits used to store the color of one pixel in a bitmap or video frame buffer, also measured in bits per pixel (bpp). The higher the color depth, the more colors are available.

Color depth is described as “n-bit color”: with a depth of n bits there are 2^n possible colors, and each pixel is stored in n bits.
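A quick check of the 2^n relationship in Python:

```python
# Number of representable colors for a few common bit depths.
for bits in (1, 8, 16, 24):
    print(f"{bits}-bit color -> {2 ** bits:,} colors")
# 1-bit -> 2, 8-bit -> 256, 16-bit -> 65,536, 24-bit -> 16,777,216
```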

Light type

Mantra & Arnold

After this lesson, I think Arnold renders simple scenes much faster than Mantra, but the user experience, especially Arnold’s handling of textures, is not great: during my work I repeatedly ran into renders where the textures failed to display. I also spent a lot of time on a UV problem, because Houdini does not create planar UVs automatically. Lesson remembered: UVs matter for texturing.

Some people online have compared the two renderers on caustics, and Mantra is significantly better than Arnold in that respect. Overall, the two renderers each have their advantages and disadvantages, but I prefer Arnold and hope that HtoA keeps being updated.

Mantra Arnold

Here are some of my renderings:

Arnold

Mantra

Motion blur:

Acting Feedback

This is the clip I chose:

You can see that there is no very exaggerated performance in the film, which is a big difference between live action and animation. In a live-action movie, the actor and the director create the role together, and there are many uncontrollable factors such as micro-expressions or small incidental actions. In animation, everything on screen is controlled by the creators, so as animators we have to consider the information conveyed by every element.

Exaggerated poses and ranges of motion are a common technique in animation. With that in mind, I divided the entire performance into three parts.

  • Passionate speech

We are going to win this war, because we have the best man!

In this line, the most important thing is the change in the character’s mood, from impassioned at the beginning to dazed. I added a gesture of waving and raising the head, and to better guide the audience’s eye, the character always performs with only one hand. I chose the word “best” as the emotional turning point: although the voice changes on the word “man”, the general needs to spot the thin soldier slightly earlier, because the emotional change takes some time to transition.

  • Daze

After the general’s mood changes there is a period of sluggishness, and in this part I decided to deepen his disappointment. Having no dialogue means I can put more of the performance into the character’s body and expressions. So I had the thin soldier give the general an embarrassed smile; the soldier is trying to relieve the awkwardness, but it only makes the general’s disappointment worse. Here the general shows a familiar gesture of disappointment, wiping his face, and the exaggerated body language allows for an exaggerated expression.

  • Cheer up

And because they are going to get better, much better.

The general cheers himself up again after the disappointment. Going from disappointment to cheering up involves a moment of gathering his breath. The general, who had been bent over, straightens up here, but no longer looks up; he lowers his head to speak and waves his hands as if encouraging himself. At the end I want the general to look at the thin soldier, or at the camera, to make the line ‘much better’ feel more pointed and firm.

This is my current acting recording:

Lighting: week02

HDRI

An HDRI stores the brightness of each pixel directly as floating-point values: bright areas can be very bright and dark areas very dark, so the image can be used directly as a light source.

LDR images are mostly quantized to 8 bits, with brightness compressed to 0–255, so the real brightness at a given pixel cannot be recovered. Forcing the brightness up or down causes obvious quantization artifacts, so LDR images cannot be used directly for lighting.
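A tiny NumPy sketch of that difference: the HDR pixel keeps its real luminance as a float, while the 8-bit pixel is clipped and the information is gone for good.

```python
import numpy as np

# A very bright highlight stored as HDR float vs clipped 8-bit LDR.
hdr_sun = np.float32(50.0)                              # 50x brighter than "white"
ldr_sun = np.uint8(np.clip(hdr_sun, 0.0, 1.0) * 255)    # stored as plain 255

# Lower the exposure by 4 stops (divide the light by 16):
print(hdr_sun / 16.0)            # 3.125  -> still over 1.0, the highlight survives
print((ldr_sun / 255.0) / 16.0)  # 0.0625 -> just a dim grey, the detail is gone
```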

During the whole process, the only problem was that the projection material did not work, and I could not solve it, so in the end I built another projection material with Arnold.

Self-made Projection material:

This is my final effect:

Motion capture, Matchmove, and Rotomation

Mocap: trackers are attached to key parts of the moving subject, the motion capture system records the trackers’ positions, and computer processing turns them into three-dimensional coordinate data. The main technologies today are optical and inertial motion capture; in production, optical capture is more common because of its higher accuracy.

Matchmove: after live-action footage is shot, the computer graphics are matched to its motion so that CG can be composited over, or interact with, the elements in the frame. There are two main types of matchmove tracking: camera tracking (CameraTrack) and object tracking (ObjectTrack).

Camera tracking converts the camera information of the live-action shot into digital data for the downstream software, so that the camera position and motion of the live plate and the CG elements match, just like we did in 3DE.

Object tracking, by contrast, deals with relatively simple objects that basically only change in distance and perspective without much deformation of their own, such as the license plate on a moving car or a billboard in the street. If you want to replace more complex, deformable objects, such as characters, then Rotomation is needed.

Rotomation: this is mainly used for digital character replacement, including paint-out (removing elements such as wires and the original actor) and tracking the character’s performance.

VFX vs Film vs Game Animation

VFX animation: mostly used in live-action movies, where the goal is usually to achieve effects that look as real as possible, such as natural disasters. Even when creating non-realistic creatures, such as superhumans or monsters, realism is still the goal, in order to convince the audience. VFX also appears in games, but that is a different production mindset: besides an appropriate style and dynamics, the performance also has to be considered.

Film animation: fully animated films from Disney or Pixar may have realistic light, shadow, and hair, but the characters and scenes are more stylized and the performance animation is more dynamic.

Game animation: in games, many animations are reused, most of them as loops; continuity between actions matters more, and the animation needs to give timely, clear feedback to the player’s input. As a result, game animation may not carry as much stylized performance.

Previs, Postvis and Techvis

Previs is short for previsualization (some people write PreViz, which means the same thing). So, what exactly is Previs? Put simply, the shots to be filmed are roughly built in 3D software before shooting starts. The main purpose is to check the camera angle, camera position, composition, actor positions, and so on.

Postvis: during filming, many green-screen and other on-set shots are captured. Before these shots go into full effects production, it is necessary to see roughly what they will look like once the effects are added. For example, after the shoot, the roto’d character from the green screen is composited into a rough 3D environment, following the camera set up in Previs. The purpose is to save manpower and money: effects production is extremely expensive, and going straight into it without checking, then making mistakes, means a big loss, so the effects are previewed in advance. Postvis is also used to check the match between the actors and the digital characters, textures, scenes, and so on from the earlier shoot.

Almost all current blockbusters depend on these steps, and Avengers 3 (Infinity War) is no exception: a total of around 3,000 Previs and Postvis shots were produced for it, led by editor Jeff Ford, The Third Floor’s senior Previs/Postvis director Gerardo Ramirez, and Marvel’s visual effects supervisor Dan DeLeeuw.

The Third Floor was responsible for all of the Previs on Avengers 3 and Avengers 4, which were shot at the same time, so a lot of the work had to overlap and all of the production time had to be scheduled carefully. In the early stage, the Previs director led the team to spend most of its time on the creative side, such as layout, camera, timing, and animation. The artists first made rough, untextured versions of the shots, then a Previs editor cut these shots together. During editing, cuts and pacing were tested, and occasionally rough indications of the rhythm of events were blocked in. Once the artists saw how the edit looked, they focused on the animation, lighting, and effects of those shots.

The role of Previs

Previs was made to help develop the action and tell the story, including visualizing the large set pieces and fight scenes, so the Previs team relied heavily on the story, script, and pacing specified by the director. First the Previs footage is edited for review, then everyone sits down and brainstorms ideas about the story, to see whether a story point can be lifted to a better level. Starting from Previs, the production team begins to list and sketch the technical points and components each shot will need, what will be involved in the live-action shoot, and so on. The workflow also includes more specific tasks around developing the characters within the environment, including their powers, fighting styles, and the gadgets they use.

Previs can also help determine which characters are required for each shot. There are more than 60 characters in the movie, and part of Previs is deciding which character should be in which shot, what they should do, and when they appear. The hardest part here is not the number or sheer scale of the characters, but how to use them throughout the story. Previs and Postvis also have to integrate closely with the live-action material.

Techvis

In addition, the production team produced hundreds of Techvis diagrams for this film. Techvis mainly provides technical solutions: how to actually achieve what is shown in Previs, for example dolly track positions, camera angles, green-screen distances, the relationship between the Previs camera and the characters or the path of the sun, and how to shoot complex character action. For example, in the scene with Thor, Rocket, and the dwarf, the director wanted the actors to play off each other, but the actor who plays the dwarf is quite short while the film portrays him as a giant; how to set up the camera for that is exactly the kind of problem Techvis solves. Techvis was also used to model and predict the best light and shadow positions for each shooting date, and so on.

Houdini: Week 3

Node:

Primitive: in Houdini, a primitive is a unit of geometry, lower-level than an object but above points.

Voronoi: a Voronoi diagram is a partition of the plane. Its defining property is that every position inside a cell is closer to that cell’s sample point (for example a settlement point) than to the sample points of neighboring cells, and each cell contains exactly one sample point. Because Voronoi (Thiessen) polygons divide space evenly in this sense, they can be used for nearest-point queries, smallest enclosing circles, and many spatial-analysis problems such as adjacency, proximity, and accessibility analysis.
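A minimal NumPy sketch of that nearest-sample-point property, which is also what carves out the pieces when a Voronoi fracture scatters points inside a model:

```python
import numpy as np

# Assign arbitrary positions to the Voronoi cell of their nearest sample point.
rng = np.random.default_rng(0)
samples = rng.random((5, 2))          # five 2D sample points (cell centres)
queries = rng.random((3, 2))          # arbitrary positions to classify

# Distance from every query to every sample; the closest sample owns the point.
dists = np.linalg.norm(queries[:, None, :] - samples[None, :, :], axis=2)
cell = dists.argmin(axis=1)
print(cell)   # index of the Voronoi cell each query position falls into
```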

Points from Volume: this node generates a regular set of points that fill a given volume.

VDB from polygons: This node can create a distance field (signed (SDF) or unsigned) and/or a density (fog) field.

VDB: compared with the usual mesh data of vertices, edges, and faces (essentially vertices plus indices) and their normals, VDB is better suited to objects with complex shapes such as smoke and fluids. It records a volume by dividing the object’s space into many small cube cells and storing density, velocity, and various custom attribute values for each cell.
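A toy Python sketch of the sparse-volume idea: only occupied cells are stored, each with its own density, velocity, and custom attributes. Real VDB uses a tree of tiles and leaf nodes, but the principle is the same.

```python
from dataclasses import dataclass, field

@dataclass
class SparseVolume:
    voxel_size: float = 0.1
    cells: dict = field(default_factory=dict)   # (i, j, k) -> attribute dict

    def set(self, ijk, **attribs):
        self.cells[ijk] = attribs                # only touched voxels use memory

    def density(self, ijk):
        return self.cells.get(ijk, {}).get("density", 0.0)   # empty space is 0

vol = SparseVolume()
vol.set((4, 10, 3), density=0.8, temperature=1.2, vel=(0.0, 0.5, 0.0))
print(vol.density((4, 10, 3)), vol.density((0, 0, 0)))   # 0.8 0.0
```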

Voxel: short for volume pixel. A volume made of voxels can be displayed either by volume rendering or by extracting a polygonal isosurface at a given threshold value.

Remesh: this node tries to maximize the smallest angle in each triangle. We can also “harden” certain edges to make the remesher preserve them, which is useful for keeping sharp corners and seams.

Rest: This node creates an attribute which causes material textures to stick to surfaces deformed using other operations.

Attribrandomize: This node generates random values to create or modify an attribute.

Create Reference Copy: creates a copy of a node whose parameters are channel-referenced back to the original node.

Making a destruction animation breaks down into two parts: creating the fractured model and running the simulation. Mehdi introduced three ways to make a fractured model:

  • Voronoi fracture
  • Boolean, like cutting something
  • RBD Material Fracture

Quickly making a wood-shattering effect:

Simulation:

The simulation mainly uses the rigid body solver inside a dopnet, which has two important variants: rbdsolver and bulletrbdsolver. Here we can just use bulletrbdsolver.

Assemble: The assemble node cleans up the geometry. The following screen captures show what the geometry looks like before using the assemble tool, and after.

Create packed geometry: If the output geometry contains a primitive attribute called “name”, a packed fragment will be created for each unique value of the name attribute. Additionally, a point attribute called “name” will be created to identify the piece that packed fragment contains.
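The grouping logic behind this is easy to sketch with Houdini’s Python API (hou): bucket the primitives by their “name” attribute, and each unique name becomes one piece. This is only an illustration of the idea (not the actual packing code), meant to be run in a Python SOP or the Houdini Python shell with `geo` pointing at fractured geometry that already carries a name attribute:

```python
from collections import defaultdict

def pieces_by_name(geo):
    # geo is a hou.Geometry with a "name" primitive attribute (e.g. from Assemble).
    attrib = geo.findPrimAttrib("name")
    if attrib is None:
        raise ValueError("geometry has no 'name' primitive attribute")
    buckets = defaultdict(list)
    for prim in geo.prims():
        buckets[prim.stringAttribValue("name")].append(prim.number())
    return buckets   # e.g. {"piece0": [0, 1, 2], "piece1": [3, 4], ...}
```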

Use a proxy during the simulation to speed it up.

Ways to establish a proxy:

Transform Pieces: This node can be used in combination with a DOP Import node in Create Points to Represent Objects mode to transform the results of a multi-piece RBD simulation.

This lets us apply the results of the proxy simulation back to the original, more complex model.

Making the wood-cabin destruction effect:

I applied the earlier quick wood-shattering method to the wood cabin.

This is my final result: