This semester I worked with Karl and Sean, and we all wanted to learn more about Unreal Engine 4. Making an animated short film in UE4 seemed like a good way to learn. In the beginning, our inspiration came from a small eyeball.
Karl showed us the eyeball he had made by following a tutorial, and the result was striking. The eye comes from an official UE4 livestream:
Our thinking expanded from how to use this eyeball. The first idea was to make realistic characters, which is what Karl is good at and something I am also very interested in. The other element we thought of was Cthulhu. I happened to be reading Tanabe’s comic adaptation of “The Call of Cthulhu” at the time. His work is exquisitely drawn and realistic, and the double-page spreads are stunning, using extreme compositions to present all kinds of indescribable things to the reader.
So we imagined a short story plot:
An explorer discovers an abandoned village that has clearly been deserted for a long time. The interiors and buildings carry elements of Cthulhu culture. There is a church in the village, which has obviously been remodeled: the god once worshipped there lies shattered, replaced by a statue of a terrifying Cthulhu monster, and the bodies of worshippers lie inside. The explorer walks behind the church, and a strange sound comes from the dense forest. When he pushes aside the bushes, a huge eye is staring right at him.
Based on this story, I made a few atmosphere paintings in Photoshop.
Afterwards, I discussed with Karl and Sean to set our production direction: Sean would mainly build models for scenes and props, Karl would focus on the scenes in UE4, and I would be responsible for the characters.
The way this Previz collaboration developed kept surprising me. At first, Sean and I thought it would just be the two of us working with a classmate from VFX to develop a short story, mainly a brief, atmosphere-driven Previz. But as more students joined the group, the scope gradually expanded, and it eventually became a Previz with a one-and-a-half-minute plot, covering character design, modeling, rigging, animation, effects, and sound: almost a full animation pipeline.
In the beginning, we tried to discuss our progress once a week. This is our initial division of labor.
First I worked with Yufei on the girl: he was responsible for the design drawings, and I was responsible for modeling. Yufei has a wealth of work experience; he draws characters quickly and beautifully, which greatly helped my subsequent modeling work. After that, I studied rigging with Murray, who is very good at communicating and raises questions promptly. With Jay I mainly communicated about the effects; he is very proficient in Cinema 4D, and when I asked him some C4D questions outside the project, he patiently answered and demonstrated. Although Layne is in China, eight hours ahead of us, he took his work very seriously: he drew a lot of shots, tried his best to express every detail clearly, and after the final edit he produced very good music and sound effects that made the whole Previz feel more complete. Evanna from VFX doesn’t talk much, but she communicates efficiently, expresses herself clearly, understands quickly, delivered on time, and has her own distinctive ideas. Sean is a very good partner who is passionate about this project; he often discussed follow-up ideas and arrangements with me based on the project’s progress, and he has unique insights into storytelling and animation.
Later, as the work progressed and everyone’s tasks had largely settled, we each stayed in contact with the people responsible for related content, and Sean and I organized and coordinated everyone’s animation and effects work.
Throughout this process we used OneDrive for collaboration; anyone could upload and freely modify the assets there.
The biggest difficulty I ran into was how to distribute the work. At the start, everyone had their own preparatory tasks, such as researching rigging, designing characters and environments, or researching effects. But toward the later stages it became easy for some people to be overloaded while others had nothing to do. Since many students in the group major in 3D animation, I created six scene files while making the layout, one for each of the six students, and used the cameras to split the film into clips of different lengths: members with heavier workloads took shorter clips, and those with lighter workloads took longer ones. The final result was good, but it also brought a problem: everyone’s animation habits and styles are different, so the animation between clips was sometimes inconsistent, for example between the movement at the end of one shot and the start of the next. To deal with this, I later modified some shots to hide most of these problems.
For example:
This is the action at the junction between segments made by two different students; they could not be connected directly, so I modified the shots. These are the results.
This project has also greatly improved my own abilities. In the early stage I learned to make character models, using ZBrush, Maya, and Marvelous Designer. Marvelous Designer was new to me, and practicing on a real asset really sped up the learning. Although the model ended up with some body-structure errors, considering the production time I am still quite satisfied with the final result.
Besides modeling, there was also rigging. In this project I helped rig these two characters and learned AdvancedSkeleton5, especially facial expression rigging and cloth rigging, which I was unfamiliar with before. This collaborative project let me pick up that knowledge and those skills. At the same time, my original goal for joining the project has basically been achieved: the 2D-style motion blur effect I wanted to study has also made good progress, and with Mehdi’s help I used Houdini to add that motion effect to the animation.
Coming back to this project: now that I understand the workflow and how things are made, the part I keep coming back to is animation and animation technique. Modeling and rigging are also interesting, but what I want to learn most is animation: how to make better animation, and how to use techniques to make it look more appealing and more stylized. Technique and animation performance will be my next step. Technology and art complement each other: sometimes the needs of art push new technology, and sometimes new technology offers unique artistic effects. Next semester I will study animation further.
This is our final Previz:
PS: I am very grateful to Yu and Lucy, who kept giving us suggestions during production. It was a pleasure to learn and listen to music together, haha.
This week was the final effects and editing work. I received the effects image sequences and .abc files made by Jay and Evanna.
The .abc files can be imported directly into the scene with Cache -> Alembic Cache, after which the position and scale are adjusted to match.
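For reference, here is a minimal scripted version of the same import through Maya's Python API; the plug-in call is standard, but the file path and group name are placeholders.

```python
import maya.cmds as cmds

# Minimal sketch: import an .abc cache through Maya's Python API
# (equivalent to Cache -> Alembic Cache -> Import in the menu).
cmds.loadPlugin('AbcImport', quiet=True)               # make sure the Alembic importer is available
cmds.AbcImport('D:/previz/fx/explosion_debris.abc',    # placeholder path
               mode='import')

# After importing, match the cache to the scene by moving/scaling its top group, e.g.:
# cmds.setAttr('debris_grp.scale', 0.5, 0.5, 0.5, type='double3')
```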
For the smoke and explosion effects, I imported the image sequences into Maya as textures, assigned them to planes, and adjusted the frame offset.
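As a rough sketch, the same setup can be scripted; the file path and frame offset below are made-up values, and the expression simply drives the sequence from the timeline.

```python
import maya.cmds as cmds

# Build a file texture that steps through an image sequence, and a plane to show it on.
file_node = cmds.shadingNode('file', asTexture=True)
cmds.setAttr(file_node + '.fileTextureName', 'D:/previz/fx/smoke/smoke.0001.png', type='string')
cmds.setAttr(file_node + '.useFrameExtension', 1)                   # read the sequence frame by frame
cmds.expression(s='{0}.frameExtension = frame'.format(file_node))   # drive the frame number from the timeline
cmds.setAttr(file_node + '.frameOffset', -40)                       # shift so the effect starts on the right frame

plane = cmds.polyPlane(width=10, height=10)[0]
shader = cmds.shadingNode('lambert', asShader=True)
cmds.connectAttr(file_node + '.outColor', shader + '.color')
cmds.select(plane)
cmds.hyperShade(assign=shader)
```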
Finally, I exported all the animation made by my group members with Playblast, edited the clips in Premiere Pro, and matched their colors. After that, Jay added effects such as gunfire and glitches, and Layne made the sound effects. I would also like to thank Frank; his Japanese line is so cool!
This week I mainly produced part of the animation that I am responsible for.
This is the layout:
The most important part of this is the backflip of the girl. This is the reference I found:
There are also many switches between IK and FK. First, when the girl steps on the man’s chest, her feet are switched from IK to FK, and when she lands they are switched back from FK to IK. At the same time, because the knife in her hand is grabbed by the man, her right hand has to be switched to IK, and then back to FK after it leaves.
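A hedged sketch of how such a switch can be keyed from Python; the control and attribute names follow the AdvancedSkeleton convention I believe our rig uses (an FKIKBlend attribute running from 0 to 10), so treat them as assumptions and substitute the real names from the rig.

```python
import maya.cmds as cmds

# Hypothetical switch attribute on an AdvancedSkeleton-style settings control:
# 0 = fully FK, 10 = fully IK.
SWITCH = 'FKIKLeg_R.FKIKBlend'

def switch_to_fk(frame):
    """Key the limb fully in IK one frame before, then fully in FK on the switch frame."""
    cmds.setKeyframe(SWITCH, time=frame - 1, value=10)
    cmds.setKeyframe(SWITCH, time=frame, value=0)

def switch_to_ik(frame):
    cmds.setKeyframe(SWITCH, time=frame - 1, value=0)
    cmds.setKeyframe(SWITCH, time=frame, value=10)

# e.g. if the girl plants her foot on the man's chest around frame 120:
switch_to_fk(120)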
This week I helped Murray and Sean create and modify the rigging of the male character. The split was that Sean is responsible for the rigging of the man’s robotic arm, while Murray and I mainly handle the rigging of the character’s body and face.
Body Rigging
This part is basically the same as rigging the female character, but it is worth noting that the character’s coat has a long hem. That part cannot simply be skinned to the legs; it looks very strange. Recalling the earlier approach of using add influence, I realized that here I could instead directly create joints that follow the Hips, so that which parts of the clothes are controlled is already decided in the early stage of rigging.
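A minimal sketch of that idea, creating hem joints parented under the Hips joint; the joint names and positions are invented for illustration.

```python
import maya.cmds as cmds

# Add a small chain of coat-hem joints under the existing Hips joint so the
# long hem can be skinned and animated independently of the legs.
# 'Hips' and the positions are placeholders for the actual rig.
cmds.select('Hips')
cmds.joint(name='coatHem_01', position=(0, 95, -5))
cmds.joint(name='coatHem_02', position=(0, 75, -8))
cmds.joint(name='coatHem_03', position=(0, 55, -10))

# These joints are later added to the skinCluster so the hem weights
# can be painted onto them instead of onto the leg joints.
```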
This is the final joints:
Because the metal arm is rigged separately, the left-arm joints automatically generated by AdvancedSkeleton5 need to be deleted. This is the actual skinning part:
After modifying the skin weights (this process is really tedious), we can use copy skin weights and parent constraints to attach some of the character’s props.
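Something like the following, with placeholder node names (the real skin clusters can be found with cmds.ls(type='skinCluster')):

```python
import maya.cmds as cmds

# Copy the body's weights onto an equipment mesh that is skinned to the same joints.
cmds.copySkinWeights(ss='body_skinCluster', ds='chestPlate_skinCluster',
                     noMirror=True,
                     surfaceAssociation='closestPoint',
                     influenceAssociation='closestJoint')

# Rigid props (e.g. the knife) are simply parent-constrained to a hand joint.
cmds.parentConstraint('RightHand', 'knife_geo', maintainOffset=True)
```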
Face Rigging
The face of this model is also a tricky problem. First, the face is asymmetrical, although the mouth is still symmetrical, and the left eye should not be rigged.
After recognizing these problems, I found that the method I used for the female character’s expressions could not be reused directly; instead I had to enable Non-symmetrical in AdvancedSkeleton5. After that we get separate left and right columns, and we need to pick the vertices or edges of the left and right sides of the face respectively.
Although the left eye does not need to be rigged, AdvancedSkeleton5 still requires selecting some basic elements of the left side of the face. After selection, that area is covered by the metal left eye anyway, so it is not a problem.
Finally, the robotic arm and color materials are added.
This week Jay and Layne finished the storyboarding part, and they gave me a storyboard to make the layout.
Considering the time, I decided to allocate everyone’s clips while making the layout. I imported the simply rigged model into the scene and created six scene files, corresponding to the six 3D Computer Animation students in our group.
It took a long time to make, but I also learned a lot about the attributes of Maya cameras. The one I use the most is Focal Length, which can also be changed in the camera’s Attribute Editor. Increasing the Focal Length zooms the lens in and enlarges objects in the camera view; decreasing it zooms the lens out and makes objects smaller.
In addition, there are the Near Clip Plane and Far Clip Plane, which control the camera’s clipping range. Note that if the distance between the near and far clipping planes is much larger than needed to contain all the objects in the scene, the image quality of some objects may suffer.
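The same attributes can be set from Python; the camera name below is a placeholder.

```python
import maya.cmds as cmds

# Set focal length and clipping planes on the camera shape (camera name is a placeholder).
cam_shape = cmds.listRelatives('shotCam_010', shapes=True)[0]
cmds.setAttr(cam_shape + '.focalLength', 85)      # longer lens: narrower view, subject appears larger
cmds.setAttr(cam_shape + '.nearClipPlane', 1.0)   # keep the clipping range as tight as the scene allows
cmds.setAttr(cam_shape + '.farClipPlane', 5000.0)
```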
This week I mainly made the rig and skin of the girl model, which divides into body rigging, face rigging, and cloth rigging.
Regarding the body rigging, there are several important functions I want to record:
Mirror skin weights: when adjusting the weights of a symmetrical model, you only need to fix one side and mirror it across with this function, which is very fast (a minimal scripted sketch appears after this list).
Hammer skin weights: while editing weights, especially around joints, incorrect weights come up often. This function quickly rebuilds the weights of the selected vertices from their neighbours so the deformation inside the joint looks more reasonable.
Smooth skin weights: this smoothing function is easier to control than smoothing directly with the brush, and is better suited to handling deformations away from the joints.
Copy skin weights: because the model has a lot of equipment attached to the body, especially on the legs, weighting each piece separately would be troublesome; this function easily copies the weights of the body underneath onto the equipment.
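A minimal scripted sketch of the mirror case, assuming a placeholder skin cluster name and mirroring across the YZ plane:

```python
import maya.cmds as cmds

# Mirror the painted weights from one side of the symmetrical body to the other.
# 'girl_skinCluster' is a placeholder; 'YZ' mirrors across the X axis.
cmds.copySkinWeights(ss='girl_skinCluster', ds='girl_skinCluster',
                     mirrorMode='YZ',
                     surfaceAssociation='closestPoint',
                     influenceAssociation='closestJoint')
```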
Face Rigging:
AdvancedSkeleton5 provides a very convenient workflow for facial rigging. What needs attention here is the placement of each part. Because the topology of this model’s face is very clean, it saved a lot of time in face rigging.
Another issue is that the face of this model has some accessories, so the face weights need to be repainted.
Cloth Rigging:
Because the clothes are skinned to the body, they move with the body. If you then want to control the clothes individually, besides creating a separate control joint you also need to use the add influence function (a sketch using the options below follows after the list).
Polygon Smoothness
Specifies how closely the smooth skin points match the polygon influence object. The larger the value, the more rounded the deformation. Set a value between 0.0 and 50.0; the default is 0.0.
NURBS Samples
Specifies the number of samples used to evaluate the influence of a NURBS influence object. The more samples, the more closely the smooth skin follows the shape of the influence object. Set a value between 1 and 100; the default is 10.
Weight Locking
Specifies that you want to prevent indirect changes to the weights of the influence object, which usually happen through weight normalization during weight painting and editing (see Locking smooth skin weights). Maya holds the weight at the Default Weight. Off by default.
Default Weight
Specifies the weight to hold when Weight Locking is on. The default value is 0.000.
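Putting the cloth-control idea and these options together, here is a hedged sketch; the control joint and skin cluster names are placeholders.

```python
import maya.cmds as cmds

# Add a separate cloth-control joint as an extra influence on the existing skinCluster,
# using the options described above. Names are placeholders for the actual rig.
cmds.skinCluster('girl_skinCluster', edit=True,
                 addInfluence='skirtCtrl_jnt',
                 weight=0.0,              # Default Weight: start with no effect, then paint it in
                 lockWeights=True,        # Weight Locking: protect other weights from normalization
                 polySmoothness=0.0,
                 nurbsSamples=10)
```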
Direct binding will cause this problem:
After skinning, I noticed an unexpected effect: while painting weights, the model ended up looking like it has abdominal muscles when bending over. A nice surprise, haha.
This week I discussed the current storyboard with Jay and Layne; we are a bit behind on it, but we have clarified the basic issues. The main ones are the rhythm of the fight and the continuity between shots.
This is the first version of our storyboard. You can see that the content of many shots is not clear enough yet, and there are still some disagreements about the ending and some shot changes.
Environment:
I talked with Evanna about the production details and style of the buildings. We still hope to reference the style of Overwatch and make buildings with clear shapes and simple structures.
This is a building I made.
Put more plainly, the idea is to add geometric recesses or protrusions to the building and add some pipes, and to make sure every geometric turn has a rounded corner.
Bend: this node lets you define a capture region around (or partially intersecting) a model, and apply deformations to the geometry inside the capture region.
Volumetric light:
When an object blocking the light is lit by a source, the light scattering around it becomes visible; this is called volumetric light. Compared with the lighting in older games, lighting with this effect gives a stronger sense of space and can provide more realistic and distinctive visuals.
Volumetric light environment material:
Render:
Smoke&Flame&Explosion
Node:
Volume Rasterize Attributes: The Volume Rasterize Attributes SOP takes a cloud of points as input and creates VDBs for its float or vector attributes.
In Houdini, naming is very important. Several field names come up when making fire and explosion effects: Density, Temperature, Fuel, and so on, each controlling a different variable.
Distance field: creates a signed distance field (SDF). An SDF stores the distance to the surface in each voxel (negative if the voxel is inside). Once the distance field is calculated, the closest distance between any point in the field and the reference objects can be obtained.
Density field: creates a density field. Voxels in the band on the surface store 1 and voxels outside store 0. The size of the filled band is controlled with Interior Band Voxels. Turn on Fill Interior to create a solid VDB from an airtight surface instead of a narrow band.
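For reference, a rough sketch of creating and wiring these nodes with Houdini's Python API; the internal SOP type names below are what I believe these nodes are called, so verify them in the Tab menu before relying on this.

```python
import hou

# Sketch only: build a point cloud and turn it into VDB volumes.
geo = hou.node('/obj').createNode('geo', 'vdb_test')
sphere = geo.createNode('sphere')
scatter = geo.createNode('scatter')            # point source for the volumes
scatter.setInput(0, sphere)

vdb = geo.createNode('vdbfromparticles')       # distance / density fields from the points
vdb.setInput(0, scatter)

raster = geo.createNode('volumerasterizeattributes')   # VDBs from point float/vector attributes
raster.setInput(0, scatter)

geo.layoutChildren()
```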
Smokesolver
The smoke solver provides the basics of smoke simulation. If you just want to generate smoke, the smoke solver is useful since it is simpler and expert users can build their own extensions on it. However, the Pyro Solver is more flexible.
smokesolver_sparse
The Smoke Solver is able to perform the basic steps required for a smoke simulation. Pyro Solver (Sparse) extends the functionality of this solver by adding flame simulation along with extra shaping controls.
Pyrosolver_sparse
This node is an extension of the Smoke Solver (Sparse). It considers an extra simulation field (flame, which captures the presence of flames) and adds some extra shaping parameters to allow for more control over the emergent look.
Density: the smoke density.
Temperature: controls the speed of smoke diffusion; the hotter, the faster.
A linear workflow means that when the software calculates, all the data involved is linear. Almost all the image-processing software we use calculates linearly: 1+1=2, 2*2=4, 128+128=256. For this to be correct, all the materials, conditions, and lights that enter the calculation must be linear data.
sRGB (standard Red Green Blue) is a color standard developed jointly by Microsoft and imaging companies such as HP and Epson. It provides a standard way to define color, giving displays, printers, scanners, and other computer peripherals a common color language with application software.
Gamma correction
Gamma correction edits the gamma curve of an image to perform non-linear tone editing: it identifies the dark and light parts of the image signal and increases the ratio between them, thereby improving the image contrast. In computer graphics, the curve relating a screen’s output voltage to the corresponding brightness is called the gamma curve.
For a traditional CRT (Cathode Ray Tube) screen, the curve is usually a power function, Y = (X + e)^γ, where Y is the brightness, X is the output voltage, e is a compensation coefficient, and the exponent γ is the gamma value. Changing γ changes the CRT’s gamma curve. A typical correction gamma is 0.45, which makes the brightness of the CRT image appear linear. For displays such as CRT televisions, since the luminous response to the input signal is not linear but a power function, it must be corrected.
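A tiny numeric illustration of the idea, using the simple power law without the offset term (0.45 is roughly 1/2.2):

```python
# Gamma encoding/decoding with the plain power law Y = X ** gamma, values in the 0..1 range.
def encode(x, gamma=0.45):
    """Linear light -> display-encoded value."""
    return x ** gamma

def decode(y, gamma=2.2):
    """Display-encoded value -> linear light."""
    return y ** gamma

linear = 0.5                   # mid linear intensity
encoded = encode(linear)       # ~0.73: stored brighter than the linear value
print(round(encoded, 3), round(decode(encoded), 3))   # decoding recovers ~0.5
```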
ACES
The Academy Color Encoding System is an open color management and interchange system developed by the Academy of Motion Picture Arts and Sciences (AMPAS) and industry partners.
Color depth
In computer graphics, color depth is the number of bits used to store the color of one pixel in a bitmap or video frame buffer; it is also expressed as bits per pixel (bpp). The higher the color depth, the more colors are available.
Color depth is described as “n-bit color”: with a depth of n bits there are 2^n possible colors, and n bits are used to store each pixel.
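A quick check of the 2^n relationship:

```python
# Number of available colors for a few common bit depths.
for bits in (1, 8, 16, 24):
    print(bits, 'bit ->', 2 ** bits, 'colors')
# 8 bits per RGB channel = 24-bit color = 16,777,216 colors
```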
Light type
Mantra&Arnold
After this lesson, I think that for simple scenes Arnold renders much faster than Mantra, but the experience is not great in some ways, especially Arnold’s handling of textures: the problem of textures not displaying correctly kept recurring while I worked, requiring repeated re-renders. Of course, I also spent a lot of time on a UV problem; Houdini doesn’t create planar UVs automatically. Lesson learned: UVs matter for texturing.
Some people online have compared the two renderers on caustics, where Mantra is significantly better than Arnold. In general, the two renderers each have advantages and disadvantages, but I tend to use Arnold and hope that HtoA keeps being updated.