This week I learned how to track an object in 3DEqualizer. The task was to match an Iron Man helmet to the head of a man looking at his cellphone on the street. I was already familiar with the basics of 3DE from last week's study, and the first steps are the same as before: track the camera trajectory by selecting some clear points in the foreground, middle ground, and background.
The difference is that this camera is fixed, so you need to change the positional camera constraint in the camera settings and select the fixed camera position constraint. Only then will the solver produce the correct result.
Because the camera is in a fixed position and there is no obvious rotation, I did not need many tracking points to get a good result.
The next step is to track the object. Since the goal is to match the Iron Man helmet model to the movement of the man's head, the tracking points should all be on his face. Here you need to create a new point group and choose clear points on the man's head; the principle for selecting them is basically the same as for the camera points.
After that, I imported the helmet model into 3DE. Then you need to match the tracked points to the model one by one, using the Extract Vertex function. This is the result after matching.
Next, adjust the position and rotation of the model so that the helmet fits over the man's head. Note that the scale must not be touched: changing the size of the model would throw off the next step, so it should stay fixed for the entire process.
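This position-and-rotation-only alignment is essentially a rigid registration problem. The sketch below (my own illustration, not 3DE's actual solver) uses the Kabsch algorithm to find the rotation and translation that map model vertices onto tracked points without any scaling, which is exactly why the scale must stay fixed:

```python
# Minimal rigid alignment sketch (rotation + translation, NO scale),
# assuming numpy is available. Illustrative only; 3DE does this internally.
import numpy as np

def rigid_align(model_pts, tracked_pts):
    """Kabsch algorithm: find R, t such that tracked ~= model @ R.T + t."""
    P = np.asarray(model_pts, dtype=float)
    Q = np.asarray(tracked_pts, dtype=float)
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)        # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Hypothetical example: four "helmet vertices" rotated 90 degrees
# about Z and shifted, standing in for the tracked head points.
model = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], dtype=float)
t_true = np.array([1.0, 2.0, 3.0])
tracked = model @ R_true.T + t_true

R, t = rigid_align(model, tracked)
aligned = model @ R.T + t
print(np.allclose(aligned, tracked))  # True: a perfect fit with no scaling
```

If the model's scale were wrong, no rotation and translation alone could make the vertices land on the tracked points, which is why resizing the helmet mid-pipeline breaks the match.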
The scene can then be exported to Maya and Nuke. In Maya, the original OBJ model is replaced with the FBX model so it can be animated.
Finally, import the rendered sequences into Nuke to get this.
Afterwards, some incorrect parts still need to be cleaned up with roto. For now, I am mainly showing the final tracking result.