Tags: Anthropometric points, Articulation and Moving parts
Abstract:
We introduce an efficient and practical integrated system for human body model personalization with articulation. We start with a 3D personalized model of the individual in a standard pose, obtained using a 3D scanner or a body-model reconstruction method based on canonical images of the individual. As the person moves, the model is updated to accommodate the new articulations captured in an articulation video. The personalized model is segmented into different parts using anthropometric control points on the boundary silhouette of the frontal projection of the 3D model. The control points are endpoints of the segments in 2D, and the segments are projections of corresponding regions of independently moving parts in 3D. These joint points can either be selected manually or predicted with a pre-trained point model such as an active shape model (ASM) or a convolutional neural network (CNN). The evolution of the model through the articulation process is captured in a video clip with N frames. The update consists of finding a set of 3D transformations, applied to the parts of the 3D model, so that the projections of the 3D model ‘match’ those observed in the video sequence at corresponding frames. This is done by minimizing, for each independently moving part, the error between the frontal-projection body region points and the target points from the image. Our articulation reconstruction method achieves sub-resolution recovery errors.
3D Articulated Body Model Using Anthropometric Control Points and an Articulation Video
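The per-part update described in the abstract is a least-squares alignment between projected control points and target image points. As an illustration only (the paper's exact formulation and optimizer are not given here), the sketch below fits a 2D rigid transform per part with the classic Kabsch/Procrustes closed-form solution; the part geometry and point names are hypothetical.

```python
import numpy as np

def fit_part_transform(model_pts, target_pts):
    """Least-squares 2D rigid transform (rotation R, translation t) mapping
    one part's projected control points onto its target image points.
    Closed-form Kabsch/Procrustes solution; a stand-in for the per-part
    error minimization described in the abstract."""
    mu_m = model_pts.mean(axis=0)
    mu_t = target_pts.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (model_pts - mu_m).T @ (target_pts - mu_t)
    U, _, Vt = np.linalg.svd(H)
    # Sign correction so R is a proper rotation (no reflection)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_t - R @ mu_m
    return R, t

# Hypothetical example: recover a known 30-degree rotation of one segment.
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
part = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 3.0], [0.0, 3.0]])
target = part @ R_true.T + np.array([0.5, -0.2])
R, t = fit_part_transform(part, target)
residual = np.abs(part @ R.T + t - target).max()
```

In the full system this fit would be computed independently for each body part at each of the N frames, with the 2D result lifted to a 3D transformation of the corresponding model region.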