3D facial modeling with geometric wrinkles from images
Realistic 3D facial modeling and reconstruction are increasingly used in graphics, animation, and virtual reality applications. However, many existing face models cannot present rich detail while deforming: they lack wrinkles as the face takes on different expressions. Moreover, creating a realistic face model for an individual requires complex setup and sophisticated work by experienced artists. The goal of this dissertation is an end-to-end system that augments coarse-scale 3D face models and reconstructs realistic faces from in-the-wild images.

I propose an end-to-end method to automatically augment coarse-scale 3D faces with synthesized fine-scale geometric wrinkles. I define a wrinkle as a displacement value along the vertex normal direction and store it in a displacement map. The distribution of wrinkles has strong spatial structure, and deep convolutional neural networks (DCNNs) excel at learning spatial patterns in image-format data, so I label each wrinkle map with its identity and expression vectors. By formulating wrinkle generation as a supervised generation task, I implicitly model the continuous space of face wrinkles with a compact generative model, such that plausible face wrinkles can be generated through effective sampling and interpolation in that space. I then introduce a complete pipeline to transfer the synthesized wrinkles between faces with different shapes and topologies.

This method can augment an existing 3D face model with fine-scale details, but it does not by itself produce a realistic model of a specific human face. Properly modeling complex real-world lighting effects, including specular highlights, shadows, and occlusions, from a single in-the-wild face image remains a wide-open research challenge. To reconstruct a realistic face model from an unconstrained image, I propose a CNN-based framework that regresses the face model from a single image in the wild.
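The wrinkle representation above, a scalar displacement applied along each vertex normal, can be sketched as follows. This is a minimal illustration with a hypothetical `apply_wrinkle_displacement` helper; the dissertation's actual sampling from the displacement map onto mesh vertices is not shown here.

```python
import numpy as np

def apply_wrinkle_displacement(vertices, normals, displacement):
    """Offset each vertex along its unit normal by a scalar wrinkle value.

    vertices:     (N, 3) coarse-mesh vertex positions
    normals:      (N, 3) per-vertex normals (need not be unit length)
    displacement: (N,)   scalar values sampled from a displacement map
    """
    # Normalize the normals so displacements are in world units.
    unit = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    return vertices + displacement[:, None] * unit

# Toy example: move two vertices along +z by different signed amounts.
v = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
n = np.array([[0.0, 0.0, 2.0], [0.0, 0.0, 1.0]])  # deliberately unnormalized
d = np.array([0.1, -0.05])
augmented = apply_wrinkle_displacement(v, n, d)
```

A positive displacement raises the surface (a wrinkle ridge) and a negative one sinks it (a furrow), which is why a single scalar channel suffices for the map.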
I design novel hybrid loss functions to disentangle face shape identity, expression, pose, albedo, and lighting. The output face model includes dense 3D shape, head pose, expression, diffuse albedo, specular albedo, and the corresponding lighting conditions.
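A hybrid loss of this kind typically sums several weighted terms. The sketch below is an assumption about its general shape, not the dissertation's actual formulation: the term names, weights, and the `hybrid_loss` helper are all hypothetical, chosen only to show how appearance, sparse alignment, and regularization objectives combine into one scalar.

```python
import numpy as np

# Hypothetical weights; the real terms and weights are design choices
# specific to the dissertation and are not reproduced here.
W_PHOTO, W_LANDMARK, W_REG = 1.0, 0.5, 1e-3

def hybrid_loss(rendered, image, pred_lm, gt_lm, params):
    """Combine a photometric term (rendered face vs. input image), a 2D
    landmark alignment term, and a regularizer that keeps the regressed
    shape/expression/albedo coefficients plausible."""
    photo = np.mean((rendered - image) ** 2)                     # dense appearance error
    landmark = np.mean(np.sum((pred_lm - gt_lm) ** 2, axis=1))   # sparse keypoint error
    reg = np.sum(params ** 2)                                    # prior on coefficients
    return W_PHOTO * photo + W_LANDMARK * landmark + W_REG * reg
```

Weighting the terms lets the dense photometric signal supervise albedo and lighting while the landmarks anchor pose and shape, which is one common way such objectives disentangle the factors.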