Object Oriented 3D Modeling from a Single View Sketch

Date

2021-08

Abstract

A variety of research has been conducted on sketch-based modeling over the past two decades. To bridge the gap between 2D sketches and 3D models, previous work relied on multi-view 2D input or on contours with semantic meaning to infer the depth information needed to create 3D models. However, those techniques required carefully aligned input or complex interaction. The goal of this dissertation is to reduce the input complexity of sketch-based modeling and to improve the modeling results, including surface details. To this end, two single-view sketch-based modeling frameworks are introduced.

First, a framework is proposed to build a novel space that jointly embeds 2D occluding contours and 3D shapes via a variational autoencoder (VAE) and a volumetric autoencoder. Using a dataset of 3D shapes, occluding contours are extracted via projections from random views and used to train the VAE. The resulting continuous embedding space, in which each point is a latent vector representing an occluding contour, can then be used to measure the similarity between occluding contours. The volumetric autoencoder is subsequently trained to map 3D shapes onto the embedding space through a supervised learning process and to decode the merged latent vectors of three occluding contours (from three different views) of a 3D shape into its 3D voxel representation. To ensure the extensibility of the embedding space and the usefulness of the output voxels, the modeling ability for categories absent from the training dataset of 3D shapes is enhanced by adding contours extracted from web images to the embedding space, and both symmetry-based and assembly-based refinements are employed to improve the quality of the 3D modeling results.

Second, to better generate surface details, a novel object-oriented approach is proposed to model 3D objects with geometric details from a single-view sketch input. Specifically, a novel differentiable sketch renderer is introduced to learn the geometric relation between normal maps and 2D strokes. On top of this renderer, an end-to-end framework is presented to generate 3D models with plausible geometric details from a single-view sketch, and two novel losses, based on silhouette-based confidence maps and regression similarities, are introduced for better convergence. The framework back-propagates gradients between the rendered sketch and the input sketch, allowing the learnable weights to be updated to enhance the geometric details of the predicted 3D object.
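
The following is a minimal PyTorch-style sketch of the joint-embedding idea in the first framework: a VAE encodes 2D occluding-contour images into a shared latent space, and a volumetric decoder maps the merged latent codes of three views to a voxel grid. All names (ContourVAE, VolumetricDecoder, LATENT_DIM), layer sizes, and the merge-by-concatenation choice are illustrative assumptions, not the dissertation's actual architecture.

import torch
import torch.nn as nn

LATENT_DIM = 128  # assumed size of the joint embedding space

class ContourVAE(nn.Module):
    """Encodes a 1x64x64 occluding-contour image into a latent vector."""
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(64 * 16 * 16, LATENT_DIM)
        self.fc_logvar = nn.Linear(64 * 16 * 16, LATENT_DIM)

    def forward(self, x):
        h = self.conv(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return z, mu, logvar

class VolumetricDecoder(nn.Module):
    """Decodes the merged latents of three views into a 32^3 voxel grid."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(3 * LATENT_DIM, 256 * 4 * 4 * 4)
        self.deconv = nn.Sequential(
            nn.ConvTranspose3d(256, 64, 4, stride=2, padding=1),  # 4 -> 8
            nn.ReLU(),
            nn.ConvTranspose3d(64, 16, 4, stride=2, padding=1),   # 8 -> 16
            nn.ReLU(),
            nn.ConvTranspose3d(16, 1, 4, stride=2, padding=1),    # 16 -> 32
            nn.Sigmoid(),  # per-voxel occupancy probability
        )

    def forward(self, z_views):
        # z_views: list of three latent vectors, one per projected view.
        h = self.fc(torch.cat(z_views, dim=1)).view(-1, 256, 4, 4, 4)
        return self.deconv(h)

vae, dec = ContourVAE(), VolumetricDecoder()
contours = [torch.rand(1, 1, 64, 64) for _ in range(3)]  # three views
zs = [vae(c)[0] for c in contours]
voxels = dec(zs)  # shape: (1, 1, 32, 32, 32)

Because every contour, regardless of its source (projected 3D shape or web image), maps to a point in the same latent space, contour similarity reduces to a distance between latent vectors, which is what makes the embedding extensible to new categories.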
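For the second framework, the sketch below illustrates only the core property of a differentiable sketch renderer: here strokes are approximated with fixed Sobel-edge filters over a predicted normal map, so a sketch-space loss can back-propagate into the network that produced the normals. The dissertation's renderer learns the normal-map-to-stroke relation rather than using fixed filters, and its losses use silhouette-based confidence maps rather than plain MSE; the render_sketch function and all tensors here are hypothetical stand-ins.

import torch
import torch.nn.functional as F

def render_sketch(normal_map):
    """normal_map: (B, 3, H, W) tensor of surface normals in [-1, 1]."""
    sobel_x = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]])
    sobel_y = sobel_x.t()
    kx = sobel_x.view(1, 1, 3, 3).repeat(3, 1, 1, 1)
    ky = sobel_y.view(1, 1, 3, 3).repeat(3, 1, 1, 1)
    # Depthwise convolution: per-channel gradient of the normal map;
    # strong normal discontinuities correspond to stroke locations.
    gx = F.conv2d(normal_map, kx, padding=1, groups=3)
    gy = F.conv2d(normal_map, ky, padding=1, groups=3)
    edges = torch.sqrt(gx.pow(2) + gy.pow(2) + 1e-8).sum(dim=1, keepdim=True)
    return torch.tanh(edges)  # soft stroke intensity in [0, 1)

# Sketch-space supervision: compare the rendered sketch of the predicted
# normal map against the input sketch; gradients flow back through the
# renderer into whatever network produced pred_normals.
pred_normals = (torch.rand(1, 3, 64, 64) * 2 - 1).requires_grad_()
input_sketch = torch.rand(1, 1, 64, 64)
loss = F.mse_loss(render_sketch(pred_normals), input_sketch)
loss.backward()
print(pred_normals.grad.abs().mean())  # nonzero: the renderer passes gradients

The essential design point is that every operation between the predicted normals and the rendered strokes is differentiable, so supervising in sketch space directly updates the weights that control the predicted object's geometric details.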

Keywords

Sketch-based Modeling, Variational Autoencoder, Differentiable Renderer

Citation

Portions of this document appear in: Aobo Jin, Qiang Fu, and Zhigang Deng. 2020. Contour-based 3D Modeling through Joint Embedding of Shapes and Contours. In Symposium on Interactive 3D Graphics and Games (I3D '20). Association for Computing Machinery, New York, NY, USA, Article 9, 1–10. DOI:https://doi.org/10.1145/3384382.3384518