This work investigates the problem of 3D modeling from a single free-hand sketch, one of the most natural ways humans express ideas. Although sketch-based 3D modeling can make the 3D modeling process drastically more accessible, the sparsity and ambiguity of sketches pose significant challenges for creating high-fidelity 3D models that reflect the creators' intent. In this work, we propose a view- and structure-aware deep learning approach, \textit{Deep3DSketch}, which tackles the ambiguity and fully exploits the sparse information of sketches, with an emphasis on structural information. Specifically, we introduce random pose sampling on both 3D shapes and 2D silhouettes, and an adversarial training scheme with an effective progressive discriminator to facilitate learning of shape structures. Extensive experiments demonstrate the effectiveness of our approach, which outperforms existing methods with state-of-the-art (SOTA) performance on both synthetic and real datasets.