This paper proposes ShapeShifter, a new 3D generative model that learns to synthesize shape variations from a single reference model. While generative methods for 3D objects have recently attracted much attention, current techniques often lack geometric detail and/or require long training times and substantial compute resources. Our approach remedies these issues by combining sparse voxel grids with point, normal, and color sampling within a multiscale neural architecture that can be trained efficiently and in parallel. We show that the resulting variations better capture the fine details of the original input and can handle more general types of surfaces than previous SDF-based methods. Moreover, we offer interactive generation of 3D shape variants, allowing for more human control in the design loop when needed.