Modern 3D generation methods can rapidly create shapes from sparse or single views, but their outputs often lack geometric detail due to computational constraints. We present DetailGen3D, a generative approach specifically designed to enhance these generated 3D shapes. Our key insight is to model the coarse-to-fine transformation directly through data-dependent flows in latent space, avoiding the computational overhead of large-scale 3D generative models. We introduce a token matching strategy that ensures accurate spatial correspondence during refinement, enabling local detail synthesis while preserving global structure. By carefully designing our training data to match the characteristics of synthesized coarse shapes, our method can effectively enhance shapes produced by a range of 3D generation and reconstruction approaches, from single-view to sparse multi-view inputs. Extensive experiments demonstrate that DetailGen3D achieves high-fidelity geometric detail synthesis while maintaining training efficiency.
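The two ideas in the abstract can be illustrated with a minimal sketch: pairing coarse and fine latent tokens by spatial correspondence (a stand-in for the token matching strategy), then defining a straight-line path between matched latents whose velocity would serve as the regression target for a data-dependent flow. All function names (`match_tokens`, `flow_interpolate`) and the nearest-neighbor matching rule are hypothetical illustrations, not the paper's actual implementation, which uses a learned velocity network in latent space.

```python
import numpy as np

def match_tokens(pos_coarse, pos_fine):
    # Hypothetical token matching: pair each coarse token with the
    # spatially nearest fine token so refinement edits the right region.
    dists = np.linalg.norm(pos_coarse[:, None] - pos_fine[None, :], axis=-1)
    return dists.argmin(axis=1)

def flow_interpolate(z0, z1, t):
    # Straight-line path from coarse latent z0 to fine latent z1;
    # a flow model would regress the constant velocity (z1 - z0) at time t.
    return (1.0 - t) * z0 + t * z1

rng = np.random.default_rng(0)
n_tokens, latent_dim = 8, 16
pos_c = rng.normal(size=(n_tokens, 3))          # coarse token positions
pos_f = pos_c + 0.01 * rng.normal(size=(n_tokens, 3))  # slightly perturbed
idx = match_tokens(pos_c, pos_f)                # coarse -> fine correspondence

z_coarse = rng.normal(size=(n_tokens, latent_dim))
z_fine = z_coarse + rng.normal(size=(n_tokens, latent_dim))
z_mid = flow_interpolate(z_coarse, z_fine[idx], t=0.5)
velocity_target = z_fine[idx] - z_coarse        # flow-matching regression target
```

The point of the matching step is that without an explicit correspondence the flow could pair a coarse token with a fine token from a different part of the shape, destroying global structure while adding detail.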