Recent advances in diffusion models have significantly improved 3D generation, enabling assets generated from a single image to be used in embodied AI simulations. However, the one-to-many nature of the image-to-3D problem limits their use, since content and quality are inconsistent across views. Previous methods optimize a 3D model by sampling views from a view-conditioned diffusion prior, but diffusion models cannot guarantee view consistency. Instead, we present ConsistentDreamer: we first generate a fixed set of multi-view prior images, then sample random views between them with another diffusion model through a score distillation sampling (SDS) loss. This limits the discrepancies between the views guided by the SDS loss and ensures a consistent rough shape. In each iteration, we also use the generated multi-view prior images for fine-detail reconstruction. To balance the rough-shape and fine-detail optimizations, we introduce dynamic task-dependent weights based on homoscedastic uncertainty, updated automatically in each iteration. Additionally, we employ opacity, depth-distortion, and normal-alignment losses to refine the surface for mesh extraction. Our method achieves better view consistency and visual quality than the state of the art.
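For reference, the SDS loss named above is typically written in its standard DreamFusion-style gradient form; the notation below follows that common formulation and is not necessarily the paper's exact one:

$$\nabla_\theta \mathcal{L}_{\mathrm{SDS}} = \mathbb{E}_{t,\epsilon}\!\left[\, w(t)\,\big(\hat\epsilon_\phi(\mathbf{x}_t;\, y,\, t) - \epsilon\big)\,\frac{\partial \mathbf{x}}{\partial \theta} \right]$$

where $\theta$ are the 3D model parameters, $\mathbf{x}$ is a rendered view, $\hat\epsilon_\phi$ is the diffusion model's noise prediction under conditioning $y$, $\epsilon$ is the injected noise at timestep $t$, and $w(t)$ is a timestep weighting.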
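The dynamic task-dependent weights based on homoscedastic uncertainty can be sketched with the standard formulation of Kendall et al.; the function names and the explicit gradient step below are illustrative assumptions, not the paper's actual implementation, and the two losses stand in for the rough-shape (SDS) and fine-detail terms:

```python
import math

def uncertainty_weighted_loss(losses, log_vars):
    """Combine per-task losses L_i with learnable s_i = log(sigma_i^2):
    total = sum_i exp(-s_i) * L_i + s_i  (Kendall-style weighting)."""
    return sum(math.exp(-s) * L + s for L, s in zip(losses, log_vars))

def update_log_vars(losses, log_vars, lr=0.1):
    """One gradient-descent step on the log-variances (hypothetical helper).
    d(total)/d(s_i) = 1 - exp(-s_i) * L_i, so a task with a large loss has
    its variance raised (weight lowered) automatically each iteration."""
    return [s - lr * (1.0 - math.exp(-s) * L) for L, s in zip(losses, log_vars)]
```

In a training loop, the weighted total would be backpropagated through the 3D model while `update_log_vars` (or an optimizer over the log-variances) rebalances the rough-shape and fine-detail terms every iteration without manual tuning.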