Recent 3D large reconstruction models (LRMs) can generate high-quality 3D content in under a second by integrating multi-view diffusion models with scalable multi-view reconstructors. Current works further leverage 3D Gaussian Splatting as the 3D representation for improved visual quality and rendering efficiency. However, we observe that existing Gaussian reconstruction models often suffer from multi-view inconsistency and blurred textures. We attribute this to the compromise of multi-view information propagation in favor of adopting powerful yet computationally intensive architectures (\eg, Transformers). To address this issue, we introduce MVGamba, a general and lightweight Gaussian reconstruction model featuring a multi-view Gaussian reconstructor based on the RNN-like State Space Model (SSM). Our Gaussian reconstructor propagates causal context carrying multi-view information for cross-view self-refinement while generating a long sequence of Gaussians for fine-detail modeling, all with linear complexity. With off-the-shelf multi-view diffusion models integrated, MVGamba unifies 3D generation tasks from a single image, sparse images, or text prompts. Extensive experiments demonstrate that MVGamba outperforms state-of-the-art baselines in all 3D content generation scenarios with only approximately $0.1\times$ the model size.
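To make the architecture described above concrete, the following is a minimal sketch of how an RNN-like SSM reconstructor could map multi-view image tokens to a long causal sequence of 3D Gaussian parameters in linear time. This is not the authors' implementation: the class names (\eg, \texttt{MultiViewGaussianReconstructor}), layer sizes, and the toy diagonal linear recurrence standing in for a full Mamba-style SSM block are all illustrative assumptions.

\begin{verbatim}
# Hypothetical sketch (not the MVGamba code): an SSM-based multi-view
# Gaussian reconstructor. A toy diagonal linear recurrence stands in for
# a full Mamba-style selective SSM layer.
import torch
import torch.nn as nn

class ToySSMBlock(nn.Module):
    """Causal linear recurrence h_t = a * h_{t-1} + B x_t; y_t = C h_t.

    Each token's output depends on all earlier tokens, so tokens from
    later views can refine themselves against causal context propagated
    from earlier views, at O(L) cost in sequence length L.
    """
    def __init__(self, dim: int, state: int = 16):
        super().__init__()
        self.in_proj = nn.Linear(dim, state)
        self.out_proj = nn.Linear(state, dim)
        # Learnable decay in (0, 1) keeps the recurrence stable.
        self.log_a = nn.Parameter(torch.zeros(state))

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (B, L, dim)
        a = torch.sigmoid(self.log_a)          # (state,)
        u = self.in_proj(x)                    # (B, L, state)
        h = torch.zeros_like(u[:, 0])
        ys = []
        for t in range(u.shape[1]):            # sequential scan, O(L)
            h = a * h + u[:, t]
            ys.append(h)
        return x + self.out_proj(torch.stack(ys, dim=1))  # residual

class MultiViewGaussianReconstructor(nn.Module):
    """Tokens from V views -> one long causal sequence -> Gaussians.

    Per-Gaussian output: 3 (xyz) + 3 (scale) + 4 (rotation quaternion)
    + 1 (opacity) + 3 (RGB) = 14 channels.
    """
    def __init__(self, dim: int = 256, depth: int = 4):
        super().__init__()
        self.blocks = nn.Sequential(
            *[ToySSMBlock(dim) for _ in range(depth)])
        self.head = nn.Linear(dim, 14)

    def forward(self, view_tokens: torch.Tensor) -> torch.Tensor:
        # view_tokens: (B, V, N, dim) -- N patch tokens per view.
        B, V, N, D = view_tokens.shape
        seq = view_tokens.reshape(B, V * N, D)  # concat views causally
        return self.head(self.blocks(seq))      # (B, V*N, 14)

if __name__ == "__main__":
    model = MultiViewGaussianReconstructor()
    tokens = torch.randn(2, 4, 1024, 256)  # 2 scenes, 4 views of tokens
    print(model(tokens).shape)             # torch.Size([2, 4096, 14])
\end{verbatim}

The key design point the sketch illustrates is that, unlike quadratic-cost self-attention, the sequential scan lets the sequence of Gaussians grow long enough for fine-detail modeling while the hidden state carries multi-view context across views at linear cost.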