Recent advances in 3D shape generation have achieved impressive results, but most existing methods rely on clean, unoccluded, and well-segmented inputs. Such conditions are rarely met in real-world scenarios. We present ShapeR, a novel approach for conditional 3D object shape generation from casually captured sequences. Given an image sequence, we leverage off-the-shelf visual-inertial SLAM, 3D detection algorithms, and vision-language models to extract, for each object, a set of sparse SLAM points, posed multi-view images, and machine-generated captions. A rectified flow transformer trained to effectively condition on these modalities then generates high-fidelity metric 3D shapes. To ensure robustness to the challenges of casually captured data, we employ a range of techniques, including on-the-fly compositional augmentations, a curriculum training scheme spanning object- and scene-level datasets, and strategies for handling background clutter. Additionally, we introduce a new evaluation benchmark comprising 178 in-the-wild objects across 7 real-world scenes with geometry annotations. Experiments show that ShapeR significantly outperforms existing approaches in this challenging setting, achieving a 2.7x improvement in Chamfer distance over the state of the art.
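The Chamfer distance used in the evaluation measures how closely a generated point set matches the ground-truth geometry by averaging nearest-neighbor distances in both directions. A minimal NumPy sketch of the standard symmetric formulation is shown below; the paper's exact protocol (squared vs. unsquared distances, point counts, normalization) is not specified here, so this is an illustrative assumption.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point sets a (N, 3) and b (M, 3).

    Uses squared Euclidean nearest-neighbor distances, averaged in both
    directions; this is one common convention, assumed for illustration.
    """
    # Pairwise squared distances via broadcasting: shape (N, M)
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    # Mean distance from each set to its nearest neighbor in the other set
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

For large point clouds, the O(NM) pairwise matrix is typically replaced with a KD-tree nearest-neighbor query, but the definition of the metric is the same.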