Compositional scene reconstruction seeks to build object-centric representations, rather than holistic scene models, from real-world videos, which makes it naturally suited to simulation and interaction. Conventional compositional reconstruction approaches primarily emphasize visual appearance and generalize poorly to real-world scenarios. In this paper, we propose SimRecon, a framework that realizes a "Perception-Generation-Simulation" pipeline for cluttered scene reconstruction: it first conducts scene-level semantic reconstruction from video input, then performs single-object generation, and finally assembles the resulting assets in a simulator. However, naively chaining these three stages leads to visual infidelity of the generated assets and physical implausibility of the final scene, a problem that is particularly severe for complex scenes. We therefore propose two bridging modules between the three stages to address it. Specifically, for the Perception-to-Generation transition, which is critical for visual fidelity, we introduce Active Viewpoint Optimization, which actively searches 3D space for the optimal projected images used to condition single-object completion. For the Generation-to-Simulation transition, which is essential for physical plausibility, we propose a Scene Graph Synthesizer that guides scene construction from scratch in 3D simulators, mirroring the constructive principle by which real-world scenes are built. Extensive experiments on the ScanNet dataset demonstrate that our method outperforms previous state-of-the-art approaches.
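To make the three-stage structure and the two bridging modules concrete, the following is a minimal orchestration sketch. All function and class names here are hypothetical placeholders invented for illustration, not the paper's actual API; the bodies are stubs that only mark where each stage and bridge would sit.

```python
# Minimal sketch of the "Perception-Generation-Simulation" pipeline with the two
# bridging modules (Active Viewpoint Optimization, Scene Graph Synthesizer).
# All names are hypothetical placeholders, not SimRecon's real interface.

from dataclasses import dataclass
from typing import List


@dataclass
class ObjectAsset:
    object_id: str
    mesh_path: str   # completed single-object mesh
    pose: list       # 4x4 transform in scene coordinates


def perceive(video_path: str) -> List[dict]:
    """Stage 1 (Perception): scene-level semantic reconstruction from video,
    yielding per-object partial geometry and semantics (placeholder)."""
    raise NotImplementedError


def active_viewpoint_optimization(partial_object: dict) -> object:
    """Bridge 1 (Perception -> Generation): search viewpoints in 3D space for the
    projected image that best conditions single-object completion (placeholder)."""
    raise NotImplementedError


def generate_asset(partial_object: dict, condition_image: object) -> ObjectAsset:
    """Stage 2 (Generation): complete a single object from its partial observation
    and the optimized conditioning view (placeholder)."""
    raise NotImplementedError


def synthesize_scene_graph(assets: List[ObjectAsset]) -> dict:
    """Bridge 2 (Generation -> Simulation): infer relations that order how the
    scene is constructed from scratch in the simulator (placeholder)."""
    raise NotImplementedError


def simulate(scene_graph: dict, assets: List[ObjectAsset]) -> None:
    """Stage 3 (Simulation): assemble the assets in a 3D simulator, following the
    scene graph so the result remains physically plausible (placeholder)."""
    raise NotImplementedError


def sim_recon(video_path: str) -> None:
    """End-to-end pipeline: Perception -> (AVO) -> Generation -> (SGS) -> Simulation."""
    partial_objects = perceive(video_path)
    assets = []
    for obj in partial_objects:
        view = active_viewpoint_optimization(obj)  # Perception -> Generation bridge
        assets.append(generate_asset(obj, view))
    graph = synthesize_scene_graph(assets)         # Generation -> Simulation bridge
    simulate(graph, assets)
```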