Robust 3D geometry estimation from videos is critical for applications such as autonomous navigation, SLAM, and 3D scene reconstruction. Recent methods like DUSt3R demonstrate that regressing dense pointmaps from image pairs enables accurate and efficient pose-free reconstruction. However, existing RGB-only approaches struggle under real-world conditions involving dynamic objects and extreme illumination, due to the inherent limitations of conventional cameras. In this paper, we propose EAG3R, a novel geometry estimation framework that augments pointmap-based reconstruction with asynchronous event streams. Built upon the MonST3R backbone, EAG3R introduces two key innovations: (1) a retinex-inspired image enhancement module and a lightweight event adapter with an SNR-aware fusion mechanism that adaptively combines RGB and event features based on local reliability; and (2) a novel event-based photometric consistency loss that reinforces spatiotemporal coherence during global optimization. Our method enables robust geometry estimation in challenging dynamic low-light scenes without requiring retraining on night-time data. Extensive experiments demonstrate that EAG3R significantly outperforms state-of-the-art RGB-only baselines on monocular depth estimation, camera pose tracking, and dynamic reconstruction.
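The abstract does not specify how the SNR-aware fusion weighs the two modalities; a minimal sketch of one plausible reading (a per-pixel sigmoid gate computed from a local SNR map, so high-SNR regions favor RGB features and low-SNR regions favor event features) is shown below. All names and the gating form here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def snr_aware_fusion(rgb_feat, event_feat, snr_map):
    """Fuse RGB and event features with a per-pixel SNR-derived gate.

    rgb_feat, event_feat: (H, W, C) feature maps.
    snr_map: (H, W) local signal-to-noise estimate (hypothetical input).
    Returns a (H, W, C) fused feature map.
    """
    # Sigmoid gate: high SNR -> weight near 1 (trust RGB),
    # low SNR -> weight near 0 (fall back to event features).
    w = 1.0 / (1.0 + np.exp(-snr_map))          # (H, W)
    w = w[..., None]                             # broadcast over channels
    return w * rgb_feat + (1.0 - w) * event_feat
```

In this reading, the gate degrades gracefully: in well-lit regions the output approaches the RGB features, while in dark or saturated regions it approaches the event features.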