Fast-flying aerial robots promise rapid inspection under tight battery constraints, with direct applications in infrastructure inspection, terrain exploration, and search and rescue. However, high speeds cause severe motion blur in images and induce significant drift and noise in pose estimates, making dense 3D reconstruction with Neural Radiance Fields (NeRFs) particularly challenging, as NeRFs are highly sensitive to such degradations. In this work, we present a unified framework that leverages asynchronous event streams alongside motion-blurred frames to reconstruct high-fidelity radiance fields from agile drone flights. By embedding event-image fusion into NeRF optimization and jointly refining event-based visual-inertial odometry priors using both event and frame modalities, our method recovers sharp radiance fields and accurate camera trajectories without ground-truth supervision. We validate our approach on both synthetic data and real-world sequences captured by a fast-flying drone. Even on highly dynamic flights, where RGB frames are severely degraded by motion blur and pose priors become unreliable, our method reconstructs high-fidelity radiance fields and preserves fine scene details, delivering a performance gain of over 50% on real-world data compared to state-of-the-art methods.
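To make the event-image fusion described above concrete, the sketch below illustrates the two supervision signals such a framework could combine: a blur-consistency loss, where the observed blurry frame is modeled as the average of sharp renderings along the camera trajectory during exposure, and an event-consistency loss based on the standard event generation model, where accumulated event polarities predict log-intensity changes between two timestamps. This is a minimal illustrative sketch, not the authors' implementation; function names such as `render_rgb`, the pose parameterization, and the contrast threshold `C` are assumptions.

```python
# Minimal sketch of joint event/frame supervision for a NeRF (illustrative
# only; names, signatures, and the threshold value are assumptions, not the
# paper's actual API).
import torch

C = 0.25  # assumed event contrast threshold (sensor-dependent)


def blur_loss(render_rgb, poses_in_exposure, rays, blurry_pixels):
    """Blur model: the observed blurry pixels are approximated by averaging
    sharp renderings at poses sampled along the (jointly refined) camera
    trajectory within the exposure window."""
    sharp = torch.stack([render_rgb(rays, p) for p in poses_in_exposure])
    return ((sharp.mean(dim=0) - blurry_pixels) ** 2).mean()


def event_loss(render_rgb, rays, pose_t0, pose_t1, polarity_sum):
    """Event generation model: the accumulated polarity count between t0 and
    t1, scaled by the contrast threshold C, predicts the change in
    log-intensity of the rendered views."""
    log_i0 = torch.log(render_rgb(rays, pose_t0).clamp_min(1e-6))
    log_i1 = torch.log(render_rgb(rays, pose_t1).clamp_min(1e-6))
    return ((log_i1 - log_i0 - C * polarity_sum) ** 2).mean()
```

In such a scheme, both losses would be backpropagated not only into the radiance field but also into the pose variables, which is what allows the odometry priors to be refined without ground-truth supervision.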