Underwater imaging is fundamentally challenging due to wavelength-dependent light attenuation, strong scattering from suspended particles, turbidity-induced blur, and non-uniform illumination. These effects degrade standard frame-based cameras and make ground-truth motion nearly impossible to obtain. Event cameras, in contrast, offer microsecond temporal resolution and high dynamic range. Nonetheless, progress on event-based perception in underwater environments has been limited by the lack of datasets that pair realistic underwater optics with accurate optical flow. To address this problem, we introduce the first synthetic underwater benchmark dataset for event-based optical flow, derived from physically-based ray-traced RGBD sequences. By applying a modern video-to-event pipeline to the rendered underwater videos, we produce realistic event streams with dense ground-truth flow, depth, and camera motion. Moreover, we benchmark state-of-the-art learning-based and model-based optical flow methods to understand how underwater light transport affects event formation and motion estimation accuracy. Our dataset establishes a new baseline for the development and evaluation of underwater event-based perception algorithms. The source code and dataset are publicly available at https://robotic-vision-lab.github.io/ueof.
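As a rough illustration of what a video-to-event pipeline does (this is a minimal sketch of the standard log-intensity contrast-threshold event model, not the specific simulator used in this work; the function name, threshold value, and linear timestamp interpolation are assumptions for illustration):

```python
import numpy as np

def frames_to_events(frame0, frame1, t0, t1, threshold=0.2, eps=1e-6):
    """Emit (t, x, y, polarity) events wherever the log-intensity change
    between two consecutive frames crosses the contrast threshold.

    Simplified idealized model: no noise, no refractory period, and
    event timestamps are linearly interpolated between the two frames.
    """
    log0 = np.log(np.asarray(frame0, dtype=np.float64) + eps)
    log1 = np.log(np.asarray(frame1, dtype=np.float64) + eps)
    diff = log1 - log0
    # Number of full threshold crossings per pixel; sign gives polarity.
    n_cross = np.floor(np.abs(diff) / threshold).astype(int)
    events = []
    for y, x in zip(*np.nonzero(n_cross)):
        pol = 1 if diff[y, x] > 0 else -1
        for k in range(1, n_cross[y, x] + 1):
            # Timestamp of the k-th crossing, assuming linear brightness change.
            t = t0 + (t1 - t0) * (k * threshold) / abs(diff[y, x])
            events.append((t, int(x), int(y), pol))
    events.sort(key=lambda e: e[0])
    return events
```

A pixel whose brightness doubles between frames produces several positive-polarity events spread over the inter-frame interval, which is why event streams retain sub-frame temporal detail from a rendered video.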