Nighttime camera-based depth estimation is a highly challenging task, especially for autonomous driving, where accurate depth perception is essential for safe navigation. We aim to improve the reliability of perception systems at night, where models trained on daytime data often fail in the absence of precise but costly LiDAR sensors. In this work, we introduce Light Enhanced Depth (LED), a novel, cost-effective approach that significantly improves depth estimation in low-light environments by harnessing a pattern projected by the high-definition headlights available in modern vehicles. LED yields significant performance gains across multiple depth-estimation architectures (encoder-decoder, AdaBins, DepthFormer) on both synthetic and real datasets. Furthermore, improved performance beyond the illuminated areas reveals a holistic enhancement in scene understanding. Finally, we release the Nighttime Synthetic Drive Dataset, a new photo-realistic synthetic nighttime dataset comprising 49,990 comprehensively annotated images.