Occupancy prediction aims to estimate the 3D spatial distribution of occupied regions along with their corresponding semantic labels. Existing vision-based methods perform well on daytime benchmarks but struggle in nighttime scenarios due to limited visibility and challenging lighting conditions. To address these challenges, we propose LIAR, a novel framework that learns illumination-affined representations. LIAR first introduces Selective Low-light Image Enhancement (SLLIE), which leverages illumination priors from daytime scenes to adaptively determine whether a nighttime image is genuinely dark or sufficiently well-lit, enabling more targeted global enhancement. Building on the illumination maps generated by SLLIE, LIAR further incorporates two illumination-aware components, 2D Illumination-guided Sampling (2D-IGS) and 3D Illumination-driven Projection (3D-IDP), to tackle local underexposure and overexposure, respectively. Specifically, 2D-IGS modulates feature sampling positions according to the illumination maps, assigning larger offsets to darker regions and smaller ones to brighter regions, thereby alleviating feature degradation in underexposed areas. Subsequently, 3D-IDP enhances semantic understanding in overexposed regions by constructing illumination intensity fields and supplying refined residual queries to the BEV context refinement process. Extensive experiments on both real and synthetic datasets demonstrate the superior performance of LIAR under challenging nighttime scenarios. The source code and pretrained models are available [here](https://github.com/yanzq95/LIAR).
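To make the SLLIE gating idea concrete, below is a minimal sketch, not the released implementation: it estimates a global luminance statistic for each frame, compares it against an assumed daytime illumination prior, and applies a low-light enhancer only to frames judged genuinely dark. The `enhancer` callable, the threshold values `day_prior_mean` and `margin`, and all names here are illustrative assumptions.

```python
import torch

def selective_enhance(img, enhancer, day_prior_mean=0.45, margin=0.1):
    """Hypothetical sketch of the SLLIE gating idea: globally enhance a
    nighttime frame only when its luminance falls well below an assumed
    daytime illumination prior; otherwise pass it through unchanged.

    img:       (B, 3, H, W) RGB frames in [0, 1]
    enhancer:  any low-light image enhancement network (assumed callable)
    """
    # Rough per-image illumination estimate: mean luma of the RGB frame.
    luma = (0.299 * img[:, 0] + 0.587 * img[:, 1] + 0.114 * img[:, 2]).mean(dim=(1, 2))
    # Gate: frames well below the daytime prior are treated as genuinely dark.
    is_dark = luma < (day_prior_mean - margin)        # (B,) boolean
    gate = is_dark.float().view(-1, 1, 1, 1)
    # Enhance only the dark frames; well-lit frames stay untouched.
    return gate * enhancer(img) + (1 - gate) * img
```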
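The 2D-IGS mechanism of scaling sampling offsets inversely with illumination can likewise be sketched as a small PyTorch module. This is written from the description above, not from the paper's actual code: the offset head, the number of sampling points, and the `max_offset` bound are assumptions; only the inverse scaling of offsets by the illumination map reflects the stated mechanism.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class IlluminationGuidedSampling2D(nn.Module):
    """Hypothetical sketch of 2D-IGS: deformable feature sampling whose
    offset magnitude is modulated by a per-pixel illumination map,
    so darker regions receive larger offsets."""

    def __init__(self, channels: int, num_points: int = 4, max_offset: float = 4.0):
        super().__init__()
        self.num_points = num_points
        self.max_offset = max_offset
        # Predict a raw (dx, dy) offset per sampling point from the features.
        self.offset_head = nn.Conv2d(channels, 2 * num_points, kernel_size=3, padding=1)

    def forward(self, feat: torch.Tensor, illum: torch.Tensor) -> torch.Tensor:
        # feat:  (B, C, H, W) image features
        # illum: (B, 1, H, W) illumination map in [0, 1] (1 = well lit)
        B, C, H, W = feat.shape
        raw = torch.tanh(self.offset_head(feat))     # (B, 2K, H, W), in [-1, 1]
        # Darker regions (low illumination) get larger sampling offsets.
        scale = self.max_offset * (1.0 - illum)      # (B, 1, H, W)
        offsets = raw * scale                        # broadcast over the 2K channels

        # Base sampling grid in pixel coordinates.
        ys, xs = torch.meshgrid(
            torch.arange(H, device=feat.device, dtype=feat.dtype),
            torch.arange(W, device=feat.device, dtype=feat.dtype),
            indexing="ij",
        )
        out = 0.0
        for k in range(self.num_points):
            dx = offsets[:, 2 * k]                   # (B, H, W)
            dy = offsets[:, 2 * k + 1]
            # Normalize displaced coordinates to [-1, 1] for grid_sample.
            gx = (xs.unsqueeze(0) + dx) / (W - 1) * 2 - 1
            gy = (ys.unsqueeze(0) + dy) / (H - 1) * 2 - 1
            grid = torch.stack((gx, gy), dim=-1)     # (B, H, W, 2)
            out = out + F.grid_sample(feat, grid, align_corners=True)
        # Average the features gathered at the illumination-modulated positions.
        return out / self.num_points
```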