Accurate perception is critical for vehicle safety, with LiDAR as a key enabler in autonomous driving. To ensure robust performance across environments, sensor types, and weather conditions without costly re-annotation, domain generalization in LiDAR-based 3D semantic segmentation is essential. However, LiDAR annotations are often noisy due to sensor imperfections, occlusions, and human error. Such noise degrades segmentation accuracy and is further amplified under domain shifts, threatening system reliability. While noisy-label learning is well studied for images, its extension to 3D LiDAR segmentation under domain generalization remains largely unexplored, as the sparse and irregular structure of point clouds limits the direct use of 2D methods. To address this gap, we introduce the novel task of Domain Generalization for LiDAR Semantic Segmentation under Noisy Labels (DGLSS-NL) and establish the first benchmark by adapting three representative noisy-label learning strategies from image classification to 3D segmentation. However, we find that existing noisy-label learning approaches adapt poorly to LiDAR data. We therefore propose DuNe, a dual-view framework with strong and weak branches that enforces feature-level consistency and applies a cross-entropy loss based on confidence-aware filtering of predictions. Our approach achieves state-of-the-art performance: 56.86% mIoU on SemanticKITTI, 42.28% on nuScenes, and 52.58% on SemanticPOSS under 10% symmetric label noise, with an overall Arithmetic Mean (AM) of 49.57% and Harmonic Mean (HM) of 48.50%, demonstrating robust domain generalization on the DGLSS-NL task. The code is available on our project page.
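To make the dual-view idea concrete, the following is a minimal, hypothetical PyTorch sketch of the kind of training objective the abstract describes: a strong and a weak view of the same point cloud each produce per-point logits and features, the cross-entropy term is applied only to points the weak branch predicts confidently, and a feature-level consistency term ties the two branches together. The function name, threshold, and loss weighting are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of a dual-branch objective in the spirit of DuNe.
# All names and hyperparameters here are assumptions for illustration only.
import torch
import torch.nn.functional as F

def dual_view_loss(logits_strong, logits_weak, feat_strong, feat_weak,
                   labels, conf_thresh=0.9, lam=0.1):
    """logits_*: (N, C) per-point class scores; feat_*: (N, D) per-point
    features; labels: (N,) possibly noisy annotations."""
    # Confidence-aware filtering: keep only points the weak branch is sure about.
    probs_weak = logits_weak.softmax(dim=1)
    conf, _ = probs_weak.max(dim=1)
    mask = conf >= conf_thresh
    if mask.any():
        # Supervise the strong branch only on the filtered (trusted) points.
        ce = F.cross_entropy(logits_strong[mask], labels[mask])
    else:
        ce = logits_strong.new_zeros(())
    # Feature-level consistency between the two views (cosine distance).
    consistency = 1.0 - F.cosine_similarity(feat_strong, feat_weak, dim=1).mean()
    return ce + lam * consistency
```

In this sketch, filtering by the weak branch's confidence serves to suppress gradients from likely mislabeled points, while the consistency term encourages the two views to agree at the feature level regardless of label quality.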