Offboard perception aims to automatically generate high-quality 3D labels for autonomous driving (AD) scenes. Existing offboard methods focus on 3D object detection with a closed-set taxonomy and fail to match human-level recognition capability on rapidly evolving perception tasks. Due to the heavy reliance on human labels and the prevalence of data imbalance and sparsity, a unified framework for offboard auto-labeling of the various elements in AD scenes, one that meets the distinct needs of different perception tasks, has not been fully explored. In this paper, we propose a novel multi-modal Zero-shot Offboard Panoptic Perception (ZOPP) framework for autonomous driving scenes. ZOPP integrates the powerful zero-shot recognition capabilities of vision foundation models with 3D representations derived from point clouds. To the best of our knowledge, ZOPP represents a pioneering effort in multi-modal panoptic perception and auto-labeling for autonomous driving scenes. We conduct comprehensive empirical studies and evaluations on the Waymo Open Dataset to validate ZOPP on various perception tasks. To further explore its usability and extensibility, we also conduct experiments on downstream applications. The results demonstrate the great potential of ZOPP for real-world scenarios.