Autonomous mobile robots deployed in urban environments must be context-aware, i.e., able to distinguish between different semantic entities, and robust to occlusions. Current approaches like semantic scene completion (SSC) require pre-enumerating the set of classes and costly human annotations, while representation learning methods relax these assumptions but are not robust to occlusions and learn representations tailored to auxiliary tasks. To address these limitations, we propose LSMap, a method that lifts masks from visual foundation models to predict a continuous, open-set semantic and elevation-aware representation in bird's eye view (BEV) for the entire scene, including regions underneath dynamic entities and in occluded areas. Our model requires only a single RGBD image, does not require human labels, and operates in real time. We quantitatively demonstrate that, with finetuning, our approach outperforms existing models trained from scratch on semantic and elevation scene completion tasks. Furthermore, we show that our pre-trained representation outperforms existing visual foundation models at unsupervised semantic scene completion. We evaluate our approach using CODa, a large-scale, real-world urban robot dataset. Supplementary visualizations, code, data, and pre-trained models will be made publicly available.
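To make the input/output contract concrete, below is a minimal sketch (not the authors' code) of the assumed inference interface: a single RGBD frame in, a dense BEV map out, where every BEV cell carries a continuous open-set semantic feature and an elevation estimate, including occluded cells and cells under dynamic entities. The function name, image resolution, grid size, and feature dimension are all hypothetical placeholders.

```python
# Hedged sketch of the assumed LSMap inference interface; all names and
# dimensions below are illustrative assumptions, not the released model.
import numpy as np

H, W = 480, 640          # hypothetical RGBD image resolution
BEV_H, BEV_W = 256, 256  # hypothetical BEV grid size
FEAT_DIM = 64            # hypothetical continuous open-set feature dimension

def lsmap_infer(rgb: np.ndarray, depth: np.ndarray):
    """Placeholder standing in for the LSMap network.

    rgb:   (H, W, 3) uint8 color image
    depth: (H, W)    float32 depth in meters
    Returns a per-cell semantic feature map and an elevation map covering
    the whole BEV scene, including occluded regions.
    """
    assert rgb.shape[:2] == (H, W) and depth.shape == (H, W)
    semantic_bev = np.zeros((BEV_H, BEV_W, FEAT_DIM), dtype=np.float32)
    elevation_bev = np.zeros((BEV_H, BEV_W), dtype=np.float32)
    return semantic_bev, elevation_bev

# Usage example with dummy inputs.
rgb = np.zeros((H, W, 3), dtype=np.uint8)
depth = np.ones((H, W), dtype=np.float32)
sem, elev = lsmap_infer(rgb, depth)
print(sem.shape, elev.shape)  # (256, 256, 64) (256, 256)
```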