Radar sensors are low cost, long-range, and weather-resilient. Therefore, they are widely used for driver assistance functions, and are expected to be crucial for the success of autonomous driving in the future. In many perception tasks only pre-processed radar point clouds are considered. In contrast, radar spectra are a raw form of radar measurements and contain more information than radar point clouds. However, radar spectra are rather difficult to interpret. In this work, we aim to explore the semantic information contained in spectra in the context of automated driving, thereby moving towards better interpretability of radar spectra. To this end, we create a radar spectra-language model, allowing us to query radar spectra measurements for the presence of scene elements using free text. We overcome the scarcity of radar spectra data by matching the embedding space of an existing vision-language model (VLM). Finally, we explore the benefit of the learned representation for scene parsing, and obtain improvements in free space segmentation and object detection merely by injecting the spectra embedding into a baseline model.
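The core idea of "matching the embedding space of an existing VLM" can be sketched as a distillation objective: a radar spectra encoder is trained so that its embedding of a spectra frame lands close to the frozen VLM image embedding of the paired camera frame. The following is a minimal sketch of such an alignment loss, assuming a cosine-distance objective and 512-dimensional embeddings; the actual encoders, loss, and embedding size used in the work are not specified here.

```python
import numpy as np

def l2_normalize(x, eps=1e-8):
    """Normalize each row to unit length, as is standard in CLIP-style spaces."""
    return x / (np.linalg.norm(x, axis=-1, keepdims=True) + eps)

def alignment_loss(spectra_emb, vlm_emb):
    """Mean cosine distance between radar-spectra embeddings and the
    frozen VLM image embeddings of the paired camera frames.
    0 = perfectly aligned, ~1 = uncorrelated."""
    s = l2_normalize(spectra_emb)
    v = l2_normalize(vlm_emb)
    return float(np.mean(1.0 - np.sum(s * v, axis=-1)))

# Toy batch: 4 paired embeddings (dimension 512 is illustrative).
rng = np.random.default_rng(0)
vlm = rng.normal(size=(4, 512))            # stands in for frozen VLM outputs
spectra = vlm + 0.1 * rng.normal(size=(4, 512))  # a nearly aligned spectra encoder
loss_aligned = alignment_loss(spectra, vlm)
loss_random = alignment_loss(rng.normal(size=(4, 512)), vlm)
```

Minimizing such a loss pulls the spectra embeddings into the VLM's joint image-text space, which is what makes free-text querying possible without radar-text training pairs. The resulting embedding is also what gets injected into the baseline models for free space segmentation and object detection.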