Language-conditioned local navigation requires a robot to infer a nearby traversable target location from its current observation and an open-vocabulary, relational instruction. Existing vision-language spatial grounding methods usually rely on vision-language models (VLMs) to reason in image space, producing 2D predictions tied to visible pixels. As a result, they struggle to infer target locations in occluded regions, typically behind furniture or moving humans. To address this issue, we propose BEACON, which predicts an ego-centric Bird's-Eye View (BEV) affordance heatmap over a bounded local region that includes occluded areas. Given an instruction and surround-view RGB-D observations from four directions around the robot, BEACON predicts the BEV heatmap by injecting spatial cues into a VLM and fusing the VLM's output with depth-derived BEV features. Using an occlusion-aware dataset built in the Habitat simulator, we conduct a detailed experimental analysis to validate both our BEV-space formulation and the design choices of each module. Our method improves accuracy, averaged across geodesic-distance thresholds, by 22.74 percentage points over the state-of-the-art image-space baseline on the validation subset with occluded target locations. Our project page is: https://xin-yu-gao.github.io/beacon.
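To make the BEV formulation concrete, the sketch below illustrates one way the pieces described above could fit together: surround-view RGB-D points scattered into a bounded ego-centric BEV grid, then fused with language-conditioned features to produce an affordance heatmap. All names, grid parameters, and the fusion head are illustrative assumptions for exposition, not BEACON's actual implementation.

```python
# Minimal sketch of the BEV affordance-heatmap idea (illustrative only).
import numpy as np
import torch
import torch.nn as nn


def depth_to_bev_occupancy(points_xyz, bev_range=5.0, grid_size=64):
    """Scatter ego-frame 3D points into a bounded ego-centric BEV occupancy grid.

    points_xyz: (N, 3) points in the robot frame, assumed already merged from
    the four surround-view RGB-D cameras (an assumption of this sketch).
    """
    occ = np.zeros((grid_size, grid_size), dtype=np.float32)
    # Keep only points inside the bounded local region around the robot.
    mask = (np.abs(points_xyz[:, 0]) < bev_range) & (np.abs(points_xyz[:, 1]) < bev_range)
    pts = points_xyz[mask]
    # Map metric x/y coordinates to BEV grid indices.
    ij = ((pts[:, :2] + bev_range) / (2 * bev_range) * (grid_size - 1)).astype(np.int64)
    occ[ij[:, 0], ij[:, 1]] = 1.0
    return occ


class BEVFusionHead(nn.Module):
    """Toy fusion head: concatenate VLM-derived and depth-derived BEV features,
    then predict a single-channel affordance heatmap over the BEV grid."""

    def __init__(self, vlm_channels=8, depth_channels=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(vlm_channels + depth_channels, 16, 3, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, vlm_bev, depth_bev):
        fused = torch.cat([vlm_bev, depth_bev], dim=1)
        return torch.sigmoid(self.net(fused))  # (B, 1, H, W) heatmap in [0, 1]


if __name__ == "__main__":
    # Random stand-ins for merged RGB-D points and language-conditioned VLM features.
    points = np.random.uniform(-6, 6, size=(10000, 3))
    occ = torch.from_numpy(depth_to_bev_occupancy(points))[None, None]  # (1, 1, 64, 64)
    vlm_feat = torch.randn(1, 8, 64, 64)  # placeholder for VLM output rasterized to BEV
    heatmap = BEVFusionHead()(vlm_feat, occ)
    print(heatmap.shape)  # torch.Size([1, 1, 64, 64])
```

The key design point this toy example mirrors is that the prediction target lives in a metric BEV grid around the robot, so cells behind occluders can still receive affordance mass even though no visible pixel corresponds to them.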