Dynamic scene understanding remains a persistent challenge in robotic applications. Early dynamic mapping methods focused on mitigating the negative influence of short-term dynamic objects on camera motion estimation by masking or tracking specific categories, but such approaches often fall short in adapting to long-term scene changes. Recent efforts address object association in long-term dynamic environments using neural networks trained on synthetic datasets, yet they still rely on predefined object shapes and categories. Other methods incorporate visual, geometric, or semantic heuristics for association but often lack robustness. In this work, we introduce BYE, a class-agnostic, per-scene point cloud encoder that removes the need for predefined categories, shape priors, or extensive association datasets. Trained on only a single sequence of exploration data, BYE can efficiently perform object association in dynamically changing scenes. We further propose an ensembling scheme that combines the semantic strengths of Vision Language Models (VLMs) with the scene-specific expertise of BYE, achieving a 7% improvement and a 95% success rate in object association tasks. Code and dataset are available at https://byencoder.github.io.