This paper details a planner for visual exploration and search without prior map information. We leverage classical frontier-based planning with both LiDAR and visual sensing and augment it with a pixel-wise environment perception module that contextually labels points in the surroundings from wide field-of-view 2D LiDAR scans. The perception module distinguishes between `map' and `non-map' points in order to provide an informed prior on which to plan next-best viewpoints. The robust, map-free scan classifier used to label pixels in the robot's surroundings is trained on expert data collected with a simple cart platform equipped with a map-based classifier. We propose a novel utility function that accounts for traditional metrics such as information gain and travel cost in addition to the contextual data produced by the classifier. The resulting viewpoints encourage the robot to explore points unlikely to be permanent in the environment, allowing it to locate objects of interest faster than several existing baseline algorithms. The planner is further validated in real-world experiments with single and multiple search objects on a Spot robot in two unseen environments. Videos of the experiments, implementation details, and open-source code can be found at https://sites.google.com/view/lives-2024/home.
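The utility function described above combines information gain and travel cost with the scan classifier's contextual prior. The following is a minimal sketch of one plausible weighted-sum form; the weights, the function shape, and all names here are illustrative assumptions, not the paper's actual formulation.

```python
# Hypothetical sketch of a frontier-viewpoint utility combining the three
# terms named in the abstract: information gain, travel cost, and the
# classifier's contextual prior. The weighted-sum form and all weights
# are assumptions for illustration only.
def viewpoint_utility(info_gain, travel_cost, non_map_fraction,
                      w_gain=1.0, w_cost=0.5, w_context=1.0):
    """Score a candidate viewpoint; higher is better.

    info_gain        -- expected unobserved area revealed at the viewpoint
    travel_cost      -- path length from the robot to the viewpoint
    non_map_fraction -- fraction of nearby scan points labeled `non-map'
                        by the scan classifier (0.0 .. 1.0)
    """
    return (w_gain * info_gain
            - w_cost * travel_cost
            + w_context * non_map_fraction)

# Candidates with equal gain and cost are ranked by the contextual prior,
# steering the search toward regions likely to contain non-permanent objects.
candidates = [
    {"info_gain": 4.0, "travel_cost": 2.0, "non_map_fraction": 0.1},
    {"info_gain": 4.0, "travel_cost": 2.0, "non_map_fraction": 0.8},
]
best = max(candidates, key=lambda c: viewpoint_utility(**c))
```

Under this sketch, `best` is the second candidate: the classifier's `non-map' prior breaks the tie between otherwise identical viewpoints, which is the intended effect of adding the contextual term.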