Autonomous vehicles performing navigation tasks in complex environments face significant challenges due to uncertainty in state estimation. In many scenarios, such as stealth operations or resource-constrained settings, high-precision localization is costly to obtain, forcing robots to rely primarily on less precise state estimates. Our key observation is that different tasks require varying levels of precision in different regions: a robot navigating a crowded space might need precise localization near obstacles but can operate effectively with less precision elsewhere. In this paper, we present a planning method that integrates task-specific uncertainty requirements directly into navigation policies. We introduce Task-Specific Uncertainty Maps (TSUMs), which abstract the acceptable levels of state estimation uncertainty across different regions. TSUMs align task requirements and environmental features in a shared representation space generated by a domain-adapted encoder. Using TSUMs, we propose Generalized Uncertainty Integration for Decision-Making and Execution (GUIDE), a policy conditioning framework that incorporates these uncertainty requirements into robot decision-making. We find that TSUMs provide an effective abstraction of task-specific uncertainty requirements, and that conditioning policies on TSUMs enables the robot to reason about the context-dependent value of certainty and adapt its behavior accordingly. We show that integrating GUIDE into reinforcement learning frameworks allows the agent to learn navigation policies that balance task completion and uncertainty management without explicit reward engineering. We evaluate GUIDE on a range of real-world robotic navigation tasks and find that it achieves substantially higher task completion rates than baseline methods that do not explicitly consider task-specific uncertainty.
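To make the idea of conditioning a policy on a TSUM concrete, the following is a minimal illustrative sketch rather than the paper's implementation: it assumes a TSUM represented as a grid of acceptable uncertainty thresholds over a 2D workspace, and a policy observation augmented with both the current estimate's uncertainty and the locally acceptable level. The names `TaskSpecificUncertaintyMap` and `conditioned_observation`, and the grid-threshold representation, are hypothetical choices for illustration only.

```python
import numpy as np

# Illustrative sketch (not the authors' code): a TSUM as a grid of acceptable
# localization-uncertainty thresholds, queried to condition a policy observation.

class TaskSpecificUncertaintyMap:
    """Assumed form: grid of acceptable state-estimation uncertainty per region."""

    def __init__(self, thresholds: np.ndarray, cell_size: float = 1.0):
        self.thresholds = thresholds  # shape (H, W), e.g. acceptable std-dev in meters
        self.cell_size = cell_size

    def query(self, xy: np.ndarray) -> float:
        """Return the acceptable uncertainty at a continuous position (x, y)."""
        i = int(np.clip(xy[1] // self.cell_size, 0, self.thresholds.shape[0] - 1))
        j = int(np.clip(xy[0] // self.cell_size, 0, self.thresholds.shape[1] - 1))
        return float(self.thresholds[i, j])


def conditioned_observation(state_estimate: np.ndarray,
                            estimate_std: float,
                            tsum: TaskSpecificUncertaintyMap) -> np.ndarray:
    """Augment the raw state estimate with (current uncertainty, acceptable uncertainty).

    A GUIDE-style policy consuming this observation can trade off task progress
    against reducing uncertainty only where the TSUM demands it.
    """
    acceptable = tsum.query(state_estimate[:2])
    return np.concatenate([state_estimate, [estimate_std, acceptable]])


if __name__ == "__main__":
    # Tight (0.1 m) requirement near an obstacle region, loose (1.0 m) elsewhere.
    grid = np.full((10, 10), 1.0)
    grid[4:6, 4:6] = 0.1
    tsum = TaskSpecificUncertaintyMap(grid)

    obs = conditioned_observation(np.array([4.5, 4.5, 0.0]), estimate_std=0.3, tsum=tsum)
    print(obs)  # -> [4.5, 4.5, 0.0, 0.3, 0.1]: current uncertainty exceeds the local requirement
```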