In autonomous exploration tasks, robots are required to explore and map unknown environments while planning efficiently under dynamic and uncertain conditions. Given the significant variability of environments, human operators often have specific preferences for exploration, such as prioritizing certain areas or optimizing for different aspects of efficiency. However, existing methods struggle to accommodate these human preferences adaptively, often requiring extensive parameter tuning or network retraining. Recent advances in Large Language Models (LLMs), which have been widely applied to text-based planning and complex reasoning, make them increasingly promising for enhancing autonomous exploration. Motivated by this, we propose an LLM-based human-preferred exploration framework that seamlessly integrates a mobile robot system with LLMs. By leveraging the reasoning and adaptability of LLMs, our approach enables intuitive and flexible preference control through natural language while maintaining a task success rate comparable to state-of-the-art traditional methods. Experimental results demonstrate that our framework effectively bridges the gap between human intent and policy preference in autonomous exploration, offering a more user-friendly and adaptable solution for real-world robotic applications.