Whenever humans and robots work together, it is essential that unexpected robot behavior can be explained to the user. Especially in applications such as shared control, the user and the robot must share the same model of the objects in the world and the actions that can be performed on these objects. In this paper, we achieve this with a so-called model reconciliation framework. We leverage a Large Language Model to predict and explain the difference between the robot's and the human's mental models, without the need for a formal mental model of the user. Furthermore, our framework aims to resolve the model divergence after the explanation by allowing the human to correct the robot. We provide an implementation in an assistive robotics domain, where we conduct a set of experiments with a real wheelchair-based mobile manipulator and its digital twin.