Human models play a crucial role in human-robot interaction (HRI), enabling robots to consider the impact of their actions on people and plan their behavior accordingly. However, crafting good human models is challenging; capturing context-dependent human behavior requires significant prior knowledge and/or large amounts of interaction data, both of which are difficult to obtain. In this work, we explore the potential of large language models (LLMs) -- which have consumed vast amounts of human-generated text data -- to act as zero-shot human models for HRI. Our experiments on three social datasets yield promising results; the LLMs achieve performance comparable to that of purpose-built models. That said, we also discuss current limitations, such as sensitivity to prompts and spatial/numerical reasoning mishaps. Based on our findings, we demonstrate how LLM-based human models can be integrated into a social robot's planning process and applied in HRI scenarios. Specifically, we present one case study on a simulated trust-based table-clearing task and replicate past results that relied on custom models. Next, we conduct a new robot utensil-passing experiment (n = 65), where preliminary results show that planning with an LLM-based human model can achieve gains over a basic myopic plan. In summary, our results show that LLMs offer a promising (but incomplete) approach to human modeling for HRI.
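To make the zero-shot human-model idea concrete, the sketch below shows one plausible query pattern: describe the interaction context in a prompt, ask the LLM for the human's likely reaction, and map the free-form completion to a discrete label a planner can use. The prompt wording, the label set, and the `fake_llm` stub are illustrative assumptions for this sketch, not the paper's actual implementation.

```python
# Minimal sketch of an LLM as a zero-shot human model.
# The prompt template, label set, and the `fake_llm` stub below are
# illustrative assumptions, not the implementation from the paper.

def build_prompt(context: str, robot_action: str, labels: list[str]) -> str:
    """Describe the scenario and ask for the human's likely reaction."""
    return (
        f"Scenario: {context}\n"
        f"The robot now does the following: {robot_action}\n"
        f"How does the person most likely react? "
        f"Answer with one of: {', '.join(labels)}.\nAnswer:"
    )

def parse_reaction(completion: str, labels: list[str]) -> str:
    """Map a free-form LLM completion onto one of the allowed labels."""
    text = completion.lower()
    for label in labels:
        if label.lower() in text:
            return label
    return "unknown"

def fake_llm(prompt: str) -> str:
    # Stand-in for a real LLM API call so the sketch runs offline.
    return "The person would likely Intervene, since the glass looks fragile."

LABELS = ["Trust", "Intervene"]
prompt = build_prompt(
    context="A person watches a robot clear a table holding a fragile glass.",
    robot_action="The robot reaches for the fragile glass.",
    labels=LABELS,
)
prediction = parse_reaction(fake_llm(prompt), LABELS)
print(prediction)  # -> Intervene
```

A planner could then condition on such predicted reactions, e.g. deferring a risky grasp when the model predicts "Intervene", which is the spirit of the trust-based table-clearing case study described above.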