When automating plan generation for a real-world sequential decision problem, the goal is often not to replace the human planner, but to facilitate an iterative reasoning and elicitation process, where the human's role is to guide the AI planner according to their preferences and expertise. In this context, explanations that respond to users' questions are crucial to improve their understanding of potential solutions and increase their trust in the system. To enable natural interaction with such a system, we present a multi-agent Large Language Model (LLM) architecture that is agnostic to the explanation framework and enables user- and context-dependent interactive explanations. We also describe an instantiation of this framework for goal-conflict explanations, which we use to conduct a user study comparing the LLM-powered interaction with a baseline template-based explanation interface.