We present a novel framework that extends model reconciliation approaches, commonly used in human-aware planning, to enhance human-AI interaction. By adopting a structured argumentation-based dialogue paradigm, our framework enables dialectical reconciliation to address knowledge discrepancies between an explainer (AI agent) and an explainee (human user), where the goal is for the explainee to understand the explainer's decision. We formally describe the operational semantics of our framework and provide theoretical guarantees. We then evaluate its efficacy ``in the wild'' via computational and human-subject experiments. Our findings suggest that our framework offers a promising direction for fostering effective human-AI interactions in domains where explainability is important.