Building effective human-robot interaction requires robots to derive conclusions from their experiences that are both logically sound and communicated in ways aligned with human expectations. This paper presents a hybrid framework that blends ontology-based reasoning with large language models (LLMs) to produce semantically grounded and natural robot explanations. Ontologies ensure logical consistency and domain grounding, while LLMs provide fluent, context-aware, and adaptive language generation. The proposed method grounds data from human-robot experiences, enabling robots to reason about whether events are typical or atypical based on their properties. We integrate a state-of-the-art algorithm for retrieving and constructing static contrastive ontology-based narratives with an LLM agent that uses them to produce concise, clear, and interactive explanations. The approach is validated through a laboratory study replicating an industrial collaborative task. Empirical results show significant improvements in the clarity and brevity of ontology-based narratives while preserving their semantic accuracy. Initial evaluations further demonstrate the system's ability to adapt explanations to user feedback. Overall, this work highlights the potential of ontology-LLM integration to advance explainable agency and promote more transparent human-robot collaboration.