Understanding an agent's learning process, particularly the factors that contribute to its success or failure after training, is crucial for understanding the rationale behind the agent's decisions. Prior methods explain the learning process by constructing a structural causal model (SCM) or by visualizing the distribution of value functions. However, these approaches are limited to 2D environments or settings with simple transition dynamics; explaining the learning process in complex environments or tasks remains challenging. In this paper, we propose REVEAL-IT, a novel framework for explaining an agent's learning process in complex environments. We first visualize the policy structure and the agent's learning process across various training tasks. These visualizations reveal how much a particular training task or stage affects the agent's performance at test time. A GNN-based explainer then learns to highlight the most important sections of the policy, providing a clearer and more robust explanation of the learning process. Experiments demonstrate that the explanations derived from this framework can effectively guide the optimization of training tasks, leading to improved learning efficiency and final performance.