This paper introduces a novel approach, Counterfactual Shapley Values (CSV), which enhances explainability in reinforcement learning (RL) by integrating counterfactual analysis with Shapley values. The approach quantifies and compares the contributions of different state dimensions to various action choices. To analyze these impacts more accurately, we introduce two new characteristic value functions: the ``Counterfactual Difference Characteristic Value'' and the ``Average Counterfactual Difference Characteristic Value.'' These functions are used to compute Shapley values that evaluate the differences in contributions between optimal and non-optimal actions. Experiments across several RL domains, including GridWorld, FrozenLake, and Taxi, demonstrate the effectiveness of the CSV method. The results show that this method not only improves transparency in complex RL systems but also quantifies the contribution differences across decisions.
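The idea above can be sketched in code: an exact Shapley-value computation over state dimensions, driven by a characteristic function that measures the counterfactual difference in Q-values between a chosen action and an alternative. This is a minimal illustration under assumed definitions; the toy Q-function, the baseline used to mask out-of-coalition dimensions, and the names `v`, `q`, and `baseline` are all illustrative assumptions, not the paper's implementation.

```python
import itertools
import math

def shapley_values(features, v):
    """Exact Shapley values for a small feature set.

    v maps a frozenset coalition of feature names to a real number
    (here: a counterfactual difference in Q-values).
    """
    n = len(features)
    phi = {}
    for i in features:
        others = [f for f in features if f != i]
        total = 0.0
        for r in range(n):
            for combo in itertools.combinations(others, r):
                S = frozenset(combo)
                # Standard Shapley weight |S|! (n - |S| - 1)! / n!
                w = (math.factorial(len(S)) * math.factorial(n - len(S) - 1)
                     / math.factorial(n))
                total += w * (v(S | {i}) - v(S))
        phi[i] = total
    return phi

# Illustrative two-dimensional state; dimensions outside the coalition
# are replaced by a baseline value (a simplifying assumption).
baseline = {"x": 0, "y": 0}
state = {"x": 2, "y": 3}

def q(s, a):
    # Toy Q-function for demonstration only.
    return (s["x"] + 2 * s["y"]) if a == "up" else (s["x"] - s["y"])

def v(coalition):
    # Counterfactual-difference-style characteristic value: how much the
    # coalition's dimensions shift the preference for "up" over "down".
    s = {f: (state[f] if f in coalition else baseline[f]) for f in state}
    return q(s, "up") - q(s, "down")

print(shapley_values(["x", "y"], v))  # attribution per state dimension
```

By the efficiency property, the attributions sum to `v(full state) - v(baseline)`, so each dimension's score can be read as its share of the Q-value gap between the two actions.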