In inverse reinforcement learning (IRL) with a single expert, adversarial inverse reinforcement learning (AIRL) is a foundational approach to recovering comprehensive and transferable task descriptions by restricting the reward class, e.g., to state-only rewards. In practice, however, AIRL faces a key difficulty: the transition matrix is typically unobservable, so the specific conditions it must satisfy for effective transfer cannot be verified. This paper reexamines AIRL in the setting of an unobservable transition matrix or limited informative priors. Applying random matrix theory (RMT), we show that AIRL can disentangle rewards for effective transfer with high probability, irrespective of these specific conditions. This perspective reframes the inadequate transfer observed in certain contexts, attributing it instead to the choice of the reinforcement learning algorithm employed within AIRL, a choice characterized by training variance. Based on this insight, we propose a hybrid framework that combines on-policy proximal policy optimization (PPO) in the source environment with off-policy soft actor-critic (SAC) in the target environment, yielding significant improvements in reward transfer effectiveness.
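A minimal sketch of the target-side transfer step described above, assuming gymnasium and stable-baselines3 are available. The `learned_state_reward` function is a hypothetical placeholder for the state-only reward recovered by AIRL (trained with on-policy PPO in the source environment); the source-side AIRL training itself is not shown.

```python
# Sketch: reuse an AIRL-recovered state-only reward in a target environment
# and retrain with off-policy SAC, mirroring the hybrid PPO (source) / SAC
# (target) pipeline outlined in the abstract.
import gymnasium as gym
import numpy as np
from stable_baselines3 import SAC


def learned_state_reward(state: np.ndarray) -> float:
    """Placeholder for the disentangled state-only reward r(s) recovered by AIRL."""
    return float(-np.linalg.norm(state))  # stand-in; the real reward comes from AIRL


class LearnedRewardWrapper(gym.Wrapper):
    """Replace the environment's native reward with the transferred state-only reward."""

    def step(self, action):
        obs, _, terminated, truncated, info = self.env.step(action)
        reward = learned_state_reward(np.asarray(obs))
        return obs, reward, terminated, truncated, info


# Target environment, e.g., a dynamics-shifted variant of the source task.
target_env = LearnedRewardWrapper(gym.make("Pendulum-v1"))

# Off-policy SAC learns a new policy in the target environment using only the
# transferred reward, which is the transfer step whose effectiveness is evaluated.
agent = SAC("MlpPolicy", target_env, verbose=0)
agent.learn(total_timesteps=10_000)
```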