Given a dataset of expert demonstrations, inverse reinforcement learning (IRL) aims to recover a reward for which the expert is optimal. This work proposes a model-free algorithm to solve the entropy-regularized IRL problem. In particular, we employ a stochastic gradient descent update for the reward and a stochastic soft policy iteration update for the policy. Assuming access to a generative model, we prove that our algorithm is guaranteed to recover a reward for which the expert is $\varepsilon$-optimal using $\mathcal{O}(1/\varepsilon^{2})$ samples from the Markov decision process (MDP). Furthermore, we prove that with $\mathcal{O}(1/\varepsilon^{4})$ samples the optimal policy corresponding to the recovered reward is $\varepsilon$-close to the expert policy in total variation distance.
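To make the two coupled updates concrete, the following is a minimal sketch on a toy tabular MDP, not the paper's algorithm: it assumes a small random MDP with a generative simulator, uses a one-sample TD-style form of stochastic soft policy iteration for the policy, and takes the reward gradient to be the expert-vs-learner occupancy gap (the standard max-entropy IRL direction; "descent" on the negative log-likelihood). All names and constants (`S`, `A`, `tau`, `sample_next`, `occupancy`, the step sizes) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy entropy-regularized MDP with a generative model (illustrative sizes).
S, A, gamma, tau = 5, 3, 0.9, 0.1           # states, actions, discount, entropy temperature
P = rng.dirichlet(np.ones(S), size=(S, A))  # P[s, a] = distribution over next states

def sample_next(s, a):
    """Generative model: draw s' ~ P(. | s, a)."""
    return rng.choice(S, p=P[s, a])

def soft_v(Q):
    """Entropy-regularized value tau * log sum_a exp(Q(s, a) / tau), computed stably."""
    z = Q / tau
    m = z.max(axis=-1, keepdims=True)
    return tau * (m[..., 0] + np.log(np.exp(z - m).sum(axis=-1)))

def soft_policy(Q):
    """Soft policy pi(a | s) proportional to exp(Q(s, a) / tau)."""
    z = Q / tau - (Q / tau).max(axis=-1, keepdims=True)
    pi = np.exp(z)
    return pi / pi.sum(axis=-1, keepdims=True)

def occupancy(pi, n_rollouts=200, horizon=40):
    """Monte-Carlo estimate of the normalized discounted state-action occupancy of pi."""
    mu = np.zeros((S, A))
    for _ in range(n_rollouts):
        s = rng.integers(S)
        for t in range(horizon):
            a = rng.choice(A, p=pi[s])
            mu[s, a] += gamma ** t
            s = sample_next(s, a)
    return (1 - gamma) * mu / n_rollouts

# Stand-in "expert": the soft-optimal policy for a hidden ground-truth reward,
# obtained here by exact soft value iteration (the paper assumes demonstrations instead).
r_true = rng.standard_normal((S, A))
Q_exp = np.zeros((S, A))
for _ in range(500):
    Q_exp = r_true + gamma * P @ soft_v(Q_exp)
mu_E = occupancy(soft_policy(Q_exp))

# Coupled loop: stochastic soft policy iteration on Q, stochastic gradient step on r.
r, Q = np.zeros((S, A)), np.zeros((S, A))
for k in range(300):
    # One generative-model sample per (s, a): a TD-style soft policy iteration update.
    for s in range(S):
        for a in range(A):
            s2 = sample_next(s, a)
            Q[s, a] += 0.1 * (r[s, a] + gamma * soft_v(Q[s2]) - Q[s, a])
    # Stochastic reward gradient step: shrink the learner-vs-expert occupancy gap.
    r += 0.05 * (mu_E - occupancy(soft_policy(Q), n_rollouts=20))

print("expert vs. recovered-policy occupancy gap:",
      np.abs(mu_E - occupancy(soft_policy(Q))).sum())
```

Under this reading, the abstract's two sample complexities attach to the two loops above: the $\mathcal{O}(1/\varepsilon^{2})$ rate to the reward iterates and the $\mathcal{O}(1/\varepsilon^{4})$ rate to the policy recovered from them.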