In this work, we study an inverse reinforcement learning (IRL) problem in which the experts plan under a shared reward function but with different, unknown planning horizons. Without knowledge of the discount factors, the reward function admits a larger feasible solution set, which makes it harder for existing IRL approaches to identify the reward function. To overcome this challenge, we develop algorithms that learn a global multi-agent reward function together with agent-specific discount factors that reconstruct the expert policies. We characterize the feasible solution space of the reward function and the discount factors for both algorithms, and we demonstrate the generalizability of the learned reward function across multiple domains.
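As a minimal sketch of why unknown discount factors enlarge the feasible set, consider the standard single-agent characterization of Ng and Russell (2000) for a finite MDP with a deterministic expert policy $\pi$ (the notation $P_\pi$, $P_a$, and $\mathcal{R}_\gamma$ is introduced here for illustration and is not the paper's multi-agent formulation). For a fixed, known discount factor $\gamma$, the rewards consistent with $\pi$ form
\[
\mathcal{R}_\gamma = \bigl\{\, R : (P_\pi - P_a)\,(I - \gamma P_\pi)^{-1} R \succeq 0 \ \text{ for all actions } a \,\bigr\},
\]
where $P_\pi$ and $P_a$ denote the state-transition matrices under $\pi$ and under action $a$, respectively. When $\gamma$ is unknown, any reward feasible for some admissible discount factor remains a candidate, so the solution set grows to the union
\[
\mathcal{R} = \bigcup_{\gamma \in (0,1)} \mathcal{R}_\gamma \;\supseteq\; \mathcal{R}_{\gamma^\ast},
\]
which is generally a strict superset of the feasible set $\mathcal{R}_{\gamma^\ast}$ under the true discount factor $\gamma^\ast$.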