Inverse reinforcement learning (IRL) and dynamic discrete choice (DDC) models explain sequential decision-making by recovering reward functions that rationalize observed behavior. Flexible IRL methods typically rely on machine learning but provide no guarantees for valid inference, while classical DDC approaches impose restrictive parametric specifications and often require repeated dynamic programming. We develop a semiparametric framework for debiased inverse reinforcement learning that yields statistically efficient inference for a broad class of reward-dependent functionals in maximum entropy IRL and Gumbel-shock DDC models. We show that the log-behavior policy acts as a pseudo-reward that point-identifies policy value differences and, under a simple normalization, the reward itself. We then formalize these targets, including policy values under known and counterfactual softmax policies and functionals of the normalized reward, as smooth functionals of the behavior policy and transition kernel, establish pathwise differentiability, and derive their efficient influence functions. Building on this characterization, we construct automatic debiased machine-learning estimators that allow flexible nonparametric estimation of nuisance components while achieving $\sqrt{n}$-consistency, asymptotic normality, and semiparametric efficiency. Our framework extends classical inference for DDC models to nonparametric rewards and modern machine-learning tools, providing a unified and computationally tractable approach to statistical inference in IRL.
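For intuition on the pseudo-reward claim, the display below is a minimal sketch of the standard soft-Bellman (Hotz–Miller-type) relations that motivate it; the notation here (discount factor $\gamma$, soft value $V$, soft action value $Q$) is generic, and the paper's own definitions, targets, and normalization take precedence.

\begin{align*}
\pi_b(a \mid s) &= \frac{\exp\{Q(s,a)\}}{\sum_{a'}\exp\{Q(s,a')\}}, &
V(s) &= \log \sum_{a}\exp\{Q(s,a)\},\\
Q(s,a) &= r(s,a) + \gamma\,\mathbb{E}\!\left[V(S') \mid S=s,\, A=a\right], &
\log \pi_b(a \mid s) &= Q(s,a) - V(s).
\end{align*}

Combining these identities gives $\log \pi_b(a \mid s) = r(s,a) + \gamma\,\mathbb{E}[V(S') \mid s,a] - V(s)$, so the log-behavior policy differs from the true reward only by a potential-like correction that cancels when comparing policy values; treating $\log \pi_b$ as a pseudo-reward therefore identifies policy value differences, and an additional normalization (for instance, fixing the reward of a reference action) pins down $r$ itself.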