Imitation learning (IL) is a general paradigm for learning from experts in sequential decision-making problems. Recent advances in IL have shown that offline imitation learning, specifically Behavior Cloning (BC) with log loss, is minimax optimal, while its interactive counterpart, DAgger, is shown to suffer from suboptimal sample complexity. In this note, we focus on the realizable setting with a deterministic expert and revisit interactive imitation learning, particularly DAgger with log loss. We demonstrate: 1. A one-sample-per-round DAgger variant that outperforms BC in state-wise annotation. 2. Without a recoverability assumption, DAgger with first-step mixture policies matches the performance of BC. Along the way, we introduce a new notion of decoupled Hellinger distance that separates state and action sequences, which may be of independent interest.