We study the Inverse Contextual Bandit (ICB) problem, in which a learner optimizes a policy while an observer, who sees only the learner's actions and never its rewards, aims to recover the underlying problem parameters. As learning proceeds, the learner's behavior naturally shifts from exploration to exploitation, producing non-stationary action data that poses a significant challenge for the observer. To address this issue, we propose a simple and effective framework called Two-Phase Suffix Imitation, which discards data from an initial burn-in phase and performs empirical risk minimization using only data from the subsequent imitation phase. We derive a predictive decision loss bound that explicitly characterizes the bias-variance trade-off induced by the choice of burn-in length: a longer burn-in reduces contamination from exploratory actions but leaves fewer samples for estimation. Despite this severe information deficit, we show that a reward-free observer can achieve a convergence rate of $\tilde O(1/\sqrt{N})$, matching the asymptotic efficiency of a fully reward-aware learner. This result demonstrates that a passive observer can effectively uncover the optimal policy from actions alone, attaining performance comparable to that of the learner itself.
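To make the framework concrete, the sketch below implements a minimal version of Two-Phase Suffix Imitation, assuming a linear-reward contextual bandit and a softmax imitation loss as the ERM surrogate; the function `suffix_imitation`, the optimizer settings, and the synthetic decaying-exploration learner are all illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def suffix_imitation(contexts, actions, burn_in, lr=0.5, iters=500):
    """Two-Phase Suffix Imitation (illustrative sketch).

    contexts: array (N, K, d) -- feature vector of each of the K candidate
              actions in each of the N observed rounds.
    actions:  array (N,)      -- index of the action chosen by the learner.
    burn_in:  number of initial (exploration-heavy) rounds to discard.

    Returns an estimate of the reward parameter theta, obtained by
    empirical risk minimization of a softmax imitation loss on the
    suffix rounds only (an assumed surrogate for the paper's ERM step).
    """
    X = contexts[burn_in:]          # keep only the imitation-phase suffix
    A = actions[burn_in:]
    n, k, d = X.shape
    theta = np.zeros(d)
    for _ in range(iters):
        scores = X @ theta                              # (n, k) action scores
        scores -= scores.max(axis=1, keepdims=True)     # numerical stability
        probs = np.exp(scores)
        probs /= probs.sum(axis=1, keepdims=True)
        # Gradient of the mean negative log-likelihood of the chosen actions:
        # E_p[x] - x_{chosen}, averaged over the suffix rounds.
        grad = (np.einsum("nk,nkd->d", probs, X)
                - X[np.arange(n), A].sum(axis=0)) / n
        theta -= lr * grad
    return theta

# Illustrative usage on synthetic data: exploration decays over time,
# so late rounds are near-greedy with respect to <theta*, x_{t,a}>.
rng = np.random.default_rng(0)
N, K, d = 2000, 5, 3
theta_star = rng.normal(size=d)
contexts = rng.normal(size=(N, K, d))
greedy = (contexts @ theta_star).argmax(axis=1)
explore = rng.random(N) < np.linspace(0.9, 0.02, N)   # decaying exploration
actions = np.where(explore, rng.integers(0, K, size=N), greedy)

theta_hat = suffix_imitation(contexts, actions, burn_in=N // 4)
# Only action choices are observed, so the scale of theta is not identified;
# compare directions instead of raw values.
cos = theta_hat @ theta_star / (np.linalg.norm(theta_hat) * np.linalg.norm(theta_star))
print(f"cosine similarity to theta*: {cos:.3f}")
```

Because the observer never sees rewards, the softmax likelihood identifies $\theta$ only up to a positive scale, which is why the sketch evaluates the estimate by direction; the burn-in fraction here (one quarter of the rounds) is an arbitrary choice standing in for the tuned burn-in length analyzed in the bound.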