In this paper, we study Interaction-Grounded Learning (IGL) [Xie et al., 2021], a paradigm designed for realistic scenarios where the learner receives indirect feedback generated by an unknown mechanism, rather than explicit numerical rewards. While prior work on IGL provides efficient algorithms with provable guarantees, these results are confined to single-step settings, restricting their applicability to modern sequential decision-making systems such as multi-turn Large Language Model (LLM) deployments. To bridge this gap, we propose a computationally efficient algorithm that achieves a sublinear regret guarantee for contextual episodic Markov Decision Processes (MDPs) with personalized feedback. Technically, we extend the reward-estimator construction of Zhang et al. [2024a] from the single-step to the multi-step setting, addressing the unique challenges of decoding latent rewards in episodic MDPs. Building on this estimator, we design an Inverse-Gap-Weighting (IGW) algorithm for policy optimization. Finally, we demonstrate the effectiveness of our method in learning personalized objectives from multi-turn interactions through experiments on both a synthetic episodic MDP and a real-world user booking dataset.
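To make the exploration step concrete, the sketch below shows the standard single-step inverse-gap-weighting rule from the contextual bandit literature: reward estimates are converted into a sampling distribution that places more probability on actions whose estimated gap to the greedy action is small. The function name igw_action_distribution, the estimate vector f_hat, and the scalar exploration parameter gamma are illustrative assumptions; this is not the paper's multi-step algorithm, only a minimal sketch of the single-step rule it builds on.

```python
import numpy as np

def igw_action_distribution(f_hat: np.ndarray, gamma: float) -> np.ndarray:
    """Inverse-gap-weighting over K actions given reward estimates f_hat.

    f_hat : shape (K,), estimated rewards for the K actions in the current
            context (in IGL these would be decoded from indirect feedback).
    gamma : exploration parameter > 0; larger values concentrate mass on
            the greedy action.
    """
    K = f_hat.shape[0]
    best = int(np.argmax(f_hat))
    gaps = f_hat[best] - f_hat          # estimated reward gaps, zero at argmax
    probs = 1.0 / (K + gamma * gaps)    # smaller gap -> more exploration mass
    probs[best] = 0.0
    probs[best] = 1.0 - probs.sum()     # leftover mass goes to the greedy action
    return probs

# Illustrative usage with hypothetical reward estimates.
rng = np.random.default_rng(0)
f_hat = np.array([0.2, 0.7, 0.5])
p = igw_action_distribution(f_hat, gamma=10.0)
action = rng.choice(len(p), p=p)
```

Since each non-greedy probability is at most 1/K, the leftover mass on the greedy action is always positive, so the rule yields a valid distribution for any gamma > 0.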