The prevailing paradigm for improving large language models relies on offline training with human annotations or simulated environments, leaving the rich experience accumulated during real-world deployment entirely unexploited. We propose Online Experiential Learning (OEL), a framework that enables language models to continuously improve from their own deployment experience. OEL operates in two stages: first, transferable experiential knowledge is extracted and accumulated from interaction trajectories collected on the user side; second, this knowledge is consolidated into model parameters via on-policy context distillation, requiring no access to the user-side environment. The two stages are iterated to form an online learning loop, in which the improved model collects higher-quality trajectories that yield richer experiential knowledge for subsequent rounds. We evaluate OEL in text-based game environments across multiple model scales, covering both thinking and non-thinking variants. OEL achieves consistent improvements over successive iterations, enhancing both task accuracy and token efficiency while preserving out-of-distribution performance. Our analysis further shows that extracted experiential knowledge is significantly more effective than raw trajectories, and that on-policy consistency between the knowledge source and the policy model is critical for effective learning.
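To make the second stage concrete, the sketch below illustrates on-policy context distillation with a toy causal LM in PyTorch. Everything here (ToyLM, sample, distill_step, the vocabulary and hidden sizes) is an illustrative assumption, not the paper's implementation; the point is only the mechanism described in the abstract: the student samples its own continuations, and its distribution on those tokens is pulled toward a frozen copy of itself that sees the extracted experiential knowledge prepended in context.

```python
# Minimal sketch of on-policy context distillation (Stage 2 of OEL).
# All names and sizes are illustrative assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

V, D = 64, 32  # toy vocabulary and hidden sizes

class ToyLM(nn.Module):
    """Tiny causal LM standing in for the policy model."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(V, D)
        self.rnn = nn.GRU(D, D, batch_first=True)
        self.head = nn.Linear(D, V)

    def forward(self, ids):              # ids: (B, T) -> logits: (B, T, V)
        h, _ = self.rnn(self.emb(ids))
        return self.head(h)

@torch.no_grad()
def sample(model, prompt, steps=8):
    """On-policy rollout: the student samples its own continuation."""
    ids = prompt
    for _ in range(steps):
        probs = F.softmax(model(ids)[:, -1], dim=-1)
        ids = torch.cat([ids, torch.multinomial(probs, 1)], dim=-1)
    return ids[:, prompt.size(1):]

def distill_step(student, teacher, knowledge, prompt, opt):
    """One update: match the student to the knowledge-conditioned teacher
    on the student's own samples. The knowledge tokens (the extracted
    experiential knowledge) are prepended only to the teacher's context."""
    cont = sample(student, prompt)                          # (B, K) on-policy
    s_logits = student(torch.cat([prompt, cont], dim=1))
    with torch.no_grad():
        t_logits = teacher(torch.cat([knowledge, prompt, cont], dim=1))
    k = cont.size(1)                                        # align continuation positions
    s = F.log_softmax(s_logits[:, -k - 1:-1], dim=-1)
    t = F.softmax(t_logits[:, -k - 1:-1], dim=-1)
    loss = F.kl_div(s, t, reduction="batchmean")            # KL(teacher || student)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

student = ToyLM()
teacher = ToyLM()
teacher.load_state_dict(student.state_dict())               # frozen copy of the policy
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
knowledge = torch.randint(0, V, (2, 6))  # placeholder "experiential knowledge" tokens
prompt = torch.randint(0, V, (2, 4))
print(distill_step(student, teacher, knowledge, prompt, opt))
```

Sampling the continuations from the student itself, rather than from a fixed corpus, is what makes the distillation on-policy; the abstract's analysis indicates that this consistency between the knowledge source and the policy model is critical for the consolidation to be effective.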