In-context reinforcement learning (ICRL) promises fast adaptation to unseen environments without parameter updates, but current methods either cannot improve beyond the training distribution or require near-optimal data, limiting practical adoption. We introduce SPICE, a Bayesian ICRL method that learns a prior over Q-values with a deep ensemble and refines this prior at test time through Bayesian updates on in-context information. To recover from poor priors induced by training on suboptimal data, our online inference follows an Upper-Confidence Bound (UCB) rule that favours exploration and adaptation. We prove that SPICE achieves regret-optimal behaviour in both stochastic bandits and finite-horizon MDPs, even when pretrained only on suboptimal trajectories. We validate these findings empirically across bandit and control benchmarks: SPICE makes near-optimal decisions on unseen tasks and substantially reduces regret compared to prior ICRL and meta-RL approaches, while adapting rapidly and remaining robust under distribution shift.
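The core loop described above can be illustrated with a small sketch: an ensemble of pretrained Q-value predictions defines a Gaussian prior per action, in-context rewards refine that prior via conjugate Gaussian updates, and actions are chosen by a UCB rule on the resulting posterior. This is a minimal illustration under simplifying assumptions (a K-armed bandit, Gaussian rewards with known noise), not the SPICE implementation; the names `EnsembleQPrior`, `obs_noise`, and `beta` are hypothetical.

```python
# Hypothetical sketch: ensemble-as-prior over Q-values + Bayesian updates + UCB action selection.
import numpy as np

class EnsembleQPrior:
    """Gaussian prior over Q-values for a K-armed bandit, built from an ensemble."""

    def __init__(self, ensemble_q: np.ndarray, obs_noise: float = 1.0):
        # ensemble_q: (n_members, n_arms) Q-value predictions from a pretrained ensemble
        self.mu = ensemble_q.mean(axis=0)          # prior mean per arm
        self.var = ensemble_q.var(axis=0) + 1e-6   # prior variance per arm (ensemble disagreement)
        self.obs_var = obs_noise ** 2              # assumed reward-noise variance

    def update(self, arm: int, reward: float) -> None:
        # Conjugate Gaussian update of the chosen arm's posterior from one in-context reward.
        precision = 1.0 / self.var[arm] + 1.0 / self.obs_var
        new_var = 1.0 / precision
        new_mu = new_var * (self.mu[arm] / self.var[arm] + reward / self.obs_var)
        self.mu[arm], self.var[arm] = new_mu, new_var

    def ucb_action(self, beta: float = 2.0) -> int:
        # Optimism in the face of uncertainty: posterior mean plus scaled posterior std.
        return int(np.argmax(self.mu + beta * np.sqrt(self.var)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_q = np.array([0.2, 0.5, 0.8])              # unknown true arm means
    ensemble = rng.normal(0.0, 0.5, size=(10, 3))   # stand-in for a pretrained (possibly poor) prior
    agent = EnsembleQPrior(ensemble)

    for t in range(200):
        a = agent.ucb_action()
        r = true_q[a] + rng.normal(0.0, 1.0)
        agent.update(a, r)

    print("posterior means:", np.round(agent.mu, 2))
```

Even when the ensemble prior is poor, the UCB bonus keeps under-explored arms attractive until their posteriors concentrate, which is the mechanism the abstract credits for recovery from suboptimal pretraining data.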