Transformers have demonstrated exceptional performance across a wide range of domains. While their ability to perform reinforcement learning in-context has been established both theoretically and empirically, their behavior in non-stationary environments remains less understood. We address this gap by showing that transformers can achieve near-optimal dynamic regret bounds in non-stationary settings. We prove that transformers can approximate strategies designed for non-stationary environments and can learn these approximations in the in-context learning setup. Our experiments further show that transformers match or even outperform existing expert algorithms in such environments.
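For context, dynamic regret measures performance against a time-varying sequence of per-round optima rather than a single fixed comparator, which is why it is the natural benchmark in non-stationary settings. A standard formulation (the notation $f_t$, $x_t$ here is illustrative and not necessarily the paper's own) is:
\[
\mathrm{D\text{-}Regret}(T) \;=\; \sum_{t=1}^{T} f_t(x_t) \;-\; \sum_{t=1}^{T} \min_{x}\, f_t(x),
\]
where $f_t$ is the loss at round $t$ and $x_t$ the learner's decision; the second sum tracks the best action at each round, so the comparator itself drifts as the environment changes.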