Reinforcement Learning (RL) remains a central optimisation framework in machine learning. Although RL agents can converge to optimal solutions, the definition of ``optimality'' depends on the environment's statistical properties. The Bellman equation, central to most RL algorithms, is formulated in terms of expected values of future rewards. When ergodicity is broken, however, long-term outcomes depend on the specific trajectory rather than on the ensemble average. In such settings, the ensemble average diverges from the time-average growth experienced by individual agents, and expected-value formulations yield systematically suboptimal policies. Prior studies have demonstrated that traditional RL architectures fail to recover the true optimum in non-ergodic environments. We extend this analysis to deep RL implementations and show that these, too, produce suboptimal policies under non-ergodic dynamics. Introducing explicit time dependence into the learning process corrects this limitation: by allowing the network's function approximation to incorporate temporal information, the agent can estimate value functions consistent with the process's intrinsic growth rate. This improvement does not require altering the environmental feedback, for example through reward transformations or modified objective functions, but arises naturally from the agent's exposure to temporal trajectories. Our results contribute to the growing body of research on reinforcement learning methods for non-ergodic systems.
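To make the ensemble/time-average divergence concrete, the following minimal sketch (an illustration of ours, not an excerpt from the paper; it assumes Python with NumPy) simulates a standard multiplicative coin-toss process: expected wealth grows at 1.05 per step, yet almost every individual trajectory decays at the time-average rate of roughly 0.949 per step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Multiplicative coin toss: wealth is multiplied by 1.5 on heads
# and by 0.6 on tails, each with probability 1/2.
UP, DOWN, P = 1.5, 0.6, 0.5
T, N = 1_000, 10_000  # steps per trajectory, number of trajectories

# Ensemble (expected-value) growth per step: 0.5*1.5 + 0.5*0.6 = 1.05,
# so expected wealth grows without bound.
ensemble_growth = P * UP + (1 - P) * DOWN

# Time-average growth per step: exp(E[ln factor]) = sqrt(0.9) ~ 0.949,
# so almost every individual trajectory decays to zero.
time_growth = np.exp(P * np.log(UP) + (1 - P) * np.log(DOWN))

# Simulation: the median trajectory tracks the time average, not the mean.
log_wealth = np.log(rng.choice([UP, DOWN], size=(N, T))).sum(axis=1)
print(f"ensemble growth/step  : {ensemble_growth:.4f}")  # 1.0500
print(f"time-avg growth/step  : {time_growth:.4f}")      # ~0.9487
print(f"median sim growth/step: {np.exp(np.median(log_wealth) / T):.4f}")
```

Under such dynamics, an agent that maximises expected reward keeps taking the bet even though it is ruinous along almost every trajectory; this is the failure mode that time-aware function approximation is intended to correct.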