Training deep reinforcement learning (RL) agents requires overcoming the highly unstable nonconvex stochastic optimization inherent in the trial-and-error mechanism. To tackle this challenge, we propose a physics-inspired optimization algorithm called relativistic adaptive gradient descent (RAD), which enhances long-term training stability. By conceptualizing neural network (NN) training as the evolution of a conformal Hamiltonian system, we present a universal framework for transferring the long-term stability of conformal symplectic integrators to iterative NN update rules, where the choice of kinetic energy governs the dynamical properties of the resulting optimization algorithm. By adopting relativistic kinetic energy, RAD incorporates principles from special relativity and bounds parameter updates below a finite speed, effectively mitigating the influence of abnormal gradients. Additionally, RAD models NN optimization as the evolution of a multi-particle system in which each trainable parameter acts as an independent particle with its own adaptive learning rate. We prove RAD's sublinear convergence under general nonconvex settings, where smaller gradient variance and larger batch sizes yield tighter convergence bounds. Notably, RAD reduces to the well-known adaptive moment estimation (ADAM) algorithm when its speed coefficient is set to one and its symplectic factor to a small positive value. Experimental results show that RAD outperforms nine baseline optimizers with five RL algorithms across twelve environments, including standard benchmarks and challenging scenarios. In particular, RAD achieves up to a 155.1% performance improvement over ADAM in Atari games, demonstrating its efficacy in stabilizing and accelerating RL training.
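To make the relativistic speed-limiting idea concrete, the following is a minimal illustrative sketch of a RAD-style per-parameter update, not the paper's exact rule: function and parameter names (`rad_step`, `speed_coef`) are assumptions, and bias correction is omitted. It shows how a square-root normalization bounds the effective step size per parameter, and how setting the speed coefficient to one recovers an Adam-like denominator, consistent with the reduction to ADAM stated above.

```python
import numpy as np

def rad_step(theta, grad, m, v, lr=1e-3, beta1=0.9, beta2=0.999,
             speed_coef=1.0, eps=1e-8):
    """One illustrative RAD-style update (hypothetical form; the paper's
    exact update rule may differ).

    Each trainable parameter is treated as an independent particle: the
    momentum-like term m is divided by a square-root factor that caps the
    effective update speed, mimicking a relativistic speed limit.
    """
    m = beta1 * m + (1 - beta1) * grad        # momentum accumulation
    v = beta2 * v + (1 - beta2) * grad ** 2   # per-parameter second moment
    # Relativistic-style normalization: the denominator bounds the step;
    # with speed_coef = 1 this becomes an Adam-like update.
    theta = theta - lr * m / np.sqrt(speed_coef * v + eps)
    return theta, m, v

# Usage: minimize f(theta) = theta**2 from theta = 5.0
theta, m, v = 5.0, 0.0, 0.0
for _ in range(100):
    theta, m, v = rad_step(theta, 2.0 * theta, m, v)
```

Because every parameter carries its own second-moment estimate `v`, each one effectively receives an individual adaptive learning rate, matching the multi-particle view described in the abstract.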