Traditional Reinforcement Learning (RL) policies are typically implemented with fixed control rates, often disregarding the impact of control rate selection. This can lead to inefficiencies, as the optimal control rate varies with task requirements. We propose the Multi-Objective Soft Elastic Actor-Critic (MOSEAC), an off-policy actor-critic algorithm that uses elastic time steps to dynamically adjust the control frequency. This approach reduces computational cost by selecting the lowest viable frequency. We show theoretically that MOSEAC converges and produces stable policies, and we validate our findings in a real-time 3D racing game, where MOSEAC significantly outperformed other variable-time-step approaches in energy efficiency and task effectiveness. Additionally, MOSEAC trained faster and more stably, showcasing its potential for real-world RL applications in robotics.
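To make the elastic time-step idea concrete, the following is a minimal sketch assuming a Gym-style environment with a fixed base control period. The `ElasticTimeWrapper` name, the duration mapping, and the reward weights `alpha` and `beta` are illustrative assumptions, not the authors' exact formulation: the agent's action is augmented with a duration choice, and the reward trades task performance against elapsed time so that the lowest viable control frequency is preferred.

```python
class ElasticTimeWrapper:
    """Hypothetical wrapper illustrating elastic time steps: the last
    action dimension selects the control period, and the reward adds a
    time penalty (a sketch, not MOSEAC's exact reward)."""

    def __init__(self, env, dt_min=0.02, dt_max=0.5, alpha=1.0, beta=0.1):
        self.env = env          # underlying fixed-rate environment (Gym-style step)
        self.dt_min = dt_min    # fastest allowed control period, in seconds
        self.dt_max = dt_max    # slowest allowed control period, in seconds
        self.alpha = alpha      # assumed weight on the task reward
        self.beta = beta        # assumed penalty per second of elapsed time

    def step(self, action):
        # Split the augmented action into the control command u and a
        # duration choice in [-1, 1], mapped to [dt_min, dt_max].
        u, dt_raw = action[:-1], action[-1]
        dt = self.dt_min + (dt_raw + 1.0) / 2.0 * (self.dt_max - self.dt_min)

        # Hold the command u for the chosen duration by repeating the
        # underlying environment's fixed base step (period dt_min).
        n_repeats = max(1, int(round(dt / self.dt_min)))
        task_reward, done, info = 0.0, False, {}
        for _ in range(n_repeats):
            obs, r, done, info = self.env.step(u)
            task_reward += r
            if done:
                break

        # Multi-objective reward: task reward minus a time penalty, so the
        # agent only acts fast when the task actually demands it.
        reward = self.alpha * task_reward - self.beta * dt
        return obs, reward, done, info
```

Any off-policy actor-critic (e.g., SAC) can then be trained on the wrapped environment with the augmented action space; the key design choice sketched here is that the policy itself decides when to act next, rather than running at a fixed rate.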