Reinforcement Learning (RL) has shown great potential in complex control tasks, particularly when combined with deep neural networks within the Actor-Critic (AC) framework. In practical applications, however, balancing exploration, learning stability, and sample efficiency remains a significant challenge. Established methods such as Soft Actor-Critic (SAC) and Proximal Policy Optimization (PPO) address these issues by incorporating entropy or relative entropy regularization, respectively, yet each still suffers from instability or low sample efficiency. In this paper, we propose the Conservative Soft Actor-Critic (CSAC) algorithm, which integrates both entropy and relative entropy regularization within the AC framework. CSAC improves exploration through entropy regularization while avoiding overly aggressive policy updates through relative entropy regularization, which penalizes large deviations from the previous policy. Evaluations on benchmark tasks and simulations of real-world robotic systems demonstrate that CSAC achieves significant improvements in stability and sample efficiency over existing methods. These findings suggest that CSAC is robust and well suited to control tasks in dynamic environments.
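For concreteness, the combination described above can be sketched as a policy objective of roughly the following form. This is a sketch in our own notation rather than the paper's exact formulation: here $\alpha$ denotes the entropy temperature, $\beta$ the conservatism coefficient, $\pi_{\text{old}}$ the policy before the current update, $Q^{\pi}$ the action-value function, and $\mathcal{D}$ the state distribution under which the expectation is taken.

\[
J(\pi) \;=\; \mathbb{E}_{s \sim \mathcal{D},\, a \sim \pi(\cdot \mid s)}\!\left[ Q^{\pi}(s, a) \right]
\;+\; \alpha\, \mathbb{E}_{s \sim \mathcal{D}}\!\left[ \mathcal{H}\big(\pi(\cdot \mid s)\big) \right]
\;-\; \beta\, \mathbb{E}_{s \sim \mathcal{D}}\!\left[ D_{\mathrm{KL}}\big(\pi(\cdot \mid s) \,\big\|\, \pi_{\text{old}}(\cdot \mid s)\big) \right]
\]

The entropy term $\mathcal{H}$ encourages exploration, as in SAC, while the KL term keeps each update close to $\pi_{\text{old}}$, in the spirit of PPO's relative entropy constraint; the latter is the conservatism that gives CSAC its name.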