We present GuidedSAC, a novel reinforcement learning (RL) algorithm that enables efficient exploration in large state-action spaces. GuidedSAC leverages large language models (LLMs) as supervisors that provide action-level guidance for the Soft Actor-Critic (SAC) algorithm. The LLM-based supervisor analyzes the most recent trajectory using state information and visual replays, issuing action-level interventions that steer the agent toward targeted exploration. Furthermore, we provide a theoretical analysis of GuidedSAC, proving that it preserves the convergence guarantees of SAC while improving convergence speed. Through experiments in both discrete and continuous control environments, including toy text tasks and complex MuJoCo benchmarks, we demonstrate that GuidedSAC consistently outperforms standard SAC and state-of-the-art exploration-enhanced variants (e.g., RND, ICM, and E3B) in both sample efficiency and final performance.