Value-based algorithms are a cornerstone of off-policy reinforcement learning due to their simplicity and training stability. However, their use has traditionally been restricted to discrete action spaces, as they rely on estimating Q-values for individual state-action pairs. In continuous action spaces, maximizing the Q-value over the entire action space becomes computationally infeasible. To address this, actor-critic methods are typically employed, where a critic is trained on off-policy data to estimate Q-values, and an actor is trained to maximize the critic's output. Despite their popularity, these methods often suffer from instability during training. In this work, we propose a purely value-based framework for continuous control that revisits structural maximization of Q-functions, introducing a set of key architectural and algorithmic choices to enable efficient and stable learning. We evaluate the proposed actor-free Q-learning approach on a range of standard simulation tasks, demonstrating performance and sample efficiency on par with state-of-the-art baselines, without the cost of learning a separate actor. In particular, in environments with constrained action spaces, where value functions are typically non-smooth, our method with structural maximization outperforms traditional actor-critic methods with gradient-based maximization. We have released our code at https://github.com/USC-Lira/Q3C.
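To make "structural maximization" concrete, here is a minimal sketch of one classic instance of the idea (a NAF-style quadratic Q-head): if the network outputs a state value, an action mean, and a diagonal positive curvature, the constrained argmax over a box action space is available in closed form, so no actor is needed. All names and shapes below are illustrative assumptions, not the Q3C architecture.

```python
import numpy as np

def quadratic_q(v, mu, d, a):
    """Q(s, a) = V(s) - 0.5 * sum_i d_i * (a_i - mu_i)^2,
    a diagonal quadratic advantage head (v, mu, d would come from a network)."""
    diff = a - mu
    return v - 0.5 * np.sum(d * diff**2, axis=-1)

def argmax_action(mu, low, high):
    """Closed-form maximizer over a box action space.
    With a *diagonal* curvature d > 0 the coordinates decouple, so
    clipping mu into [low, high] is the exact constrained argmax."""
    return np.clip(mu, low, high)

# Toy check: the clipped mean beats any random feasible action.
rng = np.random.default_rng(0)
v, mu, d = 1.0, np.array([0.7, -1.4]), np.array([2.0, 0.5])
low, high = -np.ones(2), np.ones(2)
a_star = argmax_action(mu, low, high)          # -> [0.7, -1.0]
best = quadratic_q(v, mu, d, a_star)
for _ in range(100):
    a = rng.uniform(low, high)
    assert quadratic_q(v, mu, d, a) <= best + 1e-9
```

The closed-form argmax is exact here only because the curvature is diagonal; richer structural choices trade off expressiveness of Q against tractability of the maximization, which is the design space the abstract alludes to.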