On-policy deep reinforcement learning remains a dominant paradigm for continuous control, yet standard implementations rely on Gaussian actors and relatively shallow MLP policies, often leading to brittle optimization when gradients are noisy and policy updates must be conservative. In this paper, we revisit policy representation as a first-class design choice for on-policy optimization. We study discretized categorical actors that represent each action dimension with a distribution over bins, yielding a policy objective that resembles a cross-entropy loss. Building on architectural advances from supervised learning, we further propose regularized actor networks, while keeping critic design fixed. Our results show that simply replacing the standard actor network with our discretized regularized actor yields consistent gains and achieves state-of-the-art performance across diverse continuous-control benchmarks.
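To make the discretized-actor idea concrete, the following is a minimal sketch (not the paper's exact architecture; the class name, bin layout, and helper methods are illustrative assumptions). Each action dimension gets a categorical distribution over `num_bins` evenly spaced atoms, so the policy's log-probability decomposes into a sum of per-dimension categorical log-likelihoods, mirroring a cross-entropy loss.

```python
import numpy as np

class DiscretizedCategoricalActor:
    """Illustrative sketch of a discretized categorical policy head.

    Each action dimension is represented by a categorical distribution
    over `num_bins` evenly spaced atoms in [low, high]. Logits would
    normally come from a neural network conditioned on the state.
    """

    def __init__(self, action_dim, num_bins, low=-1.0, high=1.0, seed=0):
        self.action_dim = action_dim
        self.num_bins = num_bins
        # Bin centers shared across action dimensions.
        self.atoms = np.linspace(low, high, num_bins)
        self.rng = np.random.default_rng(seed)

    def _log_softmax(self, logits):
        # Numerically stable log-softmax over the bin axis.
        shifted = logits - logits.max(axis=1, keepdims=True)
        return shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))

    def sample(self, logits):
        # logits: (action_dim, num_bins) -> one bin index per dimension,
        # mapped back to a continuous action via the bin centers.
        probs = np.exp(self._log_softmax(logits))
        idx = np.array([self.rng.choice(self.num_bins, p=p) for p in probs])
        return self.atoms[idx], idx

    def log_prob(self, logits, idx):
        # Cross-entropy-style objective: sum of per-dimension
        # log-probabilities of the chosen bins.
        lp = self._log_softmax(logits)
        return lp[np.arange(self.action_dim), idx].sum()
```

In an on-policy method such as PPO, `log_prob` would replace the Gaussian log-density in the surrogate objective, so the policy-gradient update takes the form of a weighted cross-entropy over bins rather than a Gaussian likelihood term.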