Scaling deep reinforcement learning networks is challenging and often results in degraded performance, yet the root causes of this failure mode remain poorly understood. Several recent works have proposed mechanisms to address this, but they are often complex and fail to identify the causes underlying the difficulty. In this work, we conduct a series of empirical analyses which suggest that the combination of non-stationarity with gradient pathologies, arising from suboptimal architectural choices, underlies the challenges of scale. We propose a series of direct interventions that stabilize gradient flow, enabling robust performance across a range of network depths and widths. Our interventions are simple to implement, compatible with well-established algorithms, and yield an effective mechanism that maintains strong performance even at large scales. We validate our findings on a variety of agents and suites of environments.
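To make the notion of a gradient-stabilizing architectural intervention concrete, the sketch below shows one common example: inserting normalization layers between the linear blocks of a value network so that activations, and hence gradients, stay well-scaled as depth and width grow. This is a minimal illustration of the general idea, not the paper's specific method; the class name, layer sizes, and the choice of LayerNorm are assumptions introduced here for exposition.

```python
# A minimal sketch of a gradient-stabilizing intervention (illustrative,
# not the authors' exact architecture): normalization between linear
# blocks keeps per-layer activation scales controlled as the network
# is made deeper and wider.
import torch
import torch.nn as nn

class NormalizedQNetwork(nn.Module):
    def __init__(self, obs_dim: int, num_actions: int,
                 width: int = 1024, depth: int = 4):
        super().__init__()
        layers = []
        in_dim = obs_dim
        for _ in range(depth):
            layers += [
                nn.Linear(in_dim, width),
                nn.LayerNorm(width),  # stabilizes activation (and gradient) scale
                nn.ReLU(),
            ]
            in_dim = width
        self.torso = nn.Sequential(*layers)
        self.head = nn.Linear(width, num_actions)

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.head(self.torso(obs))

# Example usage with hypothetical dimensions:
# q_net = NormalizedQNetwork(obs_dim=84, num_actions=18, width=2048, depth=8)
```

Because the normalization sits inside the torso rather than in the loss or the update rule, a sketch like this slots into standard value-based agents without changing the underlying algorithm, which is consistent with the abstract's claim that the interventions are compatible with well-established methods.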