Akin to neuroplasticity in human brains, the plasticity of deep neural networks enables their quick adaptation to new data. This makes plasticity particularly crucial for deep Reinforcement Learning (RL) agents: once plasticity is lost, an agent's performance will inevitably plateau because it can no longer improve its policy to account for changes in the data distribution, which are a necessary consequence of its learning process. Thus, developing well-performing and sample-efficient agents hinges on their ability to remain plastic during training. Furthermore, the loss of plasticity can be connected to many other issues plaguing deep RL, such as training instabilities, scaling failures, overestimation bias, and insufficient exploration. With this survey, we aim to provide an overview of the emerging research on plasticity loss for academics and practitioners of deep reinforcement learning. First, we propose a unified definition of plasticity loss based on recent works, relate it to definitions from the literature, and discuss metrics for measuring plasticity loss. Then, we categorize and discuss numerous possible causes of plasticity loss before reviewing currently employed mitigation strategies. Our taxonomy is the first systematic overview of the current state of the field. Lastly, we discuss prevalent issues within the literature, such as the need for broader evaluation, and provide recommendations for future research, such as gaining a better understanding of an agent's neural activity and behavior.