Akin to neuroplasticity in human brains, the plasticity of deep neural networks enables their quick adaptation to new data. This makes plasticity particularly crucial for deep Reinforcement Learning (RL) agents: once plasticity is lost, an agent's performance will inevitably plateau because it can no longer improve its policy to account for changes in the data distribution, which are a necessary consequence of its learning process. Thus, developing well-performing and sample-efficient agents hinges on their ability to remain plastic during training. Furthermore, the loss of plasticity can be connected to many other issues plaguing deep RL, such as training instabilities, scaling failures, overestimation bias, and insufficient exploration. With this survey, we aim to provide an overview of the emerging research on plasticity loss for academics and practitioners of deep reinforcement learning. First, we propose a unified definition of plasticity loss based on recent works, relate it to definitions from the literature, and discuss metrics for measuring plasticity loss. Then, we categorize and discuss numerous possible causes of plasticity loss before reviewing currently employed mitigation strategies. Our taxonomy is the first systematic overview of the current state of the field. Lastly, we discuss prevalent issues within the literature, such as the need for broader evaluation, and provide recommendations for future research, including gaining a better understanding of an agent's neural activity and behavior.