In response to the limitations of reinforcement learning and evolutionary algorithms (EAs) in complex problem-solving, Evolutionary Reinforcement Learning (EvoRL) has emerged as a synergistic solution. EvoRL integrates EAs with reinforcement learning, offering a promising avenue for training intelligent agents. This systematic review first surveys the technological background of EvoRL, examining the symbiotic relationship between EAs and reinforcement learning algorithms. We then delve into the challenges faced by both EAs and reinforcement learning, exploring their interplay and their impact on the efficacy of EvoRL. Furthermore, the review underscores open issues in the current EvoRL landscape related to scalability, adaptability, sample efficiency, adversarial robustness, and ethics and fairness. Finally, we propose future directions for EvoRL, emphasizing research avenues that strive to enhance self-adaptation and self-improvement, generalization, interpretability, and explainability, among others. Serving as a comprehensive resource for researchers and practitioners, this systematic review provides insights into the current state of EvoRL and offers a guide for advancing its capabilities in the ever-evolving landscape of artificial intelligence.
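To make the core EvoRL idea concrete, the following is a minimal, purely illustrative sketch: a population of policies is evaluated by the reinforcement-learning signal (episode return), and evolutionary operators (elitist selection and Gaussian mutation) produce the next generation. The one-parameter "policy", the toy fitness function, and all names here are assumptions for illustration, not any specific algorithm from the review.

```python
import random

# Hypothetical toy task: episode return is higher the closer the policy
# parameter is to a hidden optimum (a stand-in for an RL environment).
OPTIMUM = 0.7

def episode_return(theta):
    """Fitness of a one-parameter policy on the toy task."""
    return -(theta - OPTIMUM) ** 2

def evolve(pop_size=20, elites=5, generations=50, sigma=0.1, seed=0):
    """Elitist evolutionary loop over a population of policies."""
    rng = random.Random(seed)
    population = [rng.uniform(-1.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Evaluate every policy with the RL signal and rank them.
        ranked = sorted(population, key=episode_return, reverse=True)
        parents = ranked[:elites]
        # Keep the elites; refill the population with mutated parents.
        population = parents + [
            rng.choice(parents) + rng.gauss(0.0, sigma)
            for _ in range(pop_size - elites)
        ]
    return max(population, key=episode_return)

best = evolve()
```

In practice the parameter vector would be a neural-network policy and the fitness a (possibly noisy) return from environment rollouts; hybrid EvoRL methods additionally inject gradient-trained policies into the population, which this sketch omits.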