Multi-Robot Systems (MRS) offer clear advantages in complex environments. In applications such as search and rescue, environmental monitoring, and automated production, however, robots must often collaborate without a central control unit, which calls for an efficient and robust decentralized control mechanism that processes local information to guide each robot's behavior. In this work, we propose a new decentralized controller design method based on the Deep Q-Network (DQN) algorithm from deep reinforcement learning, aimed at improving the integration of local information and the robustness of multi-robot systems. The controller allows each robot to make decisions independently from its local observations, while a shared learning mechanism improves the system's overall collaborative efficiency and its adaptability to dynamic environments. In simulation experiments, we demonstrate the controller's effectiveness in improving task-execution efficiency, strengthening system fault tolerance, and enhancing environmental adaptability. We further examine how DQN hyperparameter tuning affects system performance, providing guidance for optimizing the controller design. Our work demonstrates the applicability of the DQN algorithm to decentralized multi-robot control and offers a new perspective on improving overall system performance and robustness through the integration of local information.
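The independent-decision / shared-learning design described above can be illustrated with a minimal sketch. This is not the paper's implementation: the class name, network size, hyperparameters, and replay-buffer details are all illustrative assumptions. The idea shown is parameter sharing, in which all robots train one Q-network from a pooled experience buffer, while each robot still selects actions epsilon-greedily from its own local observation.

```python
import random
from collections import deque

import numpy as np


class SharedDQN:
    """Illustrative one-hidden-layer Q-network with manual backprop.

    All robots share these weights and the replay buffer (the "shared
    learning mechanism"); each robot acts on its own local observation.
    """

    def __init__(self, obs_dim, n_actions, hidden=32, lr=1e-2, gamma=0.95, seed=0):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.1, (obs_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, n_actions))
        self.b2 = np.zeros(n_actions)
        self.lr, self.gamma = lr, gamma
        self.buffer = deque(maxlen=10_000)  # experience pooled from all robots

    def q_values(self, obs):
        """Forward pass; returns Q-values and the hidden activation."""
        h = np.maximum(0.0, obs @ self.W1 + self.b1)  # ReLU hidden layer
        return h @ self.W2 + self.b2, h

    def act(self, obs, epsilon=0.1):
        """Epsilon-greedy action from one robot's local observation."""
        if random.random() < epsilon:
            return random.randrange(self.W2.shape[1])
        q, _ = self.q_values(obs)
        return int(np.argmax(q))

    def remember(self, obs, action, reward, next_obs, done):
        self.buffer.append((obs, action, reward, next_obs, done))

    def train_step(self, batch_size=32):
        """One SGD pass over a minibatch sampled from shared experience."""
        if len(self.buffer) < batch_size:
            return
        for obs, a, r, next_obs, done in random.sample(self.buffer, batch_size):
            q, h = self.q_values(obs)
            target = r if done else r + self.gamma * np.max(self.q_values(next_obs)[0])
            err = q[a] - target                # TD error for the taken action
            grad_out = np.zeros_like(q)
            grad_out[a] = err                  # gradient of 0.5 * err^2 at the output
            dW2 = np.outer(h, grad_out)        # backprop through output layer
            dh = (self.W2 @ grad_out) * (h > 0)
            dW1 = np.outer(obs, dh)            # backprop through hidden layer
            self.W2 -= self.lr * dW2
            self.b2 -= self.lr * grad_out
            self.W1 -= self.lr * dW1
            self.b1 -= self.lr * dh
```

In use, every robot would call `act` with its own observation and push its transition into the shared buffer via `remember`, while `train_step` updates the single set of weights that all robots read, so experience gathered by one robot improves the policy of all of them.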