In this paper, we explore how to optimize task allocation for robot swarms in dynamic environments, emphasizing the necessity of formulating robust, flexible, and scalable strategies for robot cooperation. We introduce a novel framework using a decentralized partially observable Markov decision process (Dec_POMDP), specifically designed for distributed robot swarm networks. At the core of our methodology is the Local Information Aggregation Multi-Agent Deep Deterministic Policy Gradient (LIA_MADDPG) algorithm, which merges centralized training with distributed execution (CTDE). During the centralized training phase, a local information aggregation (LIA) module is meticulously designed to gather critical data from neighboring robots, enhancing decision-making efficiency. In the distributed execution phase, a strategy improvement method is proposed to dynamically adjust task allocation based on changing and partially observable environmental conditions. Our empirical evaluations show that the LIA module can be seamlessly integrated into various CTDE-based MARL methods, significantly enhancing their performance. Additionally, by comparing LIA_MADDPG with six conventional reinforcement learning algorithms and a heuristic algorithm, we demonstrate its superior scalability, rapid adaptation to environmental changes, and ability to maintain both stability and convergence speed. These results underscore LIA_MADDPG's outstanding performance and its potential to significantly improve dynamic task allocation in robot swarms through enhanced local collaboration and adaptive strategy execution.
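The neighbor-aggregation idea behind the LIA module can be sketched roughly as follows: each robot augments its own local observation with a pooled summary of observations from robots inside its communication range. This is a minimal illustrative sketch, not the paper's exact design; the mean-pooling choice, the fixed communication radius, and the function and parameter names (`aggregate_local_info`, `comm_radius`) are assumptions for illustration.

```python
import math

def aggregate_local_info(observations, positions, comm_radius):
    """For each robot i, mean-pool the observations of all robots within
    comm_radius of robot i (excluding itself) and concatenate the pooled
    vector with robot i's own observation, giving an LIA-style augmented
    input for the centralized critic.

    observations: list of feature vectors (lists of floats), one per robot
    positions:    list of [x, y] coordinates, one per robot
    comm_radius:  communication range defining each robot's neighborhood
    """
    n = len(observations)
    dim = len(observations[0])
    augmented = []
    for i in range(n):
        # Collect observations of robots within the communication radius.
        neigh = [observations[j] for j in range(n)
                 if j != i and math.dist(positions[i], positions[j]) <= comm_radius]
        if neigh:
            # Mean-pool neighbor features component-wise.
            pooled = [sum(col) / len(neigh) for col in zip(*neigh)]
        else:
            # No neighbors in range: pad with zeros of the same width.
            pooled = [0.0] * dim
        augmented.append(list(observations[i]) + pooled)
    return augmented
```

In practice the pooling step would be a learned, differentiable operator (e.g. attention weights) trained jointly with the MADDPG critic, but the data flow, local observations in, neighborhood-aware features out, is the same.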