Decentralized multi-agent learning (DML) enables collaborative model training while preserving data privacy. However, the inherent heterogeneity of agents' resources (computation, communication, and task size) can lead to substantial variation in training time. This heterogeneity creates a bottleneck: straggler effects lengthen the overall training time, and the spare resources of faster agents go to waste. To minimize training time in heterogeneous environments, we present ComDML, a communication-efficient training workload balancing scheme for decentralized multi-agent learning that balances the workload among agents in a fully decentralized manner. Leveraging local-loss split training, ComDML enables parallel updates in which slower agents offload part of their workload to faster agents. To minimize the overall training time, ComDML optimizes the workload balance by jointly considering the communication and computation capacities of agents, a problem that reduces to integer programming. A dynamic decentralized pairing scheduler is developed to efficiently pair agents and determine optimal offloading amounts. We prove that in ComDML the models of both slower and faster agents converge, for both convex and non-convex loss functions. Furthermore, extensive experiments on popular datasets (CIFAR-10, CIFAR-100, and CINIC-10) and their non-I.I.D. variants, with large models such as ResNet-56 and ResNet-110, demonstrate that ComDML significantly reduces the overall training time while maintaining model accuracy, compared to state-of-the-art methods. ComDML remains robust in heterogeneous environments, and privacy-preserving measures can be seamlessly integrated for enhanced data protection.
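To make the offloading idea concrete, the following is a minimal sketch, in Python, of how a slower agent might choose how much workload to offload to a faster partner by minimizing an estimated per-round time. All names (Agent, estimate_round_time, best_offload), the cost model, and the numeric constants are illustrative assumptions for exposition; they are not the paper's actual formulation, which jointly optimizes pairing and offloading via integer programming.

```python
# Hypothetical sketch of the offloading decision described in the abstract:
# a slower agent offloads a fraction of its workload to a faster agent, and the
# fraction is chosen to minimize the estimated wall-clock time of one round.
from dataclasses import dataclass


@dataclass
class Agent:
    name: str
    compute_speed: float   # samples processed per second (assumed)
    bandwidth: float       # MB/s link capacity (assumed)
    num_samples: int       # local dataset size

# Assumed size of the intermediate activations sent per offloaded sample.
ACTIVATION_MB_PER_SAMPLE = 0.05


def estimate_round_time(slow: Agent, fast: Agent, offload_frac: float) -> float:
    """Estimate one round's wall-clock time if `slow` offloads `offload_frac`
    of its samples' remaining computation to `fast`."""
    offloaded = slow.num_samples * offload_frac
    kept = slow.num_samples - offloaded
    # With local-loss split training, both agents update in parallel;
    # the round finishes when the slower of the two paths finishes.
    slow_time = kept / slow.compute_speed
    fast_time = (fast.num_samples + offloaded) / fast.compute_speed
    comm_time = (offloaded * ACTIVATION_MB_PER_SAMPLE
                 / min(slow.bandwidth, fast.bandwidth))
    return max(slow_time, fast_time + comm_time)


def best_offload(slow: Agent, fast: Agent, step: float = 0.1):
    """Grid-search the offload fraction that minimizes the estimated round time."""
    fracs = [i * step for i in range(int(1 / step) + 1)]
    return min(((f, estimate_round_time(slow, fast, f)) for f in fracs),
               key=lambda t: t[1])


if __name__ == "__main__":
    slow = Agent("edge-device", compute_speed=50.0, bandwidth=10.0, num_samples=5000)
    fast = Agent("workstation", compute_speed=400.0, bandwidth=50.0, num_samples=5000)
    frac, t = best_offload(slow, fast)
    print(f"offload {frac:.0%} of the slow agent's workload -> est. round time {t:.1f}s")
```

In the full method, such per-pair estimates would feed the decentralized pairing scheduler, which matches slower agents with faster ones across the whole population rather than considering a single pair in isolation.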