This work aims to improve the energy efficiency of decentralized learning by optimizing the mixing matrix, which controls the communication demands during the learning process. Through rigorous analysis based on a state-of-the-art decentralized learning algorithm, the problem is formulated as a bi-level optimization, with the lower level solved by graph sparsification. A solution with guaranteed performance is proposed for the special case of a fully connected base topology, and a greedy heuristic is proposed for the general case. Simulations based on a real topology and dataset show that the proposed solution can lower the energy consumption at the busiest node by 54%-76% while maintaining the quality of the trained model.