Decentralized Federated Learning (DFL) trains models in a collaborative and privacy-preserving manner while removing the risks of model centralization and mitigating communication bottlenecks. However, DFL faces challenges in efficient communication management and model aggregation within decentralized environments, especially with heterogeneous data distributions. Thus, this paper introduces ProFe, a novel communication optimization algorithm for DFL that combines knowledge distillation, prototype learning, and quantization techniques. ProFe distills knowledge from large local models to train smaller ones for aggregation, incorporates prototypes to better learn unseen classes, and applies quantization to reduce the data transmitted during communication rounds. The performance of ProFe has been validated and compared with the literature on the MNIST, CIFAR10, and CIFAR100 benchmark datasets. Results show that the proposed algorithm reduces communication costs by up to ~40-50% while maintaining or improving model performance. However, it increases training time by ~20% due to the added complexity, introducing a trade-off.
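For illustration, the sketch below shows minimal PyTorch-style forms of the three ingredients the abstract names: a distillation loss from a large local (teacher) model to a smaller (student) model, a prototype loss pulling embeddings toward class prototypes, and 8-bit quantization of parameters before transmission. Every function name, signature, and hyperparameter here is an assumption made for exposition, not ProFe's actual implementation.

```python
# Illustrative sketch only; names and hyperparameters are assumptions,
# not the paper's actual code.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    return F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)

def prototype_loss(features, labels, prototypes):
    """Pull each embedding toward its class prototype (one possible form)."""
    # prototypes: tensor of shape [num_classes, feature_dim]
    return F.mse_loss(features, prototypes[labels])

def quantize_int8(tensor):
    """Uniform 8-bit quantization of a parameter tensor before transmission."""
    scale = tensor.abs().max() / 127.0 + 1e-12
    q = torch.clamp((tensor / scale).round(), -127, 127).to(torch.int8)
    return q, scale  # receiver dequantizes as q.float() * scale

# A combined local objective could then weight the terms, e.g.
# (alpha and beta are assumed hyperparameters):
#   loss = F.cross_entropy(student_logits, labels) \
#          + alpha * distillation_loss(student_logits, teacher_logits) \
#          + beta * prototype_loss(features, labels, prototypes)
```

In this reading, only the quantized small-model parameters travel over the network, which is where the reported communication savings would come from, while the extra teacher forward passes and prototype bookkeeping account for the added training time.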