Robust machine learning (ML) models can be developed by leveraging large volumes of data and distributing the computation across many devices or servers. Federated learning (FL) is an ML technique that achieves this by using cloud infrastructure to enable collaborative model training across a network of decentralized devices. Beyond distributing the computational load, FL simultaneously addresses privacy concerns and reduces communication costs. To protect user privacy, FL has participants send model updates instead of large quantities of raw, potentially confidential data: each participant trains an ML model locally on its own data and uploads the resulting weights and gradients to the cloud, where they are aggregated into a global model. This strategy is also advantageous in environments with limited bandwidth or high communication costs, since it avoids transmitting large data volumes. With growing data volumes and rising privacy concerns, alongside the emergence of large-scale ML models such as Large Language Models (LLMs), FL presents itself as a timely and relevant solution. It is therefore essential to review current FL algorithms to guide future research that meets rapidly evolving ML demands. This survey provides a comprehensive analysis and comparison of the most recent FL algorithms, evaluating them on several fronts, including mathematical frameworks, privacy protection, resource allocation, and applications. Beyond summarizing existing FL methods, this survey identifies potential gaps, open areas, and future challenges based on the performance reports and algorithms used in recent studies, enabling researchers to readily identify existing limitations in the FL field for further exploration.
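The local-train-then-aggregate workflow described above can be sketched as a minimal federated averaging (FedAvg) loop. This is an illustrative sketch only, not an algorithm from the surveyed papers: the linear model, learning rate, round counts, and toy data below are all assumptions chosen for brevity.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient steps of linear
    regression (a stand-in for any local model). Only the updated
    weights leave the device, never the raw (X, y) data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def fed_avg_round(global_w, client_data):
    """Server-side aggregation: average client updates, weighted by
    each client's local dataset size (the FedAvg rule)."""
    updates, sizes = [], []
    for X, y in client_data:
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    return np.average(updates, axis=0, weights=np.array(sizes, float))

# Toy run: three clients, each holding private samples generated
# from the same ground-truth weights w_true (hypothetical data).
rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0])
clients = []
for n in (30, 50, 20):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=n)))

w = np.zeros(2)
for _ in range(20):  # communication rounds
    w = fed_avg_round(w, clients)
print(w)  # the global model approaches w_true without pooling raw data
```

Each communication round transmits only a small weight vector per client rather than the clients' datasets, which is the bandwidth and privacy advantage the abstract describes.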