Federated Learning (FL) has become a popular interdisciplinary research area spanning applied mathematics and information sciences. Mathematically, FL aims to collaboratively optimize aggregate objective functions over distributed datasets while satisfying a variety of privacy and system constraints. Unlike conventional distributed optimization methods, FL must address several specific issues (e.g., non-i.i.d. data distributions and differentially private noise), which pose a set of new challenges in problem formulation, algorithm design, and convergence analysis. In this paper, we systematically review existing FL optimization research, including its assumptions, formulations, methods, and theoretical results. Potential future directions are also discussed.
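For concreteness, a minimal sketch of the aggregate objective commonly studied in this line of work (the notation below is illustrative and not drawn from this abstract):
\begin{equation*}
  \min_{x \in \mathbb{R}^d} \; f(x) \;=\; \sum_{k=1}^{K} p_k F_k(x),
  \qquad
  F_k(x) \;=\; \mathbb{E}_{\xi \sim \mathcal{D}_k}\!\big[\ell(x;\xi)\big],
\end{equation*}
where $K$ clients hold local data distributions $\mathcal{D}_k$ (possibly non-i.i.d.), $p_k \ge 0$ with $\sum_{k} p_k = 1$ are aggregation weights (e.g., proportional to local sample counts), and $\ell$ is a per-sample loss.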