Federated learning is an effective machine learning framework for handling heterogeneous big data while protecting privacy. Federated learning methods with regularization can control the amount of communication between the central server and the local machines. Stochastic gradient descent is often used to implement such methods on heterogeneous big data in order to reduce communication costs. In this paper, we consider one such algorithm, Loopless Local Gradient Descent, which reduces the expected number of communications by controlling a probability level. We improve the method by allowing flexible step sizes, and we carry out a novel convergence analysis of the algorithm in a non-convex setting in addition to the standard strongly convex setting. In the non-convex setting, we derive rates of convergence when the smooth objective function satisfies a Polyak-{\L}ojasiewicz condition. When the objective function is strongly convex, a necessary and sufficient condition for convergence in expectation is presented.
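For concreteness, here is a minimal sketch of a loopless update and of the Polyak-{\L}ojasiewicz condition, in notation of our own choosing (the symbols $F$, $f$, $\psi$, $\lambda$, $p$, $\alpha_k$, $\mu$ below are illustrative assumptions, not fixed by the abstract). Writing $F = f + \lambda\psi$ for the regularized objective, with $f$ the average of the local losses and $\psi$ a regularizer penalizing disagreement between the local models, one loopless step with flexible step size $\alpha_k$ can be sketched as
\[
x^{k+1} =
\begin{cases}
x^k - \dfrac{\alpha_k}{1-p}\,\nabla f(x^k), & \text{with probability } 1-p \quad \text{(local step)},\\[6pt]
x^k - \dfrac{\alpha_k \lambda}{p}\,\nabla \psi(x^k), & \text{with probability } p \quad \text{(communication step)},
\end{cases}
\]
so that the stochastic direction is unbiased for $\nabla F(x^k)$ and a communication (aggregation) step occurs only with probability $p$. The Polyak-{\L}ojasiewicz condition on a smooth objective asks for a constant $\mu > 0$ such that
\[
\tfrac{1}{2}\,\|\nabla F(x)\|^2 \;\ge\; \mu\,\bigl(F(x) - F^\ast\bigr) \quad \text{for all } x,
\]
where $F^\ast$ denotes the minimum value; this is weaker than strong convexity yet still yields linear-type rates.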