Distributed optimization is fundamental to modern machine learning applications such as federated learning, but existing methods often struggle with ill-conditioned problems and face a tradeoff between stability and speed. We introduce fractional order distributed optimization (FrODO), a theoretically grounded framework that incorporates fractional-order memory terms to improve convergence in challenging optimization landscapes. Our approach achieves provable linear convergence on any strongly connected network. Empirical validation suggests that FrODO converges up to 4 times faster than baselines on ill-conditioned problems and yields a 2-3 times speedup in federated neural network training, while maintaining stability and theoretical guarantees.
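To make the core idea concrete, the sketch below illustrates one way a fractional-order memory term can enter a consensus-based distributed update: each agent mixes its iterate with its neighbours' via a doubly stochastic matrix and then descends along a weighted combination of its past gradients. This is a minimal illustration under stated assumptions, not the paper's actual FrODO algorithm; the Grünwald–Letnikov weighting, the function names, and the specific update form are all assumptions introduced here for exposition.

```python
import numpy as np

def gl_weights(alpha, memory):
    """Grunwald-Letnikov coefficients c_0, ..., c_{memory-1} of order alpha,
    via the standard recursion c_k = c_{k-1} * (k - 1 - alpha) / k.
    (Assumed weighting; the paper may define its memory term differently.)"""
    c = np.empty(memory)
    c[0] = 1.0
    for k in range(1, memory):
        c[k] = c[k - 1] * (k - 1 - alpha) / k
    return c

def fractional_memory_step(X, grad_history, W, lr, alpha, memory):
    """One synchronous iteration of a hypothetical fractional-memory
    distributed gradient update.

    X            : (n_agents, dim) current local iterates
    grad_history : list of (n_agents, dim) local gradients, most recent first
    W            : (n_agents, n_agents) doubly stochastic mixing matrix
    """
    c = gl_weights(alpha, min(memory, len(grad_history)))
    # Fractionally weighted combination of each agent's past gradients.
    frac_grad = sum(ck * g for ck, g in zip(c, grad_history))
    # Consensus mixing with neighbours, then a descent step along the memory term.
    return W @ X - lr * frac_grad
```

As a usage note, calling `fractional_memory_step` inside a loop that prepends each new local gradient to `grad_history` recovers plain distributed gradient descent when `alpha = 1` and `memory = 1`, while `alpha` between 0 and 1 with a longer memory window weights older gradients with slowly decaying coefficients, which is the kind of memory effect the abstract attributes to FrODO.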