Decentralized optimization has become a standard paradigm for solving large-scale decision-making problems and training large machine learning models without centralizing data. However, this paradigm introduces new privacy and security risks, as malicious agents may infer private data or impair model accuracy. Over the past decade, significant advances have been made in developing secure decentralized optimization and learning frameworks and algorithms. This survey provides a comprehensive tutorial on these advances. We begin with the fundamentals of decentralized optimization and learning, highlighting centralized aggregation and distributed consensus as the key modules exposed to security risks in federated and distributed optimization, respectively. Next, we focus on privacy-preserving algorithms, detailing three cryptographic tools and their integration into decentralized optimization and learning systems. We then examine resilient algorithms, exploring the design and analysis of the resilient aggregation and consensus protocols that support these systems. We conclude the survey by discussing current trends and potential future directions.