Federated learning (FL) enables collaborative training without pooling raw data, but standard FL relies on a central coordinator, which introduces a single point of failure and concentrates trust in the orchestration infrastructure. Decentralized federated learning (DFL) removes the coordinator and replaces client-server orchestration with peer-to-peer coordination, making learning dynamics topology-dependent and reshaping the associated security, privacy, and systems trade-offs. This survey systematically reviews DFL methods from 2018 through early 2026 and organizes them into two architectural families: traditional distributed FL and blockchain-based FL. We then propose a unified, challenge-driven taxonomy that maps both families to the core bottlenecks they primarily address, and we summarize prevailing evaluation practices and their limitations, exposing gaps in the literature. Finally, we distill lessons learned and outline research directions, emphasizing topology-aware threat models, privacy notions that reflect decentralized exposure, incentive mechanisms robust to manipulation, and the need to explicitly define whether the objective is a single global model or personalized solutions in decentralized settings.