Decentralized Federated Learning (DFL) eliminates the need for a central aggregator, but it can expose communication patterns that reveal participant identities. This work presents UnlinkableDFL, a DFL framework that combines a peer-based mixnet with fragment-based model aggregation to ensure unlinkability in fully decentralized settings. Model updates are divided into encrypted fragments, sent over separate multi-hop paths, and aggregated without using any identity information. A theoretical analysis indicates that relay and end-to-end unlinkability improve with larger mixing sets and longer paths, while convergence remains similar to standard FedAvg. A prototype implementation evaluates learning performance, latency, unlinkability, and resource usage. The results show that UnlinkableDFL converges reliably and adapts to node churn. Communication latency emerges as the main overhead, while memory and CPU usage stay moderate. These findings illustrate the balance between anonymity and system efficiency, demonstrating that strong unlinkability can be maintained in decentralized learning workflows.
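The fragment-based aggregation described above can be illustrated with a minimal sketch. Additive secret sharing stands in here for the paper's encryption scheme (an assumption, not the actual UnlinkableDFL construction): each update is split into random-looking fragments that sum to the original, so an aggregator summing fragments from all peers recovers the averaged update without needing any sender identity. The function names `split_update` and `aggregate` are illustrative only.

```python
import random

def split_update(update, n_fragments):
    """Additively split a model update into fragments that sum back to it.

    Each fragment alone looks like noise, sketching how fragments sent over
    separate multi-hop paths carry no standalone information.
    """
    fragments = []
    remainder = list(update)
    for _ in range(n_fragments - 1):
        noise = [random.gauss(0.0, 1.0) for _ in update]
        fragments.append(noise)
        remainder = [r - x for r, x in zip(remainder, noise)]
    fragments.append(remainder)  # last fragment makes the sum exact
    return fragments

def aggregate(all_fragments, n_peers):
    """Sum fragments element-wise and average over peers.

    No fragment is tied to a sender, so aggregation uses no identity
    information, mirroring the identity-free aggregation in the abstract.
    """
    dim = len(all_fragments[0])
    total = [0.0] * dim
    for frag in all_fragments:
        total = [t + f for t, f in zip(total, frag)]
    return [t / n_peers for t in total]
```

Summing the fragments of one update reconstructs it, and pooling fragments from two peers yields the FedAvg-style mean of their updates, regardless of which path delivered each fragment.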