Federated learning (FL) enables collaborative model training while preserving data privacy, yet both centralized and decentralized approaches face challenges in scalability, security, and update validation. We propose ZK-HybridFL, a secure decentralized FL framework that integrates a directed acyclic graph (DAG) ledger with dedicated sidechains and zero-knowledge proofs (ZKPs) for privacy-preserving model validation. The framework uses event-driven smart contracts and an oracle-assisted sidechain to verify local model updates without exposing sensitive data. A built-in challenge mechanism efficiently detects adversarial behavior. In experiments on image classification and language modeling tasks, ZK-HybridFL achieves faster convergence, higher accuracy, lower perplexity, and reduced latency compared to Blade-FL and ChainFL. It remains robust against substantial fractions of adversarial and idle nodes, supports sub-second on-chain verification with efficient gas usage, and prevents invalid updates and orphanage-style attacks. This makes ZK-HybridFL a scalable and secure solution for decentralized FL across diverse environments.
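The validate-then-challenge flow described above can be caricatured in a few lines. This is a purely illustrative sketch, not the paper's implementation: the `commit`/`SidechainVerifier` names are hypothetical, and a hash commitment stands in for a real zero-knowledge proof (an actual deployment would run a SNARK verifier in the on-chain smart contract).

```python
import hashlib

def commit(update: bytes, nonce: bytes) -> str:
    # Hash commitment standing in for a real ZKP: the ledger stores
    # only this digest, never the raw local model update.
    return hashlib.sha256(update + nonce).hexdigest()

class SidechainVerifier:
    """Toy oracle-assisted verifier (hypothetical): accepts an update
    only if the submitted commitment matches the revealed data."""
    def __init__(self):
        self.accepted = []  # approved commitments (new DAG tips)
        self.flagged = []   # commitments rejected via the challenge step

    def submit(self, update: bytes, nonce: bytes, commitment: str) -> bool:
        if commit(update, nonce) != commitment:
            self.flagged.append(commitment)  # challenge succeeds: adversarial node caught
            return False
        self.accepted.append(commitment)     # update appended without exposing raw data
        return True

verifier = SidechainVerifier()
honest = commit(b"grad:0.1,-0.2", b"n1")
ok = verifier.submit(b"grad:0.1,-0.2", b"n1", honest)  # honest node: accepted
bad = verifier.submit(b"grad:9.9,9.9", b"n2", honest)  # mismatched update: challenged
```

The point of the sketch is only the control flow: verification happens against a commitment, so the validator never needs the plaintext update to decide acceptance.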