Federated learning (FL) enables multiple participants to collaboratively train machine learning models while keeping their data private and secure. Blockchain technology further strengthens FL by providing stronger security, a transparent audit trail, and protection against data tampering and model manipulation. Most blockchain-secured FL systems rely on conventional consensus mechanisms: Proof-of-Work (PoW) is computationally expensive, while Proof-of-Stake (PoS) improves energy efficiency but risks centralization because it inherently favors participants with larger stakes. Recently, learning-based consensus has emerged as an alternative that replaces cryptographic puzzles with model training to save energy. However, this approach introduces privacy vulnerabilities, as the training process may inadvertently expose sensitive information through gradient sharing and model updates. To address these challenges, we propose a novel Zero-Knowledge Proof of Training (ZKPoT) consensus mechanism. ZKPoT leverages the zero-knowledge succinct non-interactive argument of knowledge (zk-SNARK) protocol to validate participants' contributions based on their model performance, eliminating the inefficiencies of traditional consensus methods and mitigating the privacy risks of learning-based consensus. We analyze our system's security, demonstrating that it prevents the disclosure of sensitive information about local models or training data to untrusted parties throughout the FL process. Extensive experiments show that our system is robust against privacy and Byzantine attacks without sacrificing accuracy or utility, scalable across various blockchain settings, and efficient in both computation and communication.