Privacy concerns in machine learning have grown with the increasing reliance on sensitive user data for training large-scale models. This paper introduces a framework that combines Probably Approximately Correct (PAC) Privacy with zero-knowledge proofs (ZKPs) to provide verifiable privacy guarantees in trustless computing environments. Our approach addresses a limitation of traditional privacy-preserving techniques, particularly in cloud-based systems: users can verify both the correctness of a computation and the proper application of privacy-preserving noise. We leverage non-interactive ZKP schemes to generate proofs attesting that the PAC Privacy mechanism was implemented correctly, while preserving the confidentiality of proprietary systems. Our results demonstrate the feasibility of verifiable PAC Privacy for outsourced computation, offering a practical way to maintain trust in privacy-preserving machine learning and database systems while ensuring computational integrity.
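To make the PAC Privacy mechanism concrete, the sketch below shows the general idea for a simple mean query: the computation is repeated on random subsamples to empirically estimate how strongly the output depends on the underlying data, and Gaussian noise scaled to that empirical variance is then added to the released answer. This is a minimal illustrative sketch under our own assumptions, not the paper's implementation; the function name `pac_private_mean` and the knobs `n_trials` and `security_param` are hypothetical.

```python
import numpy as np

def pac_private_mean(dataset, n_trials=200, security_param=1.0, seed=0):
    """Illustrative PAC-Privacy-style noise calibration for a mean query.

    Hypothetical sketch: re-runs the deterministic computation on random
    half-size subsamples to estimate output variability, then adds
    Gaussian noise whose scale is proportional to that empirical
    standard deviation. n_trials and security_param are illustrative.
    """
    rng = np.random.default_rng(seed)
    data = np.asarray(dataset, dtype=float)
    outputs = []
    for _ in range(n_trials):
        # Probe sensitivity: recompute the query on a random subsample.
        sub = rng.choice(data, size=max(1, len(data) // 2), replace=False)
        outputs.append(sub.mean())
    # Calibrate noise to the observed spread of outputs across subsamples.
    sigma = security_param * np.std(outputs)
    return float(data.mean() + rng.normal(0.0, sigma))
```

A verifiable version of such a mechanism would additionally require the server to produce a ZKP that the noise was in fact sampled and added as specified, which is the gap the paper's framework targets.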