With the increasing emphasis on privacy regulations such as GDPR, protecting individual privacy and ensuring compliance have become critical concerns for both individuals and organizations. Privacy-preserving machine learning (PPML) allows secure data analysis while safeguarding sensitive information, enabling organizations to extract valuable insights from data without compromising privacy. Secure multi-party computation (MPC) is a key tool in PPML: it allows multiple parties to jointly compute functions without revealing their private inputs, making it essential in multi-server environments. We address the performance overhead of existing maliciously secure protocols, particularly over finite rings such as $\mathbb{Z}_{2^\ell}$, by introducing an efficient protocol for secure linear function evaluation. We implement our maliciously secure MPC protocol on GPUs, significantly improving its efficiency and scalability, and extend it to handle both linear and non-linear layers, ensuring compatibility with a wide range of machine-learning models. Finally, we integrate our protocol into the inference workflow and comprehensively evaluate it on machine-learning models ranging from simple architectures to convolutional neural networks (CNNs), demonstrating secure and efficient inference.
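To illustrate why linear function evaluation over $\mathbb{Z}_{2^\ell}$ is the natural building block for ring-based MPC, the following minimal sketch shows additive secret sharing modulo $2^\ell$ and local evaluation of a public affine map $a \cdot x + b$ on the shares. This is a toy semi-honest illustration only, not the maliciously secure protocol described above; the function names and the three-party setup are assumptions for the example.

```python
import secrets

ELL = 64
MOD = 1 << ELL  # the ring Z_{2^ell} with ell = 64

def share(x, n=3):
    """Split x into n additive shares that sum to x mod 2^ell."""
    shares = [secrets.randbelow(MOD) for _ in range(n - 1)]
    shares.append((x - sum(shares)) % MOD)
    return shares

def reconstruct(shares):
    """Recombine additive shares into the secret."""
    return sum(shares) % MOD

def eval_linear_on_shares(a, b, shares):
    """Evaluate the public affine map a*x + b locally on shares.

    Each party multiplies its own share by a; one designated party
    adds the public constant b. No communication is needed, which is
    why linear layers are cheap in share-based MPC.
    """
    return [(a * s + (b if i == 0 else 0)) % MOD
            for i, s in enumerate(shares)]

x = 123456789
a, b = 7, 42
xs = share(x)
ys = eval_linear_on_shares(a, b, xs)
assert reconstruct(ys) == (a * x + b) % MOD
```

Because the affine map distributes over the share sum, the parties never exchange messages for linear layers; the communication (and the malicious-security machinery) is concentrated in multiplications and non-linear layers.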