The widespread deployment of products powered by machine learning models is raising concerns about data privacy and information security worldwide. Federated Learning was proposed as a privacy-preserving alternative to conventional training: it allows multiple clients to share model knowledge without disclosing their private data. A complementary approach, Fully Homomorphic Encryption (FHE), is a quantum-safe cryptographic scheme that enables computation to be performed directly on encrypted model weights. In practice, however, such mechanisms often incur significant computational overhead and can introduce new security vulnerabilities. Novel computing paradigms, such as analog, quantum, and specialized digital hardware, offer opportunities to implement privacy-preserving machine learning systems while strengthening security and mitigating performance loss. This work instantiates these ideas by applying an FHE scheme to a Federated Learning neural network architecture that integrates both classical and quantum layers.