Ensuring data privacy in machine learning models is critical, particularly in distributed settings where model gradients are typically shared among multiple parties to enable collaborative learning. Motivated by the increasing success of recovering input data from the gradients of classical models, this study addresses a central question: How hard is it to recover the input data from the gradients of quantum machine learning models? Focusing on variational quantum circuits (VQC) as learning models, we uncover the crucial role played by the dynamical Lie algebra (DLA) of the VQC ansatz in determining privacy vulnerabilities. While the DLA has previously been linked to the classical simulatability and trainability of VQC models, this work establishes, for the first time, its connection to the privacy of VQC models. In particular, we show that properties conducive to the trainability of VQCs, such as a polynomial-sized DLA, also facilitate the extraction of detailed snapshots of the input. We term this a weak privacy breach, as the snapshots enable training VQC models for distinct learning tasks without direct access to the original input. Further, we investigate the conditions for a strong privacy breach, in which the original input data can be recovered from these snapshots by classical or quantum-assisted polynomial-time methods. We establish conditions on the encoding map, such as its classical simulatability, its overlap with the DLA basis, and its Fourier frequency characteristics, that enable such a privacy breach of VQC models. Our findings thus play a crucial role in detailing the prospects of a quantum privacy advantage by guiding the requirements for designing quantum machine learning models that balance trainability with robust privacy protection.
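To make the threat model concrete, the following is a minimal, self-contained sketch of a gradient-inversion attack on a small VQC, in the spirit of classical "leakage from gradients" attacks. It is not the method analyzed in the paper: the two-qubit circuit, angle encoding, squared loss, and Nelder-Mead gradient-matching optimizer below are illustrative assumptions chosen for brevity.

```python
# Illustrative sketch (assumed setup, not the paper's construction): a client shares
# the parameter gradient of a small VQC, and an attacker recovers the classical input
# by matching gradients.
import pennylane as qml
from pennylane import numpy as np
from scipy.optimize import minimize

n_qubits = 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(x, theta):
    # Angle encoding of the classical input x, followed by a shallow ansatz.
    for i in range(n_qubits):
        qml.RY(x[i], wires=i)
    for i in range(n_qubits):
        qml.RX(theta[i], wires=i)
    qml.CNOT(wires=[0, 1])
    return qml.expval(qml.PauliZ(0))

def loss(theta, x, y):
    # Simple squared loss on a single expectation value.
    return (circuit(x, theta) - y) ** 2

theta = np.array([0.3, -0.7], requires_grad=True)
x_true = np.array([0.9, 1.4], requires_grad=False)  # the private input
y = 0.0

# Gradient the client would share with the server during collaborative training.
shared_grad = qml.grad(loss, argnum=0)(theta, x_true, y)

def grad_match(x_dummy):
    # Attacker's objective: distance between the gradient induced by a dummy input
    # and the shared gradient.
    g = qml.grad(loss, argnum=0)(theta, np.array(x_dummy, requires_grad=False), y)
    return float(np.sum((g - shared_grad) ** 2))

result = minimize(grad_match, x0=[0.0, 0.0], method="Nelder-Mead")
print("recovered input:", result.x, "true input:", x_true)
```

For a toy circuit of this size the matching problem is low-dimensional and often recovers the input up to the symmetries of the encoding; the paper's analysis concerns when and how such recovery scales to realistic VQC models via the structure of the DLA.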