Quantum Generative Adversarial Networks (qGANs) are at the forefront of image-generating quantum machine learning models. As demand grows for training and inferencing quantum machine learning models on Noisy Intermediate-Scale Quantum (NISQ) devices, the number of third-party vendors offering quantum hardware as a service is expected to rise. This expansion introduces the risk that untrusted vendors could steal proprietary information from quantum machine learning models. To address this concern, we propose a novel watermarking technique that exploits the noise signature embedded during the training phase of a qGAN as a non-invasive watermark. The watermark is identifiable in the images generated by the qGAN, allowing us to trace the specific quantum hardware used during training and thereby providing strong proof of ownership. To further enhance robustness against attack, we propose training qGANs on a sequence of multiple quantum hardware platforms, embedding a complex watermark that comprises the noise signatures of all the training hardware and is difficult for adversaries to replicate. We also develop a machine learning classifier to extract this watermark robustly, identifying the training hardware (or the suite of hardware) from the images generated by the qGAN and validating the authenticity of the model. We note that the watermark signature is robust against inferencing on hardware different from that used for training. We obtain watermark extraction accuracies of 100% and ~90% when the qGAN is trained on individual and multiple quantum hardware setups, respectively (with inferencing on different hardware). Since parameter evolution during training is strongly modulated by quantum noise, the proposed watermark can be extended to other quantum machine learning models as well.
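The classifier-based watermark extraction described above can be illustrated with a minimal sketch. This is a hypothetical toy model, not the paper's implementation: each backend's noise signature is *simulated* as a fixed offset on flattened image features, and a simple nearest-centroid rule stands in for the trained machine learning classifier that maps generated images to the hardware used for training.

```python
import numpy as np

# Hypothetical sketch of hardware identification from qGAN outputs.
# Assumption: each backend's noise imprints a distinct, stable bias on
# generated-image statistics; here that bias is simulated with
# synthetic Gaussian offsets rather than real qGAN samples.
rng = np.random.default_rng(0)

n_backends = 3          # e.g. three NISQ devices used for training
n_train, n_test = 200, 50
dim = 16                # flattened 4x4 "image" features

# Simulated per-backend noise signatures (fixed feature offsets).
signatures = rng.normal(0.0, 1.0, size=(n_backends, dim))

def sample(backend, n):
    """Draw n synthetic 'generated images' for a given backend."""
    return signatures[backend] + rng.normal(0.0, 0.5, size=(n, dim))

# Nearest-centroid extractor: one centroid per backend, learned from
# training samples; a new image is labeled by its closest centroid.
centroids = np.stack(
    [sample(b, n_train).mean(axis=0) for b in range(n_backends)]
)

def predict(x):
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

correct = sum(
    predict(x) == b for b in range(n_backends) for x in sample(b, n_test)
)
accuracy = correct / (n_backends * n_test)
print(f"watermark extraction accuracy: {accuracy:.2f}")
```

In the paper's setting the classifier operates on real qGAN-generated images and must also tolerate the extra noise introduced by inferencing on a different backend; the toy model only conveys the core idea that distinct hardware noise signatures are separable in feature space.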