Box-free model watermarking is an emerging technique for safeguarding the intellectual property of deep learning models, particularly those for low-level image processing tasks. Existing works have verified and improved its effectiveness in several respects. In this paper, however, we reveal that box-free model watermarking is vulnerable to removal attacks, even under a realistic threat model in which both the protected model and the watermark extractor are black boxes. Under this setting, we carry out three studies. 1) We develop an extractor-gradient-guided (EGG) remover and show its effectiveness when the extractor uses ReLU activations only. 2) More generally, for an unknown extractor, we leverage adversarial attacks and design the EGG remover using estimated gradients. 3) Under the most stringent condition, in which the extractor is inaccessible, we design a transferable remover based on a set of private proxy models. In all cases, the proposed removers successfully remove the embedded watermarks while preserving the quality of the processed images, and we further demonstrate that the EGG remover can even replace the watermarks. Extensive experimental results verify the effectiveness and generalizability of the proposed attacks, revealing the vulnerability of existing box-free methods and calling for further research.
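The core intuition behind the extractor-gradient-guided (EGG) attack can be illustrated with a minimal toy sketch, assuming a hypothetical linear "extractor" in place of the real network: descend the extractor's gradient to suppress the extracted watermark while a fidelity term keeps the attacked image close to the original. All names, dimensions, and hyperparameters below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Toy sketch of gradient-guided watermark removal (hypothetical setup).
# A random linear map W stands in for the watermark extractor network.
rng = np.random.default_rng(0)
W = rng.normal(size=(16, 64)) / 8.0   # toy extractor: flattened image -> watermark
x_wm = rng.normal(size=64)            # watermarked (processed) image, flattened

def extract(x):
    """Toy watermark extraction: the real extractor is a deep network."""
    return W @ x

x = x_wm.copy()
lam, lr = 0.1, 0.1                    # fidelity weight and step size (assumed)
for _ in range(500):
    # Gradient of the removal objective  ||W x||^2 + lam * ||x - x_wm||^2:
    # the first term kills the extracted watermark, the second preserves quality.
    grad = 2 * W.T @ (W @ x) + 2 * lam * (x - x_wm)
    x -= lr * grad

print(np.linalg.norm(extract(x_wm)))  # watermark energy before the attack
print(np.linalg.norm(extract(x)))     # watermark energy after: much smaller
print(np.linalg.norm(x - x_wm))       # image perturbation stays bounded
```

In the black-box settings of studies 2) and 3), this analytic gradient is unavailable and would be replaced by estimated gradients (e.g., via adversarial-attack-style queries) or by gradients of private proxy extractors.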