With the rapid growth of deep learning models and services, safeguarding valuable model parameters from theft has become an imperative concern. Watermarking is considered an important tool for ownership verification. However, current watermarking schemes are customized for different models and tasks, making them hard to integrate into a unified intellectual property protection service. We propose Hufu, a modality-agnostic watermarking system for pre-trained Transformer-based models that relies on the permutation equivariance property of Transformers. Hufu embeds a watermark by fine-tuning the pre-trained model on a set of specifically permuted data samples, so that the embedded model essentially contains two sets of weights -- one for normal use and the other for watermark extraction, which is triggered by permuted inputs. Permutation equivariance ensures minimal interference between these two sets of weights and thus high fidelity on downstream tasks. Since our method depends only on the model itself, it is naturally modality-agnostic, task-independent, and trigger-sample-free. Extensive experiments on state-of-the-art vision Transformers, BERT, and GPT2 demonstrate Hufu's superiority in meeting watermarking requirements including effectiveness, efficiency, fidelity, and robustness, showing its great potential to be deployed as a uniform ownership verification service for various Transformers.
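The permutation equivariance property that Hufu relies on can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: self-attention without positional encodings is permutation-equivariant, i.e., permuting the input token rows permutes the output rows identically, which is what lets a model respond differently (yet predictably) to permuted inputs.

```python
import numpy as np

# Illustrative sketch (assumed setup, not the paper's code): a single
# self-attention layer without positional encodings.
rng = np.random.default_rng(0)
d = 8   # embedding dimension
n = 5   # sequence length (number of tokens)

# Random projection weights for queries, keys, and values.
Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

def softmax(z, axis=-1):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    # X: (n, d) token embeddings -> (n, d) outputs.
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    return softmax(Q @ K.T / np.sqrt(d)) @ V

X = rng.standard_normal((n, d))
perm = rng.permutation(n)

# Permuting inputs then attending equals attending then permuting outputs:
# f(PX) = P f(X), i.e., self-attention is permutation-equivariant.
assert np.allclose(self_attention(X[perm]), self_attention(X)[perm])
```

Because of this property, weights fine-tuned on permuted samples behave like an independent "second model" that only activates when inputs arrive in the secret permutation, while ordinary inputs see the original behavior.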