Backdoor watermarking has emerged as the predominant approach for protecting public datasets, enabling dataset ownership verification (DOV) through embedded triggers that induce predefined model behaviors. While existing works assume that DOV results can serve as reliable evidence for copyright infringement claims, we argue that this assumption is fundamentally flawed. In this paper, we expose critical vulnerabilities in current backdoor watermarking schemes by demonstrating that attackers can forge watermarks that are statistically indistinguishable from the original ones, thereby evading infringement allegations. Specifically, we propose a Forged Watermark Generator (FW-Gen), a lightweight variational autoencoder-based framework that generates forged watermarks preserving the statistical properties of the original watermarks while exhibiting distinct visual patterns. Our attack operates under a realistic threat model in which an accused attacker, upon receiving an infringement claim, extracts watermark information from the protected dataset and produces counterfeit evidence to refute the allegation. Extensive experiments across six backdoor watermarking methods, two benchmark datasets, and two model architectures demonstrate that forged watermarks achieve statistical significance in hypothesis testing equal to or greater than that of the original watermarks. These findings reveal that current DOV mechanisms are insufficient as standalone evidence for copyright disputes and call for more robust dataset protection schemes.
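For context, the abstract does not spell out the verification criterion; a common instantiation of hypothesis testing in backdoor-based DOV (sketched below under that assumption) compares the suspect model's posterior probability on the trigger's target class for watermarked inputs against benign inputs. The symbols $P_w$, $P_b$, $y_t$, and $\tau$ are illustrative and not taken from this paper.

% Illustrative DOV criterion (assumption, not the paper's stated test):
% P_w: posterior on the target class y_t for trigger-embedded (watermarked) inputs,
% P_b: posterior on y_t for the corresponding benign inputs,
% tau: a small margin chosen by the dataset owner.
\[
  H_0 : P_w \le P_b + \tau
  \qquad \text{vs.} \qquad
  H_1 : P_w > P_b + \tau .
\]
% Ownership is asserted only if H_0 is rejected at a preset significance level;
% a forged watermark that also rejects H_0 would render this evidence ambiguous.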