In recent years, digital watermarking techniques based on deep learning have been widely studied. To achieve both imperceptibility and robustness of image watermarks, most current methods employ convolutional neural networks (CNNs) to build robust watermarking frameworks. However, despite the success of CNN-based watermarking models, they struggle to achieve robustness against geometric attacks because CNNs are limited in their ability to capture global and long-range relationships. To address this limitation, we propose a robust watermarking framework based on the Swin Transformer, named RoWSFormer. Specifically, we design the Locally-Channel Enhanced Swin Transformer Block as the core of both the encoder and decoder. This block leverages the self-attention mechanism to capture global and long-range information, thereby significantly improving adaptation to geometric distortions. In addition, we construct the Frequency-Enhanced Transformer Block to extract frequency-domain information, which further strengthens the robustness of the watermarking framework. Experimental results demonstrate that RoWSFormer surpasses existing state-of-the-art watermarking methods. For most non-geometric attacks, RoWSFormer improves the PSNR by 3 dB while maintaining the same extraction accuracy. For geometric attacks (such as rotation, scaling, and affine transformations), RoWSFormer achieves over a 6 dB improvement in PSNR, with extraction accuracy exceeding 97\%.