Generative images have proliferated across Web platforms, from social media to online copyright distribution, and semantic watermarking is increasingly integrated into diffusion models to support reliable provenance tracking and forgery prevention for web content. Traditional noise-layer-based watermarking, however, remains vulnerable to inversion attacks that can recover the embedded signal. To mitigate this, recent content-aware semantic watermarking schemes bind the watermark signal to high-level image semantics, so that local edits which would disrupt global coherence also disturb the watermark. Yet large language models (LLMs) possess structured reasoning capabilities that enable targeted exploration of the semantic space, producing locally fine-grained but globally coherent semantic alterations that invalidate such bindings. To expose this overlooked vulnerability, we introduce a Coherence-Preserving Semantic Injection (CSI) attack that performs LLM-guided semantic manipulation under embedding-space similarity constraints. These constraints enforce visual-semantic consistency while selectively perturbing watermark-relevant semantics, ultimately inducing detector misclassification. Extensive empirical results show that CSI consistently outperforms prevailing attack baselines against content-aware semantic watermarking, revealing a fundamental security weakness of current semantic watermark designs when confronted with LLM-driven semantic perturbations.