Visual Language Models (VLMs) have become foundational models for document understanding tasks, widely used in the processing of complex multimodal documents across domains such as finance, law, and academia. However, documents often contain noise-like information, such as watermarks, which inevitably leads us to ask: \emph{Do watermarks degrade the performance of VLMs in document understanding?} To address this, we propose a novel evaluation framework to investigate the effect of visible watermarks on VLM performance. We take into account various factors, including different types of document data, the positions of watermarks within documents, and variations in watermark content. Our experimental results reveal that VLM performance can be significantly compromised by watermarks, with performance drop rates reaching up to 36\%. We discover that \emph{scattered} watermarks cause stronger interference than centralized ones, and that \emph{semantic content} in watermarks creates greater disruption than simple visual occlusion. Through attention mechanism analysis and embedding similarity examination, we find that the performance drops are mainly attributable to watermarks 1) forcing widespread attention redistribution, and 2) altering semantic representations in the embedding space. Our research not only highlights significant challenges in deploying VLMs for document understanding, but also provides insights towards developing robust inference mechanisms on watermarked documents.