This work explores the visual capabilities and limitations of foundation models by introducing a novel adversarial attack method that uses skeletonization to effectively reduce the search space. Our approach specifically targets images containing text, particularly mathematical formula images, which are especially challenging because of the LaTeX conversion they require and their intricate structure. We conduct a detailed evaluation of both character-level and semantic changes between the original and adversarially perturbed outputs, providing insight into the models' visual interpretation and reasoning abilities. The effectiveness of our method is further demonstrated through its application to ChatGPT, illustrating its practical implications in real-world scenarios.
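As a rough illustration of how skeletonization could shrink an attack's search space, the sketch below binarizes a rendered formula image and keeps only its skeleton pixels as perturbation candidates. This is a minimal assumption-laden sketch, not the paper's actual pipeline: the function name `skeleton_search_space`, the threshold, and the usage are illustrative, and only the general idea (restricting perturbations to thin glyph strokes) comes from the abstract.

```python
import numpy as np
from skimage.morphology import skeletonize

def skeleton_search_space(image: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Return (row, col) coordinates of skeleton pixels of a binarized image.

    Restricting an adversarial search to these coordinates shrinks the
    candidate set from every pixel to the thin strokes that carry the
    glyph structure of rendered text or formulas.
    """
    binary = image < threshold          # assume dark strokes on a light background
    skeleton = skeletonize(binary)      # one-pixel-wide medial axis of the strokes
    return np.argwhere(skeleton)

# Hypothetical usage: perturb only skeleton pixels of a formula image.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = np.ones((64, 256))            # stand-in for a rendered formula image
    img[30:34, 20:200] = 0.0            # a horizontal stroke (e.g., a fraction bar)
    candidates = skeleton_search_space(img)
    # An attack would now modify a small subset of these pixels
    # instead of searching over all 64 * 256 locations.
    picks = candidates[rng.choice(len(candidates), size=5, replace=False)]
    print(f"{len(candidates)} candidate pixels; perturbing {len(picks)} of them")
```

Under this reading, the skeleton acts as a structural prior: perturbations placed on the medial axis of a glyph are more likely to alter how a character is read than perturbations in empty background regions, so the attack can search far fewer locations.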