Large Vision-Language Models (VLMs) often answer classic visual illusions "correctly" on original images, yet persist with the same responses when the illusion factors are inverted, even though the visual change is obvious to humans. This raises a fundamental question: do VLMs perceive visual changes, or do they merely recall memorized patterns? While several studies have noted this phenomenon, its underlying causes remain unclear. To move from observation to systematic understanding, this paper introduces VI-Probe, a controllable visual-illusion framework with graded perturbations and matched visual controls (without the illusion inducer) that disentangles visually grounded perception from language-driven recall. Unlike prior work that focuses on averaged accuracy, we measure stability and sensitivity using Polarity-Flip Consistency, a Template Fixation Index, and an illusion multiplier normalized against the matched controls. Experiments across different model families reveal that response persistence arises from heterogeneous causes rather than a single mechanism. For instance, GPT-5 exhibits memory override, Claude-Opus-4.1 shows perception-memory competition, while Qwen variants suggest visual-processing limits. Our findings challenge single-cause views and motivate probing-based evaluation that measures both knowledge and sensitivity to controlled visual change. Data and code are available at https://sites.google.com/view/vi-probe/.
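The abstract's consistency and sensitivity metrics can be sketched in a few lines. The exact definitions are not given above, so the formulas below are illustrative assumptions, not the paper's actual specifications: Polarity-Flip Consistency is taken here as the fraction of items whose answer is unchanged after the illusion factor is inverted, and the illusion multiplier as the error rate on illusion images divided by the error rate on the matched controls.

```python
def polarity_flip_consistency(orig_answers, flipped_answers):
    """Fraction of items whose answer stays the same after the illusion
    factor is inverted (assumed definition; a high value means the model
    is insensitive to the visual change)."""
    assert len(orig_answers) == len(flipped_answers)
    same = sum(a == b for a, b in zip(orig_answers, flipped_answers))
    return same / len(orig_answers)


def illusion_multiplier(err_illusion, err_control):
    """Error rate on illusion images normalized by the matched control
    without the inducer (assumed definition; values well above 1 would
    indicate illusion-specific failure)."""
    return err_illusion / max(err_control, 1e-9)


# Hypothetical responses to a Mueller-Lyer-style probe, before and after
# inverting the illusion inducer:
orig = ["longer", "longer", "same", "longer"]
flipped = ["longer", "shorter", "same", "longer"]
pfc = polarity_flip_consistency(orig, flipped)  # 3 of 4 unchanged -> 0.75
```

Under these assumed definitions, a model that perceives the inverted inducer should show low flip consistency and an illusion multiplier near 1 against its matched control.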