High text recognition performance does not guarantee that Vision-Language Models (VLMs) share human-like decision patterns when resolving ambiguity. We investigate this behavioral gap by directly comparing humans and VLMs on continuously interpolated Japanese character shapes generated with a $\beta$-VAE. We estimate decision boundaries in a single-character recognition task (shape only) and evaluate whether VLM responses align with human judgments under shape in context (i.e., embedding an ambiguous character near the human decision boundary in a word-level context). We find that human and VLM decision boundaries differ in the shape-only task, and that shape in context can improve VLMs' alignment with human judgments under some conditions. These results highlight qualitative behavioral differences, offering foundational insights toward human--VLM alignment benchmarking.