Vision-Language Models (VLMs), built on the foundation of powerful large language models, have made rapid progress in reasoning across visual and textual data. While VLMs perform well on the vision tasks they are trained on, our results highlight key challenges in abstract pattern recognition. We present GlyphPattern, a 954-item dataset that pairs 318 human-written descriptions of visual patterns from 40 writing systems with three visual presentation styles. GlyphPattern evaluates abstract pattern recognition in VLMs, requiring models to understand and judge natural language descriptions of visual patterns. The patterns are drawn from a large-scale cognitive science investigation of human writing systems; as a result, they are rich in spatial reference and compositionality. Our experiments show that GlyphPattern is challenging for state-of-the-art VLMs (GPT-4o achieves only 55% accuracy), with marginal gains from few-shot prompting. Our detailed error analysis reveals challenges at multiple levels, including visual processing, natural language understanding, and pattern generalization.