Understanding the visual semantics embedded in consecutive characters is a crucial capability for both large language models (LLMs) and multi-modal large language models (MLLMs). This type of artifact has the unique property that identical information can be readily rendered as either text or an image, making it a useful proxy for analyzing modern LLMs' and MLLMs' capabilities in modality-agnostic visual understanding. In this work, we select ASCII art as a representative artifact, in which the lines and brightness used to depict each concept are rendered with characters, and we frame the problem as an ASCII art recognition task. We benchmark model performance on this task by constructing an evaluation dataset organized by an elaborate categorization tree, and we also collect a training set to elicit models' visual perception ability. A comprehensive analysis of dozens of models reveals that although humans achieve nearly 100% accuracy, state-of-the-art LLMs and MLLMs lag far behind. Given only text inputs, models can recognize concepts depicted in ASCII art, reaching over 60% accuracy on some concepts, yet most models achieve merely around 30% accuracy when averaged across all categories. When provided with images as inputs, GPT-4o reaches 82.68% accuracy, outperforming the strongest open-source MLLM by 21.95%. Although models favor different kinds of ASCII art depending on the modality provided, none of the MLLMs benefit from being supplied with both modalities simultaneously. Moreover, supervised fine-tuning improves models' accuracy, especially with the image modality, but it also highlights the need for better training techniques to enhance information fusion across modalities.