Large language models (LLMs) have made significant advances in natural language understanding. Given the rich semantic representations an LLM learns from text, can it also understand images? This work investigates that question. To enable the LLM to process images, we convert them into Scalable Vector Graphics (SVG), an XML-based textual representation. To study what the LLM can do with this textual description of images, we test it on three broad computer vision tasks: (i) visual reasoning and question answering, (ii) image classification under distribution shift and few-shot learning, and (iii) generating new images via visual prompting. Even though we do not naturally associate LLMs with visual understanding capabilities, our results indicate that the LLM can often do a decent job on many of these tasks, potentially opening new avenues for research into LLMs' ability to understand image data. Our code, data, and models are available at https://github.com/mu-cai/svg-llm.
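To make the idea concrete, the following is a minimal sketch (not the authors' pipeline, whose conversion method is not detailed here) of how an image could be serialized into SVG text that an LLM can consume as an ordinary string. The `pixels_to_svg` helper and its grid input are illustrative assumptions:

```python
# Hypothetical sketch: render a tiny pixel grid as SVG, one <rect> per pixel.
# Real image-to-SVG conversion would use a vectorization tool, but the point
# is the same: the image becomes XML text an LLM can read in its prompt.

def pixels_to_svg(grid, cell=10):
    """Serialize a 2D grid of '#rrggbb' color strings into an SVG document."""
    h, w = len(grid), len(grid[0])
    rects = [
        f'<rect x="{x * cell}" y="{y * cell}" '
        f'width="{cell}" height="{cell}" fill="{grid[y][x]}"/>'
        for y in range(h)
        for x in range(w)
    ]
    return (
        f'<svg xmlns="http://www.w3.org/2000/svg" '
        f'width="{w * cell}" height="{h * cell}">' + "".join(rects) + "</svg>"
    )

# A 2x2 checkerboard becomes a short XML string, ready to embed in a prompt.
svg_text = pixels_to_svg([["#000000", "#ffffff"], ["#ffffff", "#000000"]])
print(svg_text)
```

Because the result is plain XML, it can be inserted directly into a text prompt for classification, question answering, or as a template the LLM edits to generate a new image.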