Understanding spatial relations is a crucial cognitive ability for both humans and AI. While current research has predominantly focused on benchmarking text-to-image (T2I) models, we propose a more comprehensive evaluation that includes \textit{both} T2I models and Large Language Models (LLMs). As spatial relations are naturally understood in a visuo-spatial manner, we develop an approach to convert LLM outputs into an image, thereby allowing us to evaluate both T2I models and LLMs \textit{visually}. We examine the spatial relation understanding of 8 prominent generative models (3 T2I models and 5 LLMs) on a set of 10 common prepositions, and we assess the feasibility of automatic evaluation methods. Surprisingly, we find that T2I models achieve only subpar performance despite their impressive general image-generation abilities. Even more surprisingly, our results show that LLMs are significantly more accurate than T2I models at generating spatial relations, despite being primarily trained on textual data. We examine the reasons for model failures and highlight gaps that can be filled to enable more spatially faithful generations.