The development of Large Vision-Language Models (LVLMs) strives to catch up with the success of Large Language Models (LLMs), yet it faces additional challenges. Recent works have enabled LVLMs to localize object-level visual content and ground text to it. Nonetheless, current LVLMs still struggle to precisely understand visual relations due to the lack of relevant data. In this work, we present RelationVLM, a large vision-language model capable of comprehending various levels and types of relations, whether across multiple images or within a video. Specifically, we devise a multi-stage relation-aware training scheme and a series of corresponding data configuration strategies to endow RelationVLM with the capabilities of understanding semantic relations, temporal associations, and geometric transforms. Extensive case studies and quantitative evaluations show that RelationVLM has a strong capability in understanding such relations and exhibits an impressive in-context capability of reasoning from few-shot examples by comparison. This work advances LVLMs by enabling them to support a wider range of downstream applications toward artificial general intelligence.