Weakly-Supervised Scene Graph Generation (WSSGG) research has recently emerged as an alternative to the fully-supervised approach that heavily relies on costly annotations. In this regard, studies on WSSGG have utilized image captions to obtain unlocalized triplets while primarily focusing on grounding those unlocalized triplets over image regions. However, they have overlooked two issues in the process of forming triplets from captions: 1) the semantic over-simplification issue, which arises when extracting triplets from captions: fine-grained predicates in captions are undesirably converted into coarse-grained predicates, resulting in a long-tailed predicate distribution; and 2) the low-density scene graph issue, which arises when aligning the triplets in the captions with the entity/predicate classes of interest: many triplets are discarded and never used in training, leading to insufficient supervision. To tackle these two issues, we propose a new approach, i.e., Large Language Model for weakly-supervised SGG (LLM4SGG), which mitigates them by leveraging the LLM's in-depth understanding of language and its reasoning ability during both the extraction of triplets from captions and the alignment of entity/predicate classes with the target data. To further engage the LLM in these processes, we adopt the idea of Chain-of-Thought and the in-context few-shot learning strategy. To validate the effectiveness of LLM4SGG, we conduct extensive experiments on the Visual Genome and GQA datasets, showing significant improvements in both Recall@K and mean Recall@K over state-of-the-art WSSGG methods. A further appeal of LLM4SGG is its data efficiency: it enables effective model training with a small number of training images.
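The two LLM-driven steps described above can be sketched in outline. The prompt wording, few-shot examples, and helper functions below are illustrative assumptions for exposition, not the paper's actual prompts; the alignment step here uses a simple exact-match stand-in where the paper relies on the LLM's semantics-aware matching.

```python
# Hypothetical sketch of (1) few-shot prompt construction for LLM-based
# triplet extraction from a caption, and (2) alignment of extracted
# triplets to target entity/predicate classes.

# Assumed in-context examples; note the fine-grained predicate "lying on"
# is kept rather than collapsed to the coarse-grained "on".
FEW_SHOT_EXAMPLES = [
    ("a man is riding a horse on the beach",
     "(man, riding, horse); (man, on, beach)"),
    ("two dogs are lying on a couch",
     "(dog, lying on, couch)"),
]

def build_extraction_prompt(caption: str) -> str:
    """Compose an in-context few-shot prompt asking an LLM to extract
    <subject, predicate, object> triplets from a caption."""
    lines = [
        "Extract subject-predicate-object triplets from the caption.",
        "Keep predicates as fine-grained as the caption states them.",
        "",
    ]
    for cap, trips in FEW_SHOT_EXAMPLES:
        lines.append(f"Caption: {cap}\nTriplets: {trips}\n")
    lines.append(f"Caption: {caption}\nTriplets:")
    return "\n".join(lines)

def parse_triplets(llm_response: str) -> list[tuple[str, str, str]]:
    """Parse '(s, p, o); (s, p, o)'-style LLM output into tuples."""
    triplets = []
    for chunk in llm_response.split(";"):
        parts = [p.strip() for p in chunk.strip().strip("()").split(",")]
        if len(parts) == 3:
            triplets.append(tuple(parts))
    return triplets

def align_to_targets(triplets, entity_classes, predicate_classes):
    """Keep only triplets whose entities and predicate map to target
    classes (exact match here; the paper's alignment is LLM-based)."""
    return [(s, p, o) for (s, p, o) in triplets
            if s in entity_classes and o in entity_classes
            and p in predicate_classes]
```

For example, `parse_triplets("(man, riding, horse); (man, on, beach)")` yields `[("man", "riding", "horse"), ("man", "on", "beach")]`, and `align_to_targets` then filters those tuples against the dataset's class vocabularies.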