Scene Graph Generation (SGG) provides a basic language representation of visual scenes, requiring models to grasp the complex and diverse semantics between objects. This complexity and diversity lead to underrepresentation, where some triplet labels are rare or even unseen during training, resulting in imprecise predictions. To tackle this, we propose integrating pretrained Vision-Language Models (VLMs) to enhance the representation. However, due to the gap between pretraining and SGG, directly applying pretrained VLMs to SGG yields severely biased inference, which stems from the imbalanced predicate distribution in the pretraining language corpus. To alleviate this bias, we introduce a novel LM Estimation to approximate the otherwise unattainable predicate distribution. Finally, we ensemble the debiased VLMs with SGG models to enhance the representation, designing a certainty-aware indicator that scores each sample and dynamically adjusts the ensemble weights. Our training-free method effectively addresses the predicate bias in pretrained VLMs, enhances SGG's representation, and significantly improves performance.
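The debias-then-ensemble pipeline above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the function name `debias_and_ensemble`, the log-prior subtraction as the debiasing step, the temperature `tau`, and the entropy-based certainty indicator are all assumptions introduced for illustration.

```python
import numpy as np

def debias_and_ensemble(vlm_logits, sgg_probs, est_prior, tau=1.0):
    """Hypothetical sketch: debias VLM predicate scores with an
    estimated predicate prior, then combine with SGG predictions
    using a certainty-aware dynamic weight."""
    # Debias: subtract the log of the estimated predicate prior
    # (a stand-in for the distribution approximated by LM Estimation)
    debiased = vlm_logits - tau * np.log(est_prior)
    vlm_probs = np.exp(debiased) / np.exp(debiased).sum()
    # Certainty indicator (assumed form): normalized negative entropy
    # of the SGG prediction, mapped into [0, 1]
    entropy = -(sgg_probs * np.log(sgg_probs + 1e-12)).sum()
    certainty = 1.0 - entropy / np.log(len(sgg_probs))
    # Dynamic ensemble: confident SGG predictions keep more weight,
    # uncertain ones lean on the debiased VLM
    return certainty * sgg_probs + (1.0 - certainty) * vlm_probs
```

Because both inputs are probability vectors and the weights sum to one, the ensembled output remains a valid distribution over predicates, and no parameter is trained.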