This paper extensively investigates the effectiveness of synthetic training data for improving the capabilities of vision-and-language models in grounding textual descriptions to image regions. We explore various strategies for generating image-text pairs and image-text-box triplets using a series of pretrained models under different settings and with varying degrees of reliance on real data. Through comparative analyses with synthetic, real, and web-crawled data, we identify factors that contribute to performance differences, and we propose SynGround, an effective pipeline for generating useful synthetic data for visual grounding. Our findings show that SynGround can improve the localization capabilities of off-the-shelf vision-and-language models and offers the potential for arbitrarily large-scale data generation. In particular, data generated with SynGround improves the pointing game accuracy of pretrained ALBEF and BLIP models by 4.81 and 17.11 absolute percentage points, respectively, across the RefCOCO+ and Flickr30k benchmarks.