The creation of high-quality human-labeled image-caption datasets presents a significant bottleneck in the development of Visual-Language Models (VLMs). In this work, we investigate an approach that leverages the strengths of Large Language Models (LLMs) and image generation models to create synthetic image-text pairs for efficient and effective VLM training. Our method employs a pretrained text-to-image model to synthesize image embeddings from captions generated by an LLM. Although the text-to-image model and the VLM are initially trained on the same data, our approach exploits the image generator's ability to create novel compositions, yielding synthetic image embeddings that expand beyond the limitations of the original dataset. Extensive experiments demonstrate that our VLM, finetuned on synthetic data, achieves performance comparable to models trained solely on human-annotated data, while requiring significantly less data. Furthermore, we perform a set of analyses of the captions, which reveals that semantic diversity and balance are key to better downstream performance. Finally, we show that synthesizing images in the image embedding space is 25\% faster than in the pixel space. We believe our work not only addresses a significant challenge in VLM training but also opens up promising avenues for the development of self-improving multi-modal models.
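To make the pipeline concrete, below is a minimal sketch of the synthetic pair-generation loop described above, assuming three hypothetical components: an LLM caption generator, a pretrained text-to-image model queried for embeddings rather than pixels, and a downstream VLM finetuning step. The function and parameter names (generate_captions, text_to_image_embedding, build_synthetic_pairs) are illustrative placeholders, not the paper's actual interfaces.

```python
# Sketch of generating synthetic (image embedding, caption) training pairs.
# All component names are hypothetical placeholders for the paper's LLM,
# text-to-image model, and VLM finetuning procedure.

from typing import Callable, List, Tuple
import numpy as np

def build_synthetic_pairs(
    generate_captions: Callable[[int], List[str]],         # LLM: n -> n captions
    text_to_image_embedding: Callable[[str], np.ndarray],  # T2I model: caption -> image embedding
    num_pairs: int,
) -> List[Tuple[np.ndarray, str]]:
    """Create (image embedding, caption) pairs without rendering pixels.

    Staying in the embedding space skips the image decoder, which is the
    step the paper reports as being ~25% slower when done in pixel space.
    """
    captions = generate_captions(num_pairs)
    return [(text_to_image_embedding(c), c) for c in captions]
```

The resulting pairs would then be mixed with (or substituted for) human-annotated data when finetuning the VLM; the exact mixing ratio is a design choice not specified here.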