Vision-language models (VLMs) have achieved impressive progress across diverse applications and have become a prevalent research direction. In this paper, we build FIRE, a feedback-refinement dataset of 1.1M multi-turn conversations derived from 27 source datasets, which empowers VLMs to spontaneously refine their responses based on user feedback across diverse tasks. To scale up data collection, FIRE is built in two parts, FIRE-100K and FIRE-1M: FIRE-100K is generated by GPT-4V, while FIRE-1M is generated freely by models trained on FIRE-100K. We then build FIRE-Bench, a benchmark that comprehensively evaluates the feedback-refining capability of VLMs; it contains 11K feedback-refinement conversations as test data, two evaluation settings, and a model that provides feedback to VLMs. We develop the FIRE-LLaVA model by fine-tuning LLaVA on FIRE-100K and FIRE-1M; it shows remarkable feedback-refining capability on FIRE-Bench and outperforms VLMs not trained on FIRE by 50%, enabling more efficient user-agent interactions and underscoring the significance of the FIRE dataset.
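To make the feedback-refinement interaction concrete, below is a minimal, runnable Python sketch of the answer-feedback-refine loop the abstract describes. Everything in it (the conversation record layout and the `vlm_answer` / `give_feedback` stand-ins) is a hypothetical illustration under our own assumptions, not the paper's actual data schema or code.

```python
# Hypothetical sketch of a feedback-refinement loop: a VLM answers a visual
# question, a feedback model critiques the answer, and the VLM refines its
# response until the feedback model is satisfied or a turn budget runs out.
def refine_with_feedback(vlm_answer, give_feedback, image, question, max_turns=3):
    """Alternate model answers and feedback; return the final answer and dialogue."""
    conversation = [{"role": "user", "text": question}]
    answer = vlm_answer(image, conversation)
    for _ in range(max_turns):
        conversation.append({"role": "assistant", "text": answer})
        ok, feedback = give_feedback(image, question, answer)
        if ok:  # the feedback model accepts the current answer
            break
        conversation.append({"role": "user", "text": feedback})
        answer = vlm_answer(image, conversation)  # refine using the feedback turn
    return answer, conversation


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end; a real setup would call
    # an actual VLM and a trained feedback model here.
    def vlm_answer(image, conversation):
        return "refined answer" if len(conversation) > 1 else "first answer"

    def give_feedback(image, question, answer):
        return (answer == "refined answer"), "The count is wrong; look again."

    final, dialogue = refine_with_feedback(
        vlm_answer, give_feedback, "img.jpg", "How many dogs are in the picture?"
    )
    print(final)  # -> "refined answer" after one round of feedback
```

In this framing, FIRE-100K and FIRE-1M would supply conversations of exactly this alternating answer/feedback shape for training, while FIRE-Bench plays the `give_feedback` role at evaluation time.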