A diverse range of large language models (LLMs), e.g., ChatGPT, and visual question answering (VQA) models, e.g., BLIP, have been developed for solving textual and visual question answering tasks. However, fine-tuning these models is either difficult, as it requires access via APIs, rendering them black boxes, or costly due to the need to tune a large number of parameters. To address this, we introduce InfoSel, a data-efficient ensemble method that learns to dynamically pick the winner from existing black-box models for predictions on both textual and multimodal visual question answering tasks. Unlike traditional ensemble models, InfoSel does not rely on prediction probabilities or confidences, which are typically unavailable from black-box models. Experimental results on four datasets demonstrate that our approach achieves an absolute increase of up to +5.19\% in F1-score compared to standalone LLMs, using only 1K training instances.
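To make the selection idea concrete, here is a minimal, hypothetical sketch (not the paper's implementation): a lightweight classifier is trained to pick which black-box model's answer to trust, using only the question and the models' textual outputs, with no access to prediction probabilities. All data, feature choices, and model names below are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

# Toy training instances: a question, the raw answer strings returned by
# two hypothetical black-box models, and the index of the model whose
# answer matched the gold answer (the "winner").
data = [
    ("what color is the sky", ["blue", "green"], 0),
    ("capital of france", ["london", "paris"], 1),
    ("how many legs does a spider have", ["eight", "six"], 0),
    ("largest planet in the solar system", ["mars", "jupiter"], 1),
]

def featurize(question, answers):
    # The selector sees only text: the question concatenated with each
    # candidate answer -- no logits or confidence scores are used.
    return question + " || " + " || ".join(answers)

texts = [featurize(q, a) for q, a, _ in data]
labels = [winner for _, _, winner in data]

vec = TfidfVectorizer()
X = vec.fit_transform(texts)
selector = LogisticRegression().fit(X, labels)

def select_answer(question, answers):
    # At inference time, pick the answer of the predicted winning model.
    idx = selector.predict(vec.transform([featurize(question, answers)]))[0]
    return answers[idx]
```

The key property this sketch shares with the approach described above is that the base models stay frozen black boxes: only the small selector is trained, which is why a handful of labeled instances can suffice.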