In this work, we propose GLOV, a novel method that enables Large Language Models (LLMs) to act as implicit optimizers for Vision-Language Models (VLMs), enhancing performance on downstream vision tasks. GLOV meta-prompts an LLM with a description of the downstream task, querying it for suitable VLM prompts (e.g., for zero-shot classification with CLIP). These prompts are ranked according to a purity measure obtained through a fitness function. In each optimization step, the ranked prompts are fed as in-context examples (together with their accuracies) to equip the LLM with knowledge of the type of text prompts preferred by the downstream VLM. Furthermore, we explicitly steer the LLM generation process in each optimization step by adding an offset difference vector, computed from the embeddings of the positive and negative solutions found by the LLM in previous optimization steps, to an intermediate layer of the network for the next generation step. This offset vector steers the LLM generation toward the type of language preferred by the downstream VLM, resulting in enhanced performance on the downstream vision tasks. We comprehensively evaluate GLOV on 16 diverse datasets using two families of VLMs, i.e., dual-encoder (e.g., CLIP) and encoder-decoder (e.g., LLaVa) models, showing that the discovered solutions can enhance recognition performance by up to 15.0% and 57.5% (3.8% and 21.6% on average), respectively.
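The explicit steering step described above can be illustrated with a minimal sketch. Here, a tiny NumPy network stands in for the LLM's intermediate layer (the network, the scaling factor `alpha`, and all tensor shapes are illustrative assumptions, not the paper's actual architecture or hyperparameters): the steering vector is the difference between the mean embeddings of previously found positive (high-accuracy) and negative (low-accuracy) prompt solutions, added to the hidden activation before the next generation step.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-layer network standing in for an LLM up to one intermediate layer.
# Shapes and weights are illustrative only.
W1 = rng.standard_normal((8, 8))
W2 = rng.standard_normal((8, 4))

def forward(x, offset=None):
    h = np.tanh(x @ W1)   # intermediate-layer activation
    if offset is not None:
        h = h + offset    # steer the hidden state with the offset vector
    return h @ W2

# Stand-in embeddings of solutions found in previous optimization steps
# (randomly generated here; in GLOV these come from real prompt embeddings).
pos_embeds = rng.standard_normal((5, 8))  # high-accuracy ("positive") prompts
neg_embeds = rng.standard_normal((5, 8))  # low-accuracy ("negative") prompts

# Offset difference vector: mean(positive) - mean(negative), scaled by
# an assumed strength factor alpha.
alpha = 0.5
offset = alpha * (pos_embeds.mean(axis=0) - neg_embeds.mean(axis=0))

x = rng.standard_normal(8)
plain = forward(x)            # unsteered generation
steered = forward(x, offset)  # generation nudged toward "positive" language
```

In a real LLM this addition would typically be implemented as a forward hook on the chosen transformer layer, so the rest of the generation pipeline is unchanged.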