We present a framework for optimizing prompts in vision-language models to elicit multimodal reasoning without model retraining. Using an evolutionary algorithm to guide prompt updates on downstream visual tasks, our approach improves upon baseline prompt-updating algorithms, which lack evolution-style "survival of the fittest" iteration. Crucially, we find that this approach enables the language model to independently discover progressive problem-solving techniques over several evolutionary generations. For example, the model reasons that, to "break down" visually complex spatial tasks, calling a Python interpreter to perform image operations (such as cropping, image segmentation, or saturation changes) would significantly improve performance. Our experiments show that explicitly invoking this tool calling via system-level XML tags ($\texttt{<tool>} \ldots \texttt{</tool>}$) can effectively flag Python interpreter access, allowing the same language model to generate relevant programs and thereby unlock advanced multimodal functionality. This functionality can be crystallized into a system-level prompt that improves performance at inference time; our experiments suggest up to $\approx 50\%$ relative improvement on select visual tasks. Prompts are trained and evaluated on subtasks from the MathVista, M3CoT, and GeoBench-VLM datasets. Importantly, our results show that evolutionary prompt optimization guides language models toward self-reasoning discoveries that improve zero-shot generalization across tasks.
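The evolution loop described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: `score_prompt` (a task-accuracy metric) and `mutate` (in the paper's setting, an LLM-proposed prompt edit) are hypothetical stand-ins, and the candidate hints merely illustrate how a tool-call instruction could enter the prompt population.

```python
import random

def mutate(prompt: str) -> str:
    """Stand-in mutation: append a candidate instruction to the prompt.

    In the actual framework an LLM rewrites the prompt; here we sample
    from a fixed pool of hints, one of which introduces the tool-call tag.
    """
    hints = [
        " Break the visual task into simpler subtasks.",
        " You may emit <tool>python code</tool> to crop or segment the image.",
    ]
    return prompt + random.choice(hints)

def evolve(seed_prompt: str, score_prompt, generations: int = 5,
           pop_size: int = 8, keep: int = 2) -> str:
    """Evolutionary prompt optimization with truncation selection."""
    population = [seed_prompt] * pop_size
    for _ in range(generations):
        # Rank prompts by downstream task score ("survival of the fittest").
        ranked = sorted(population, key=score_prompt, reverse=True)
        survivors = ranked[:keep]
        # Refill the population with mutated copies of the survivors.
        population = survivors + [
            mutate(random.choice(survivors)) for _ in range(pop_size - keep)
        ]
    return max(population, key=score_prompt)
```

Under this scheme, any prompt edit that raises the task score (such as adding the `<tool>` instruction) is retained and propagated to later generations.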