The ability to perform complex tasks from detailed instructions is key to many remarkable achievements of our species. As humans, we are capable of performing not only a wide variety of tasks but also very complex ones that may entail hundreds or thousands of steps to complete. Large language models, and their more recent multimodal counterparts that integrate textual and visual inputs, have achieved unprecedented success in performing complex tasks. Yet, most existing benchmarks remain confined to single-modality inputs (either text or vision), narrowing the scope of multimodal assessment, particularly for instruction following in multimodal contexts. To bridge this gap, we introduce the instructed-Virtual VISual Decision Making (iWISDM) environment, engineered to generate a limitless array of vision-language tasks of varying complexity. Using iWISDM, we compiled three distinct benchmarks of instruction-following visual tasks across varying complexity levels and evaluated several newly developed multimodal models on them. Our findings establish iWISDM as a robust benchmark for assessing the instruction-following abilities of both existing and emergent multimodal models, and they highlight a large gap between these models' ability to precisely follow instructions and that of humans. The code for iWISDM is available on GitHub at https://github.com/BashivanLab/iWISDM.