Enhancing semantic grounding abilities in Vision-Language Models (VLMs) often involves collecting domain-specific training data, refining the network architecture, or modifying the training recipe. In this work, we explore an orthogonal direction: whether VLMs can improve their semantic grounding by "receiving" feedback, without requiring in-domain data, fine-tuning, or architectural changes. We systematically analyze this hypothesis using a feedback mechanism based on a binary signal. We find that, if prompted appropriately, VLMs can utilize feedback both in a single step and iteratively, showcasing the potential of feedback as an alternative technique for improving grounding in internet-scale VLMs. Furthermore, VLMs, like LLMs, struggle to self-correct errors out of the box; however, this issue can be mitigated via a binary verification mechanism. Finally, we explore the potential and limitations of combining these findings and applying them iteratively to automatically enhance VLMs' grounding performance, showing that grounding accuracy consistently improves with automated feedback across all models and all settings investigated. Overall, our iterative framework improves semantic grounding in VLMs by more than 15 accuracy points under noise-free feedback, and by up to 5 accuracy points under a simple automated binary verification mechanism. The project website is hosted at https://andrewliao11.github.io/vlms_feedback