Multi-agent systems (MAS) powered by Visual Language Models (VLMs) can tackle challenging tasks, but they suffer from a novel failure mode: multi-agent visual hallucination snowballing, in which a hallucination seeded in a single agent is amplified by subsequent ones because inter-agent communication over-relies on textual flow to relay visual information. Through turn-, layer-, and token-wise attention analyses, we trace the essence of hallucination snowballing to a reduction in visual attention allocation. This leads us to identify a subset of vision tokens with a unimodal attention peak in the middle layers that best preserves visual evidence but gradually diminishes over deeper agent turns, producing visual hallucination snowballing in MAS. We therefore propose ViF, a lightweight, plug-and-play mitigation paradigm that relays inter-agent messages via a visual flow carried by the selected visual relay tokens and applies attention reallocation to amplify this pattern. Experimental results demonstrate that our method markedly reduces hallucination snowballing, consistently improving performance across eight benchmarks with four common MAS structures and ten base models. The source code is publicly available at: https://github.com/YU-deep/ViF.git.
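The core idea above can be sketched in a minimal, illustrative form. This is not the paper's actual ViF implementation: the function names, the mid-layer window, the `top_k` cutoff, and the multiplicative `boost` are all hypothetical simplifications, assuming we already have per-layer attention mass for each vision token.

```python
import numpy as np

def select_relay_tokens(attn, mid_range=(0.3, 0.7), top_k=8):
    """Pick vision tokens whose layer-wise attention profile peaks in the
    middle layers -- a hypothetical proxy for 'visual relay tokens'.

    attn: array of shape (num_layers, num_vision_tokens) holding the mean
          attention each vision token receives at each layer.
    """
    num_layers, _ = attn.shape
    lo = int(mid_range[0] * num_layers)
    hi = int(mid_range[1] * num_layers)
    peak_layer = attn.argmax(axis=0)              # layer where each token peaks
    mid_peaked = (peak_layer >= lo) & (peak_layer < hi)
    # Rank mid-peaked tokens by their peak attention mass; others get -inf.
    scores = np.where(mid_peaked, attn.max(axis=0), -np.inf)
    return np.argsort(scores)[::-1][:top_k]

def reallocate_attention(attn_row, relay_idx, boost=1.5):
    """Amplify attention on the selected relay tokens and renormalize,
    so the row still sums to one."""
    out = attn_row.copy()
    out[relay_idx] *= boost
    return out / out.sum()
```

In a MAS setting, the selected relay tokens (rather than a purely textual summary) would be passed along to the next agent, and the reallocation step would bias each agent's attention back toward that preserved visual evidence.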