Recently, Multimodal Large Language Models (MLLMs) have achieved considerable advancements in vision-language tasks, yet they may still produce harmful or untrustworthy content. Despite substantial work investigating the trustworthiness of language models, MLLMs' capability to act honestly, especially when faced with visually unanswerable questions, remains largely underexplored. This work presents the first systematic assessment of honesty behaviors across various MLLMs. We ground honesty in models' response behaviors to unanswerable visual questions, define four representative types of such questions, and construct MoHoBench, a large-scale MLLM honesty benchmark consisting of 12k+ visual question samples, whose quality is ensured through multi-stage filtering and human verification. Using MoHoBench, we benchmarked the honesty of 28 popular MLLMs and conducted a comprehensive analysis. Our findings show that: (1) most models fail to appropriately refuse to answer when necessary, and (2) MLLMs' honesty is not solely a language modeling issue but is deeply influenced by visual information, necessitating the development of dedicated methods for multimodal honesty alignment. We therefore implemented initial alignment methods based on supervised fine-tuning and preference learning to improve honesty behavior, providing a foundation for future work on trustworthy MLLMs. Our data and code can be found at https://github.com/yanxuzhu/MoHoBench.
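To make the preference-learning direction mentioned above concrete, the following is a minimal sketch of a standard Direct Preference Optimization (DPO) loss as it could be applied to honesty alignment. The function, its tensor inputs, and the toy values are illustrative assumptions, not the paper's actual training recipe; here "chosen" responses honestly refuse unanswerable visual questions while "rejected" responses hallucinate answers.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps, policy_rejected_logps,
             ref_chosen_logps, ref_rejected_logps, beta=0.1):
    """Standard DPO loss over per-sample sequence log-probabilities.

    Each argument is a tensor of log pi(response | image, question).
    'chosen' = honest refusal of an unanswerable visual question;
    'rejected' = a hallucinated answer (hypothetical labeling).
    """
    # Log-ratio of the policy vs. the frozen reference model per response.
    chosen_rewards = policy_chosen_logps - ref_chosen_logps
    rejected_rewards = policy_rejected_logps - ref_rejected_logps
    # Push the margin between honest and hallucinated responses apart.
    logits = beta * (chosen_rewards - rejected_rewards)
    return -F.logsigmoid(logits).mean()

# Toy usage with fabricated log-probabilities (batch of 2).
loss = dpo_loss(
    policy_chosen_logps=torch.tensor([-12.3, -9.8]),
    policy_rejected_logps=torch.tensor([-10.1, -11.0]),
    ref_chosen_logps=torch.tensor([-12.0, -10.0]),
    ref_rejected_logps=torch.tensor([-10.0, -10.5]),
)
print(loss)
```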