Large vision-language models (LVLMs) exhibit remarkable capabilities in cross-modal tasks but face significant safety challenges, which undermine their reliability in real-world applications. Efforts have been made to build LVLM safety evaluation benchmarks that uncover their vulnerabilities. However, existing benchmarks are hindered by labor-intensive construction processes, static complexity, and limited discriminative power, so they may fail to keep pace with rapidly evolving models and emerging risks. To address these limitations, we propose VLSafetyBencher, the first automated system for LVLM safety benchmarking. VLSafetyBencher introduces four collaborative agents, namely Data Preprocessing, Generation, Augmentation, and Selection agents, to construct and select high-quality samples. Experiments validate that VLSafetyBencher can construct a high-quality safety benchmark within one week at minimal cost. The generated benchmark effectively distinguishes model safety, with a 70% disparity in safety rate between the most and least safe models.
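The four-agent construction pipeline can be illustrated with a minimal sketch. All function names, interfaces, and the length-based quality score below are hypothetical assumptions for illustration, not the system's actual implementation:

```python
# Hypothetical sketch of a four-stage agent pipeline: preprocess raw
# seeds, generate candidate safety probes, augment them, then select a
# fixed-budget subset. The scoring heuristic is a stand-in assumption.

def preprocess_agent(raw_items):
    # Normalize raw seed data (here: trim whitespace, drop empties).
    return [x.strip() for x in raw_items if x.strip()]

def generation_agent(seeds):
    # Turn each seed into a candidate safety test prompt (stubbed).
    return [f"unsafe-probe({s})" for s in seeds]

def augmentation_agent(samples):
    # Diversify candidates, e.g., by adding paraphrased variants.
    return samples + [f"{s} [paraphrased]" for s in samples]

def selection_agent(samples, budget):
    # Keep the top-`budget` candidates by a quality score
    # (length is a placeholder for a real scoring model).
    return sorted(samples, key=len, reverse=True)[:budget]

def build_benchmark(raw_items, budget):
    # Chain the four agents into one construction pass.
    seeds = preprocess_agent(raw_items)
    candidates = generation_agent(seeds)
    augmented = augmentation_agent(candidates)
    return selection_agent(augmented, budget)
```

In this sketch each agent is a pure function, so stages can be swapped or rerun independently as models and risk categories evolve.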