Vision-language models (VLMs) extend the capabilities of large language models (LLMs) to accept multimodal inputs. Since it has been verified that LLMs can be induced to generate harmful or inaccurate content through specific test cases (a practice termed red teaming), how VLMs perform in similar scenarios, especially when handling combined textual and visual inputs, remains an open question. To explore this problem, we present RTVLM, a novel red-teaming dataset that encompasses 10 subtasks (e.g., image misleading, multimodal jailbreaking, face fairness) under 4 primary aspects (faithfulness, privacy, safety, fairness). RTVLM is the first red-teaming dataset to benchmark current VLMs along these 4 aspects. Detailed analysis shows that 10 prominent open-sourced VLMs struggle with red teaming to varying degrees, with performance gaps of up to 31% relative to GPT-4V. Additionally, we apply red-teaming alignment to LLaVA-v1.5 via supervised fine-tuning (SFT) on RTVLM; this boosts the model's performance by 10% on the RTVLM test set and 13% on MM-Hal, with no noticeable decline on MM-Bench, surpassing other LLaVA-based models trained with regular alignment data. These results indicate that current open-sourced VLMs still lack red-teaming alignment. Our code and datasets will be open-sourced.
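To make the dataset organization concrete, the following Python sketch shows one way RTVLM-style cases might be structured and fed to a VLM under test. Everything here (the `RedTeamCase` schema, the `model.generate` interface, and the aspect-to-subtask grouping) is an illustrative assumption rather than the paper's released code.

```python
# A minimal, hypothetical sketch of how RTVLM-style red-teaming cases could be
# represented and run against a VLM under test. The class name `RedTeamCase`,
# the `model.generate` call, and the grouping below are illustrative
# assumptions, not the released RTVLM API.

from dataclasses import dataclass

# The 4 primary aspects from the abstract; the subtask grouping is a guess
# based on the three named examples, and the remaining subtasks are omitted.
ASPECT_SUBTASKS = {
    "faithfulness": ["image_misleading"],
    "privacy": [],
    "safety": ["multimodal_jailbreaking"],
    "fairness": ["face_fairness"],
}

@dataclass
class RedTeamCase:
    aspect: str       # one of the 4 primary aspects
    subtask: str      # one of the 10 subtasks
    image_path: str   # adversarial or probing visual input
    prompt: str       # adversarial textual input

def run_red_team(model, cases):
    """Collect the VLM's responses to each case for downstream scoring."""
    responses = []
    for case in cases:
        # `model.generate` stands in for whatever multimodal inference
        # interface the VLM under test exposes.
        responses.append(model.generate(image=case.image_path, text=case.prompt))
    return responses
```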