In this paper we address the task of Disturbing Image Detection (DID) by exploiting knowledge encoded in Large Multimodal Models (LMMs). Specifically, we propose to exploit LMM knowledge in a two-fold manner: first by extracting generic semantic descriptions, and second by extracting elicited emotions. Subsequently, we use CLIP's text encoder to obtain text embeddings of both the generic semantic descriptions and the LMM-elicited emotions. Finally, we use these text embeddings, along with the corresponding CLIP image embeddings, to perform the DID task. The proposed method significantly improves the baseline classification accuracy, achieving state-of-the-art performance on the augmented Disturbing Image Detection dataset.
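The pipeline described in the abstract can be sketched as follows. This is a minimal illustration only: it assumes precomputed CLIP embeddings (random stand-ins here), a mean-pooling fusion of the per-image text embeddings, and a small MLP classification head. The embedding dimension, the numbers of descriptions and emotions per image, and the fusion strategy are all hypothetical choices for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

# Hypothetical constants (not from the paper): we assume 768-d CLIP
# embeddings and fixed-size sets of LMM outputs per image.
EMB_DIM = 768
N_DESC = 10   # LMM-generated generic semantic descriptions per image
N_EMO = 5     # LMM-elicited emotions per image


class DIDClassifier(nn.Module):
    """Binary disturbing-image classifier over fused CLIP embeddings.

    The CLIP image embedding is concatenated with the mean-pooled CLIP
    text embeddings of the semantic descriptions and of the elicited
    emotions (one simple fusion choice among many).
    """

    def __init__(self, emb_dim: int = EMB_DIM):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(emb_dim * 3, 256),
            nn.ReLU(),
            nn.Linear(256, 2),  # disturbing vs. non-disturbing
        )

    def forward(self, img_emb, desc_embs, emo_embs):
        # Mean-pool each per-image set of text embeddings, then fuse
        # with the image embedding by concatenation.
        fused = torch.cat(
            [img_emb, desc_embs.mean(dim=1), emo_embs.mean(dim=1)],
            dim=-1,
        )
        return self.head(fused)


# Random stand-ins for CLIP embeddings of a batch of 4 images; in the
# actual method these would come from CLIP's image and text encoders.
img_emb = torch.randn(4, EMB_DIM)
desc_embs = torch.randn(4, N_DESC, EMB_DIM)
emo_embs = torch.randn(4, N_EMO, EMB_DIM)

logits = DIDClassifier()(img_emb, desc_embs, emo_embs)
print(logits.shape)  # torch.Size([4, 2])
```

In practice the stand-in tensors would be replaced by embeddings from CLIP's frozen image and text encoders, with the text inputs produced by prompting an LMM for descriptions and elicited emotions.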