Large visual-language models (LVLMs) have achieved great success across multiple applications. However, they still struggle in complex scenes, especially those involving camouflaged objects, primarily because samples of camouflaged scenes are scarce in existing training data. To mitigate this issue, we construct the MM-CamObj dataset, the first of its kind, comprising two subsets: CamObj-Align and CamObj-Instruct. Specifically, CamObj-Align contains 11,363 image-text pairs and is designed for vision-language (VL) alignment, injecting rich knowledge of camouflaged scenes into LVLMs. CamObj-Instruct is collected for fine-tuning LVLMs to improve their instruction-following capabilities; it includes 11,363 images and 68,849 conversations with diverse instructions. Based on the MM-CamObj dataset, we propose CamObj-Llava, an LVLM specifically designed for tasks in camouflaged scenes. To help our model effectively acquire knowledge of camouflaged objects and scenes, we introduce a curriculum learning strategy with six distinct modes. Additionally, we construct CamObj-Bench to evaluate existing LVLMs' capabilities in understanding, recognition, localization, and counting in camouflaged scenes. This benchmark includes 600 images and 7 tasks, with a total of 9,449 questions. Extensive experiments are conducted on CamObj-Bench with CamObj-Llava, 8 existing open-source LVLMs, and 3 closed-source LVLMs. Surprisingly, the results indicate that our model achieves a 25.84% improvement over GPT-4o in 4 out of 7 tasks. Code and datasets will be available at https://github.com/JCruan519/MM-CamObj.
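The abstract names a curriculum learning strategy with six distinct modes but does not specify them. The sketch below illustrates one plausible form such a strategy could take: samples are sorted by a precomputed difficulty score and progressively "unlocked" by a pacing function as training advances. The `Sample` fields, the mode names (linear, root, geometric), and the pacing curves are illustrative assumptions, not the paper's actual implementation.

```python
# A minimal, hypothetical sketch of curriculum-ordered sampling for
# instruction-tuning data. Mode names, difficulty scores, and pacing
# curves are assumptions for illustration only.
import math
import random
from dataclasses import dataclass


@dataclass
class Sample:
    image_path: str
    conversation: list   # list of (question, answer) turns
    difficulty: float    # assumed precomputed score in [0, 1]


def pacing(step: int, total_steps: int, mode: str = "linear") -> float:
    """Fraction of the difficulty-sorted dataset visible at `step`.

    A real system might define six such modes; three are shown here.
    """
    t = step / max(total_steps, 1)
    if mode == "linear":
        return t
    if mode == "root":
        return math.sqrt(t)
    if mode == "geometric":
        return 2.0 ** (t - 1.0)  # starts at 0.5, reaches 1.0
    raise ValueError(f"unknown pacing mode: {mode}")


def curriculum_batches(samples, total_steps, batch_size, mode="linear", seed=0):
    """Yield batches drawn only from the currently unlocked (easy) portion."""
    rng = random.Random(seed)
    ordered = sorted(samples, key=lambda s: s.difficulty)  # easy -> hard
    for step in range(total_steps):
        visible = max(batch_size, int(pacing(step, total_steps, mode) * len(ordered)))
        pool = ordered[: min(visible, len(ordered))]
        yield rng.sample(pool, k=min(batch_size, len(pool)))


if __name__ == "__main__":
    data = [Sample(f"img_{i}.jpg", [("Q", "A")], difficulty=i / 10) for i in range(10)]
    for batch in curriculum_batches(data, total_steps=3, batch_size=2, mode="root"):
        print([s.image_path for s in batch])
```

Under this kind of scheme, early batches are dominated by easy examples (e.g. clearly visible objects), and harder camouflaged cases enter the pool only as the pacing function approaches 1.0.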