Recent years have witnessed the rapid development of general multimodal large language models (MLLMs). However, adapting general MLLMs to specific domains, such as scientific fields and industrial applications, remains less explored. This paper systematically investigates domain adaptation of MLLMs through post-training, focusing on data synthesis, training pipelines, and task evaluation. (1) Data Synthesis: Using open-source models, we develop a visual instruction synthesizer that effectively generates diverse visual instruction tasks from domain-specific image-caption pairs. Our synthetic tasks surpass those generated by manual rules, GPT-4, and GPT-4V in enhancing the domain-specific performance of MLLMs. (2) Training Pipeline: While general MLLMs are commonly developed with two-stage training (initial training on image-caption pairs followed by training on visual instruction tasks), we apply a single-stage training pipeline to enhance task diversity for domain-specific post-training. (3) Task Evaluation: We conduct experiments in two domains, biomedicine and food, by post-training MLLMs of different sources and scales (e.g., Qwen2-VL-2B, LLaVA-v1.6-8B, Llama-3.2-11B) and then evaluating their performance on various domain-specific tasks. To support further research on MLLM domain adaptation, we will open-source our implementations.