Understanding human perception, including aspects such as sentiment tendency and sense of humor, is a formidable multimodal challenge for computers. Although various methods have recently been proposed to extract modality-invariant and modality-specific information from diverse modalities to enhance multimodal learning, few works have explored this idea in large language models. In this paper, we introduce a novel multimodal prompt strategy tailored to tuning large language models. Our method assesses the correlation among modalities, isolates their modality-invariant and modality-specific components, and uses these components for prompt tuning. This enables large language models to assimilate information from multiple modalities efficiently and effectively. Furthermore, our strategy is designed with scalability in mind, allowing features from any modality to be integrated into pretrained large language models. Experimental results on public datasets demonstrate that the proposed method significantly outperforms previous methods.
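The decomposition described above can be illustrated with a minimal sketch. This is not the paper's implementation: the feature shapes, the mean-based estimate of the invariant component, and the prompt construction are all simplifying assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality features; names and dimensions are illustrative,
# not taken from the paper.
feats = {
    "text": rng.standard_normal((4, 16)),   # (seq_len, dim)
    "audio": rng.standard_normal((4, 16)),
    "video": rng.standard_normal((4, 16)),
}

def decompose(feats):
    """Split each modality into a modality-invariant part (shared across
    modalities) and a modality-specific residual.

    Here the invariant part is crudely estimated as the cross-modality
    mean; the actual method would learn this decomposition.
    """
    stacked = np.stack(list(feats.values()))   # (n_modalities, seq, dim)
    invariant = stacked.mean(axis=0)           # shared estimate
    specific = {m: f - invariant for m, f in feats.items()}
    return invariant, specific

invariant, specific = decompose(feats)

# Assemble a soft-prompt matrix: shared rows first, then the
# modality-specific rows for each modality in turn.
prompt = np.concatenate([invariant] + list(specific.values()), axis=0)
print(prompt.shape)  # (16, 16): 4 shared rows + 3 modalities * 4 specific rows
```

Each modality's original features are exactly recovered as the sum of the invariant and its specific component, so no information is discarded by the split; the prompt rows would then be prepended to the language model's input embeddings during tuning.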