Understanding the emotions in a dialogue usually requires external knowledge to interpret the content accurately. As LLMs become increasingly powerful, we do not want to settle for the limited capabilities of a pre-trained language model. However, LLMs either can process only the text modality or are too expensive for processing multimedia information. We aim to combine the power of LLMs with the supplementary features of multimedia modalities. In this paper, we present Lantern, a framework that improves the performance of a given vanilla model by prompting large language models with receptive-field-aware attention weighting. The framework trains a multi-task vanilla model to produce probabilities of emotion classes and dimension scores. These predictions are fed into the LLM as references so that it can adjust the predicted probability of each emotion class with its external knowledge and contextual understanding. We slice the dialogue into different receptive fields, and each sample is included in exactly t receptive fields. Finally, the LLM predictions are merged with a receptive-field-aware attention-driven weighting module. In the experiments, the vanilla models CORECT and SDT are deployed in Lantern with GPT-4 or Llama-3.1-405B. Experiments on IEMOCAP with 4-way and 6-way settings demonstrate that Lantern significantly improves the performance of the current vanilla models by up to 1.23% and 1.80%, respectively.
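The slicing-and-merging step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the sliding-window construction (size-t windows with stride 1, clipped at the dialogue boundaries so every utterance lands in exactly t fields) and the softmax attention scoring are assumptions made for the sketch.

```python
import numpy as np


def slice_receptive_fields(n_utterances: int, t: int) -> list[list[int]]:
    """Slice a dialogue of n_utterances into overlapping receptive fields.

    Hypothetical scheme: windows of size t with stride 1, extended past
    both ends and clipped, so that every utterance index 0..n-1 falls
    into exactly t receptive fields.
    """
    fields = []
    for start in range(-(t - 1), n_utterances):
        field = [i for i in range(start, start + t) if 0 <= i < n_utterances]
        fields.append(field)
    return fields


def merge_predictions(preds: list[np.ndarray], scores: np.ndarray) -> np.ndarray:
    """Merge one sample's per-receptive-field LLM class probabilities.

    preds:  t probability vectors (one per receptive field containing
            the sample).
    scores: t attention logits from the weighting module (hypothetical;
            the paper learns these, here they are given).
    Returns the softmax-attention-weighted average distribution.
    """
    weights = np.exp(scores - scores.max())  # numerically stable softmax
    weights /= weights.sum()
    return (weights[:, None] * np.asarray(preds)).sum(axis=0)
```

For example, with a 5-utterance dialogue and t = 2, every utterance appears in exactly two windows, and each sample's final class distribution is a convex combination of its per-window LLM predictions.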