Cross-lingual speech emotion recognition (SER) is important for a wide range of everyday applications. While recent SER research relies heavily on features from large pretrained models, existing studies often concentrate solely on the final transformer layer of these models. However, given the task-specific nature and hierarchical architecture of these models, each transformer layer encapsulates a different level of information. Leveraging this hierarchical structure, our study focuses on the information embedded across the layers. Through an examination of layer-wise feature similarity across languages, we propose a novel layer-anchoring mechanism to facilitate emotion transfer in cross-lingual SER tasks. Our approach is evaluated on two affective corpora in distinct languages (MSP-Podcast and BIIC-Podcast), achieving a best unweighted average recall (UAR) of 60.21% on the BIIC-Podcast corpus. The analysis uncovers interesting insights into the behavior of popular pretrained models.
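The abstract does not specify how layer feature similarity is measured; a common choice for comparing transformer-layer representations is linear Centered Kernel Alignment (CKA). The sketch below is a minimal, hypothetical illustration of comparing two layers' activations with linear CKA — the arrays stand in for per-utterance layer embeddings and are not drawn from the paper's models or corpora.

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two feature matrices.

    X, Y: (n_samples, dim) activations for the same utterances, e.g. from
    two transformer layers (or the same layer index in two languages).
    Returns a similarity in [0, 1]; identical representations give 1.
    """
    # Center each feature dimension.
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    # HSIC-based formulation for the linear kernel (Kornblith et al., 2019).
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

# Toy stand-in: 100 utterances with 32-dim layer embeddings.
rng = np.random.default_rng(0)
layer_a = rng.normal(size=(100, 32))  # hypothetical layer activations
layer_c = rng.normal(size=(100, 32))  # unrelated activations

print(linear_cka(layer_a, layer_a))  # ≈ 1.0 (self-similarity)
print(linear_cka(layer_a, layer_c))  # low for independent features
```

In a cross-lingual setting, one would compute such a similarity between every pair of layer indices across the two languages' encoders, yielding a similarity matrix from which anchoring layers could be chosen.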