We present a lightweight multimodal baseline for emotion recognition in conversations using the SemEval-2024 Task 3 dataset built from the sitcom Friends. The goal of this report is not to propose a novel state-of-the-art method, but to document an accessible reference implementation that combines (i) a transformer-based text classifier and (ii) a self-supervised speech representation model, with a simple late-fusion ensemble. We report the baseline setup and empirical results obtained under a limited training protocol, highlighting when multimodal fusion improves over unimodal models. This preprint is provided for transparency and to support future, more rigorous comparisons.
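The late-fusion ensemble described above can be sketched as a weighted average of the per-class probabilities produced by the two unimodal models. This is a minimal illustration under assumed interfaces: the function name, the equal default weight, and the three-class toy inputs are placeholders, not the report's exact implementation.

```python
import numpy as np

def late_fusion(text_probs, audio_probs, weight=0.5):
    """Fuse per-class probabilities from a text classifier and a speech
    model by weighted averaging, then return the predicted class index.
    (Hypothetical interface; the report's fusion weights are not given here.)"""
    text_probs = np.asarray(text_probs, dtype=float)
    audio_probs = np.asarray(audio_probs, dtype=float)
    fused = weight * text_probs + (1.0 - weight) * audio_probs
    return int(np.argmax(fused))

# Toy example with 3 emotion classes.
text_probs = [0.2, 0.7, 0.1]   # e.g. softmax output of a transformer text classifier
audio_probs = [0.1, 0.4, 0.5]  # e.g. softmax output of a speech SSL model head
print(late_fusion(text_probs, audio_probs))  # prints 1 (fused scores: 0.15, 0.55, 0.30)
```

A weighted average of probabilities is one of the simplest late-fusion schemes; it keeps the unimodal models independent, so either can be evaluated or replaced on its own.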