Speech is the most natural way for humans to express themselves. Identifying emotion from speech is a nontrivial task because emotion itself is ambiguously defined. Speech Emotion Recognition (SER) is essential for understanding human emotional behavior, yet it is challenging due to speaker variability, background noise, the complexity of emotions, and differences in speaking style. It has many applications in education, healthcare, customer service, and Human-Computer Interaction (HCI). Conventional machine learning methods such as Support Vector Machines (SVM), Hidden Markov Models (HMM), and K-Nearest Neighbors (KNN) have previously been applied to SER. In recent years, deep learning methods have become popular, with convolutional and recurrent neural networks used for SER; their inputs are mostly spectrograms and hand-crafted features. In this work, we study self-supervised transformer-based models, Wav2Vec2 and HuBERT, for recognizing speakers' emotions from their voice. These models automatically extract features from raw audio signals, which are then used for the classification task. The proposed approach is evaluated on well-known datasets, including RAVDESS, SHEMO, SAVEE, AESDD, and Emo-DB, and the results demonstrate its effectiveness across datasets. Moreover, the model has been applied to real-world settings such as call-center conversations, where it accurately predicts speakers' emotions.
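The pipeline described above can be sketched as follows. A self-supervised encoder (Wav2Vec2 or HuBERT, e.g. via Hugging Face Transformers) maps raw audio to a sequence of frame-level feature vectors, which are then pooled and classified. This is a minimal illustrative sketch only: loading the real pretrained encoders requires model downloads, so random features of the typical base-model hidden size (768) stand in for encoder output, and the untrained linear softmax head, the 7-class label set, and all names here are assumptions, not the paper's implementation.

```python
import numpy as np

HIDDEN_SIZE = 768   # hidden size of the Wav2Vec2/HuBERT base models
NUM_EMOTIONS = 7    # e.g. Emo-DB's seven emotion classes (assumed here)

rng = np.random.default_rng(0)

def classify(frame_features: np.ndarray, W: np.ndarray, b: np.ndarray) -> int:
    """Mean-pool frame features over time, then apply a linear softmax head."""
    pooled = frame_features.mean(axis=0)   # (HIDDEN_SIZE,)
    logits = pooled @ W + b                # (NUM_EMOTIONS,)
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    probs /= probs.sum()
    return int(np.argmax(probs))

# Stand-in for encoder output on a short clip (~100 frames); in the real
# pipeline these frames would come from Wav2Vec2/HuBERT on 16 kHz audio.
features = rng.standard_normal((100, HIDDEN_SIZE))
W = rng.standard_normal((HIDDEN_SIZE, NUM_EMOTIONS)) * 0.01
b = np.zeros(NUM_EMOTIONS)

predicted = classify(features, W, b)
print(predicted)  # an index into the emotion label set
```

In practice the head would be trained jointly with (or on top of) the frozen or fine-tuned encoder; mean pooling over time is one common way to collapse the variable-length frame sequence into a fixed-size utterance representation.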