Speech emotion recognition (SER) is a pivotal technology for human-computer interaction systems. However, 80.77% of SER papers yield results that cannot be reproduced. We develop EMO-SUPERB, short for EMOtion Speech Universal PERformance Benchmark, which aims to enhance open-source initiatives for SER. EMO-SUPERB includes a user-friendly codebase that leverages 15 state-of-the-art speech self-supervised learning models (SSLMs) for exhaustive evaluation across six open-source SER datasets. EMO-SUPERB streamlines result sharing via an online leaderboard, fostering collaboration within a community-driven benchmark and thereby advancing the development of SER. On average, 2.58% of annotations are expressed in natural language. Because SER typically relies on classification models that cannot process natural language, these valuable annotations are discarded. We prompt ChatGPT to mimic annotators, comprehend the natural-language annotations, and re-label the data accordingly. By utilizing the labels generated by ChatGPT, we consistently achieve an average relative gain of 3.08% across all settings.
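The relabeling step can be pictured as prompt construction plus a simple response parser. The sketch below is a minimal illustration under assumed conventions: the label set, prompt wording, and function names are hypothetical, not the paper's exact pipeline, and the actual call to ChatGPT is omitted.

```python
from typing import Optional

# Assumed four-class label set for illustration; each dataset defines its own.
LABELS = ["angry", "happy", "neutral", "sad"]


def build_relabel_prompt(description: str, labels=LABELS) -> str:
    """Ask the model to act as an annotator and map a free-text note to a label."""
    options = ", ".join(labels)
    return (
        "You are an emotion annotator. A human annotator described a speech clip as:\n"
        f'"{description}"\n'
        f"Choose the single best-matching label from: {options}. "
        "Answer with the label only."
    )


def parse_label(response: str, labels=LABELS) -> Optional[str]:
    """Extract the first known label mentioned in the model's reply."""
    reply = response.strip().lower()
    for label in labels:
        if label in reply:
            return label
    return None  # no recognizable label: fall back to discarding the annotation
```

The prompt would be sent to ChatGPT via its chat API; `parse_label` then maps the free-form reply back into the closed label set that a standard SER classifier expects.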