Emotion recognition from human speech is a critical enabler for socially aware conversational AI. However, while most prior work frames emotion recognition as a categorical classification problem, real-world affective states are often ambiguous, overlapping, and context-dependent, posing significant challenges for both annotation and automatic modeling. Recent large-scale audio language models (ALMs) offer new opportunities for nuanced affective reasoning without explicit emotion supervision, but their capacity to handle ambiguous emotions remains underexplored. At the same time, advances in inference-time techniques such as test-time scaling (TTS) have shown promise for improving generalization and adaptability in hard NLP tasks, but their relevance to affective computing is still largely unknown. In this work, we introduce the first benchmark for ambiguous emotion recognition in speech with ALMs under test-time scaling. Our evaluation systematically compares eight state-of-the-art ALMs and five TTS strategies across three prominent speech emotion datasets. We further provide an in-depth analysis of the interaction between model capacity, TTS, and affective ambiguity, offering new insights into the computational and representational challenges of ambiguous emotion understanding. Our benchmark establishes a foundation for developing more robust, context-aware, and emotionally intelligent speech-based AI systems, and highlights key future directions for bridging the gap between model assumptions and the complexity of real-world human emotion.