While the automatic evaluation of omni-modal large models (OLMs) is essential, assessing empathy remains a significant challenge due to its inherently affective nature. To investigate this challenge, we introduce AEQ-Bench (Audio Empathy Quotient Benchmark), a novel benchmark that systematically assesses two core empathetic capabilities of OLMs: (i) generating empathetic responses by comprehending affective cues from multi-modal inputs (audio + text), and (ii) judging the empathy of audio responses without relying on text transcription. Compared to existing benchmarks, AEQ-Bench incorporates two novel settings that vary in context specificity and speech tone. Comprehensive assessment across linguistic and paralinguistic metrics reveals that (1) OLMs trained with audio output capabilities generally outperform models with text-only outputs, and (2) while OLMs align with human judgments in coarse-grained quality assessment, they remain unreliable for evaluating fine-grained paralinguistic expressiveness.