Recent advances in Large Language Models (LLMs) have highlighted the need for robust, comprehensive, and challenging benchmarks. Yet, research on evaluating their Emotional Intelligence (EI) is considerably limited. Existing benchmarks have two major shortcomings: first, they mainly focus on emotion recognition, neglecting essential EI capabilities such as emotion regulation and thought facilitation through emotion understanding; second, they are primarily constructed from existing datasets, which include frequent patterns, explicit information, and annotation errors, leading to unreliable evaluation. We propose EmoBench, a benchmark that draws on established psychological theories to formulate a comprehensive definition of machine EI, encompassing Emotional Understanding and Emotional Application. EmoBench comprises 400 hand-crafted questions in English and Chinese, each meticulously designed to require thorough reasoning and understanding. Our findings reveal a considerable gap between the EI of existing LLMs and that of the average human, highlighting a promising direction for future research. Our code and data are publicly available at https://github.com/Sahandfer/EmoBench.