Speech encompasses a wealth of information, including but not limited to content, paralinguistic, and environmental information. This comprehensive nature of speech significantly impacts communication and is crucial for human-computer interaction. Chat-Oriented Large Language Models (LLMs), known for their general-purpose assistance capabilities, have evolved to handle multi-modal inputs, including speech. Although these models are adept at recognizing and analyzing speech, they often fall short of generating appropriate responses. We argue that this stems from a lack of principles for task definition and model development, which in turn requires open-source datasets and metrics suitable for model evaluation. To bridge this gap, we present SD-Eval, a benchmark dataset aimed at multidimensional evaluation of spoken dialogue understanding and generation. SD-Eval focuses on paralinguistic and environmental information and includes 7,303 utterances, amounting to 8.76 hours of speech data. The data is aggregated from eight public datasets, representing four perspectives: emotion, accent, age, and background sound. To assess the SD-Eval benchmark dataset, we implement three different models and construct a training set following a process similar to that of SD-Eval. The training set contains 1,052.72 hours of speech data and 724.4k utterances. We also conduct a comprehensive evaluation of the generated responses using objective evaluation methods (e.g., BLEU and ROUGE), subjective evaluations, and LLM-based metrics. Models conditioned on paralinguistic and environmental information outperform their counterparts in both objective and subjective measures. Moreover, experiments demonstrate that LLM-based metrics show a higher correlation with human evaluation than traditional metrics. We open-source SD-Eval at https://github.com/amphionspace/SD-Eval.
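As a minimal sketch of the objective evaluation step mentioned above, the snippet below computes BLEU and ROUGE-L over generated responses against reference responses. The sacrebleu and rouge-score packages, along with the example hypotheses and references, are illustrative assumptions and not necessarily the exact tooling or data used by SD-Eval.

import sacrebleu
from rouge_score import rouge_scorer

# Hypothetical model outputs and reference responses (placeholders, not SD-Eval data).
hypotheses = ["I'm sorry you sound upset. Do you want to talk about it?"]
references = ["You sound upset. I'm here if you'd like to talk about what happened."]

# Corpus-level BLEU over all generated responses.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])

# Sentence-level ROUGE-L F1, averaged over the corpus.
scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)
rouge_l = sum(
    scorer.score(ref, hyp)["rougeL"].fmeasure
    for ref, hyp in zip(references, hypotheses)
) / len(hypotheses)

print(f"BLEU: {bleu.score:.2f}  ROUGE-L F1: {rouge_l:.4f}")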