Speech quality assessment typically requires evaluating audio from multiple aspects, such as mean opinion score (MOS) and speaker similarity (SIM), which are difficult to cover with a single small model designed for one task. In this paper, we propose leveraging recently introduced auditory large language models (LLMs) for automatic speech quality assessment. By employing task-specific prompts, auditory LLMs are finetuned to predict MOS, SIM, and A/B testing results, which are commonly used for evaluating text-to-speech systems. Additionally, the finetuned auditory LLM can generate natural language descriptions assessing aspects such as noisiness, distortion, discontinuity, and overall quality, providing more interpretable outputs. Extensive experiments have been conducted on the NISQA, BVCC, SOMOS, and VoxSim speech quality datasets, using open-source auditory LLMs such as SALMONN, Qwen-Audio, and Qwen2-Audio. For the natural language description task, the commercial model Google Gemini 1.5 Pro is also evaluated. The results demonstrate that auditory LLMs achieve performance competitive with state-of-the-art task-specific small models in predicting MOS and SIM, while also delivering promising results in A/B testing and natural language descriptions. Our data processing scripts and finetuned model checkpoints will be released upon acceptance.