Achieving pronunciation proficiency in a second language (L2) remains a challenge, despite the development of Computer-Assisted Pronunciation Training (CAPT) systems. Traditional CAPT systems often provide unintuitive feedback that lacks actionable guidance, limiting their effectiveness. Recent advances in audio-language models (ALMs) offer the potential to enhance these systems with more user-friendly feedback. In this work, we investigate ALMs for chat-based pronunciation training by introducing L2-Arctic-plus, an English dataset with detailed error explanations and actionable suggestions for improvement. We benchmark cascaded ASR+LLM pipelines and existing ALMs on this dataset, specifically on detecting mispronunciations and generating actionable feedback. To improve performance, we further propose instruction-tuning ALMs on L2-Arctic-plus. Experimental results demonstrate that our instruction-tuned models significantly outperform existing baselines on mispronunciation detection and suggestion generation under both objective and human evaluation, highlighting the value of the proposed dataset.