In this paper, we explore the untapped potential of Whisper, a well-established automatic speech recognition (ASR) foundation model, in the context of L2 spoken language assessment (SLA). Unlike prior studies that extrinsically analyze transcriptions produced by Whisper, our approach goes a step further, probing its latent capabilities by extracting acoustic and linguistic features from its hidden representations. By training only a lightweight classifier on top of Whisper's intermediate and final outputs, our method achieves strong performance on the GEPT picture-description dataset, outperforming existing cutting-edge baselines, including a multimodal approach. Furthermore, by incorporating image and text-prompt information as auxiliary relevance cues, we demonstrate additional performance gains. Finally, we conduct an in-depth analysis of Whisper's embeddings, which reveals that, even without task-specific fine-tuning, the model intrinsically encodes both ordinal proficiency patterns and semantic aspects of speech, highlighting its potential as a powerful foundation for SLA and other spoken language understanding tasks.
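As a rough illustration of the probing setup described above, the sketch below extracts hidden states from a frozen Whisper encoder and scores them with a lightweight classification head. The checkpoint name (`openai/whisper-base`), the mean-pooling strategy, the `ProficiencyHead` module, and `num_levels` are all illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
from transformers import WhisperFeatureExtractor, WhisperModel

# Load a pretrained Whisper checkpoint; its weights stay frozen throughout.
# The checkpoint choice is an assumption for illustration.
feature_extractor = WhisperFeatureExtractor.from_pretrained("openai/whisper-base")
whisper = WhisperModel.from_pretrained("openai/whisper-base")
whisper.eval()

class ProficiencyHead(nn.Module):
    """Hypothetical lightweight classifier over pooled Whisper hidden states."""
    def __init__(self, hidden_size: int, num_levels: int):
        super().__init__()
        self.proj = nn.Linear(hidden_size, num_levels)

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Mean-pool over the time axis, then map to proficiency-level logits.
        pooled = hidden_states.mean(dim=1)
        return self.proj(pooled)

def score(waveform, sampling_rate=16000, num_levels=3):
    """Score one utterance (16 kHz mono waveform as a 1-D float array)."""
    inputs = feature_extractor(waveform, sampling_rate=sampling_rate,
                               return_tensors="pt")
    with torch.no_grad():
        # output_hidden_states=True exposes intermediate encoder layers
        # in addition to the final one, matching the idea of probing both
        # intermediate and final outputs.
        enc = whisper.encoder(inputs.input_features,
                              output_hidden_states=True)
    head = ProficiencyHead(whisper.config.d_model, num_levels)
    return head(enc.last_hidden_state)
```

In practice, features from several intermediate layers could be concatenated or weighted rather than using only the final layer, in line with the abstract's mention of both intermediate and final outputs; the classifier shown here is deliberately minimal.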