A long-standing question in automatic speech recognition research is how to attribute errors to the ability of a model to model the acoustics, versus its ability to leverage higher-order context (lexicon, morphology, syntax, semantics). We validate a novel approach which models error rates as a function of relative textual predictability, and yields a single number, $k$, which measures the effect of textual predictability on the recognizer. We use this method to demonstrate that a Wav2Vec 2.0-based model makes stronger use of textual context than a hybrid ASR model, in spite of not using an explicit language model, and also use it to shed light on recent results demonstrating poor performance of standard ASR systems on African-American English. We demonstrate that these mostly represent failures of acoustic--phonetic modelling. We show how this approach can be used straightforwardly in diagnosing and improving ASR.