Spurred by the demand for interpretable models, research on eXplainable AI for language technologies has experienced significant growth, with feature attribution methods emerging as a cornerstone of this progress. While prior work in NLP explored such methods for classification tasks and textual applications, explainability intersecting generation and speech is lagging, with existing techniques failing to account for the autoregressive nature of state-of-the-art models and to provide fine-grained, phonetically meaningful explanations. We address this gap by introducing Spectrogram Perturbation for Explainable Speech-to-text Generation (SPES), a feature attribution technique applicable to sequence generation tasks with autoregressive models. SPES provides explanations for each predicted token based on both the input spectrogram and the previously generated tokens. Extensive evaluation on speech recognition and translation demonstrates that SPES generates explanations that are faithful and plausible to humans.
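For intuition only, below is a minimal sketch of the generic occlusion-style perturbation idea the abstract alludes to, applied to a single predicted token: mask time-frequency patches of the input spectrogram one at a time and measure how much the model's log-probability of that token drops. This is not the authors' SPES implementation (which, per the abstract, also attributes over previously generated tokens, not just the spectrogram); the names `occlusion_saliency`, `token_logprob_fn`, and the toy scorer are hypothetical, introduced here for illustration.

```python
import numpy as np


def occlusion_saliency(spectrogram, prev_tokens, target_token, token_logprob_fn,
                       patch_t=8, patch_f=8, fill_value=0.0):
    """Occlusion-style attribution over time-frequency patches.

    For one predicted token, mask one spectrogram patch at a time and record
    how much the model's log-probability of that token (given the previously
    generated tokens) drops; larger drops mark more influential regions.
    """
    n_t, n_f = spectrogram.shape
    baseline = token_logprob_fn(spectrogram, prev_tokens, target_token)
    saliency = np.zeros_like(spectrogram)
    for t0 in range(0, n_t, patch_t):
        for f0 in range(0, n_f, patch_f):
            perturbed = spectrogram.copy()
            perturbed[t0:t0 + patch_t, f0:f0 + patch_f] = fill_value
            # Score drop caused by hiding this patch.
            saliency[t0:t0 + patch_t, f0:f0 + patch_f] = (
                baseline - token_logprob_fn(perturbed, prev_tokens, target_token)
            )
    return saliency


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    spec = rng.random((80, 64))  # stand-in for a log-Mel spectrogram (time x freq)

    # Toy stand-in for a real autoregressive scorer: its "log-probability"
    # depends only on the energy in one fixed region, so that region should
    # dominate the resulting saliency map.
    def toy_logprob(s, prev_tokens, token):
        return float(s[16:24, 8:16].sum())

    sal = occlusion_saliency(spec, prev_tokens=[5, 17], target_token=42,
                             token_logprob_fn=toy_logprob)
    print("inside region:", sal[20, 10], "| outside:", sal[60, 40])
```

In the toy run, patches overlapping the fixed region produce a large positive drop while all other patches score zero, which is the behavior a faithful perturbation-based explanation should exhibit.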