Spurred by the demand for interpretable models, research on eXplainable AI for language technologies has grown rapidly, with feature attribution methods emerging as a cornerstone of this progress. While prior work in NLP has explored such methods for classification tasks and textual applications, explainability at the intersection of generation and speech lags behind: existing techniques fail to account for the autoregressive nature of state-of-the-art models and cannot provide fine-grained, phonetically meaningful explanations. We address this gap by introducing Spectrogram Perturbation for Explainable Speech-to-text Generation (SPES), a feature attribution technique applicable to sequence generation tasks with autoregressive models. SPES provides an explanation for each predicted token based on both the input spectrogram and the previously generated tokens. Extensive evaluation on speech recognition and translation demonstrates that SPES generates explanations that are faithful and plausible to humans.
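To make the perturbation-based attribution idea concrete, the following is a minimal, illustrative sketch (not the SPES method itself): it occludes time-frequency patches of a spectrogram and records the resulting drop in a model's probability for the next token. The `token_prob` function is a hypothetical stand-in for a real autoregressive speech-to-text model, and the patch size and zero-masking strategy are assumptions for illustration only.

```python
import numpy as np

def token_prob(spectrogram, prev_tokens):
    # Hypothetical stand-in for an autoregressive speech-to-text model:
    # returns the probability of the next token given the input
    # spectrogram and the previously generated tokens. Here it is a toy
    # score that depends on the mean energy of the input.
    return 1.0 / (1.0 + np.exp(-spectrogram.mean() - 0.1 * len(prev_tokens)))

def occlusion_saliency(spectrogram, prev_tokens, patch=(4, 4)):
    """Perturbation-based attribution sketch: zero out each
    time-frequency patch and record the drop in the model's
    probability for the next predicted token."""
    base = token_prob(spectrogram, prev_tokens)
    saliency = np.zeros_like(spectrogram)
    n_freq, n_time = spectrogram.shape
    pf, pt = patch
    for f in range(0, n_freq, pf):
        for t in range(0, n_time, pt):
            perturbed = spectrogram.copy()
            perturbed[f:f + pf, t:t + pt] = 0.0  # occlude one patch
            # Larger drop => the patch mattered more for this token.
            saliency[f:f + pf, t:t + pt] = base - token_prob(perturbed, prev_tokens)
    return saliency

rng = np.random.default_rng(0)
spec = rng.normal(size=(16, 32))  # mock log-Mel spectrogram (freq x time)
sal = occlusion_saliency(spec, prev_tokens=["<s>", "hel"])
print(sal.shape)  # one attribution score per spectrogram cell
```

Running this over every position of the generated sequence yields a per-token saliency map over both the input spectrogram and the preceding tokens, which is the kind of explanation the abstract describes.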