Data augmentation is a widely adopted technique for improving the robustness of automatic speech recognition (ASR). A common practice is to apply a fixed augmentation strategy to all training data. However, samples within a single training batch can differ in background noise, speech rate, and other factors, so a fixed strategy risks driving the model toward a suboptimal state. Moreover, the model's capabilities differ across training stages. To address these issues, this paper proposes sample-adaptive data augmentation with progressive scheduling (PS-SapAug). The proposed method applies dynamic data augmentation in a two-stage training approach: it employs hybrid normalization to compute sample-specific augmentation parameters from each sample's loss, and it gradually increases the augmentation probability as training progresses. Our method is evaluated on the popular ASR benchmarks AISHELL-1 and LibriSpeech-100h, achieving up to 8.13% WER reduction on LibriSpeech-100h test-clean, 6.23% on test-other, and 5.26% on the AISHELL-1 test set, demonstrating the efficacy of our approach in enhancing performance and reducing errors.
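The two ideas in the abstract can be sketched in code: a progressive schedule that ramps the augmentation probability up over training, and a per-sample mapping from loss to augmentation strength. This is an illustrative sketch only; the function names, the linear ramp, the batch-wise min-max normalization (standing in for the paper's hybrid normalization), and the choice to give higher-loss samples milder augmentation are all assumptions, not the paper's exact formulation.

```python
import numpy as np

def aug_probability(step, total_steps, p_min=0.1, p_max=0.9):
    """Progressive scheduling: augmentation probability ramps linearly
    from p_min at the start of training to p_max at the end.
    (A linear ramp is an assumption; the paper's schedule may differ.)"""
    frac = min(step / total_steps, 1.0)
    return p_min + (p_max - p_min) * frac

def sample_aug_strength(batch_losses, s_min=0.0, s_max=1.0, eps=1e-8):
    """Sample-adaptive augmentation: map each sample's loss to an
    augmentation strength via batch-wise min-max normalization
    (a stand-in for the paper's hybrid normalization). Here, harder
    samples (higher loss) receive milder augmentation -- an assumed
    design choice, not taken from the paper."""
    losses = np.asarray(batch_losses, dtype=np.float64)
    norm = (losses - losses.min()) / (losses.max() - losses.min() + eps)
    return s_max - (s_max - s_min) * norm  # high loss -> low strength
```

In a training loop, one would draw a Bernoulli sample with probability `aug_probability(step, total_steps)` to decide whether to augment a given utterance, then scale the augmentation (e.g. SpecAugment mask sizes) by `sample_aug_strength` computed from the batch's per-sample losses.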