The rapid development of neural text-to-speech (TTS) systems has enabled their use in other areas of natural language processing, such as automatic speech recognition (ASR) and spoken language translation (SLT). Given the large number of TTS architectures and their extensions, selecting which TTS system to use for synthetic data creation is not an easy task. We compare five different TTS decoder architectures for synthetic data generation and show their impact on CTC-based speech recognition training. We relate the recognition results to computable metrics such as NISQA MOS and intelligibility, finding no clear correlation with ASR performance. We also observe that, for data generation, autoregressive decoding performs better than non-autoregressive decoding, and we propose an approach to quantify TTS generalization capabilities.