We present SupertonicTTS, a novel text-to-speech (TTS) system designed for improved scalability and efficiency in speech synthesis. SupertonicTTS comprises three components: a speech autoencoder for continuous latent representation, a text-to-latent module that leverages flow matching to map text into the latent space, and an utterance-level duration predictor. To enable a lightweight architecture, we employ a low-dimensional latent space, temporal compression of latents, and ConvNeXt blocks. We further simplify the TTS pipeline by operating directly on raw character-level text and employing cross-attention for text-speech alignment, thereby eliminating the need for grapheme-to-phoneme (G2P) modules and external aligners. In addition, we introduce context-sharing batch expansion, which accelerates loss convergence and stabilizes text-speech alignment. Experimental results demonstrate that SupertonicTTS achieves performance competitive with contemporary TTS models while significantly reducing architectural complexity and computational overhead. Audio samples demonstrating the capabilities of SupertonicTTS are available at: https://supertonictts.github.io/.
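The inference pipeline described above — an utterance-level duration predictor setting the total frame count, cross-attention aligning latent frames with raw character embeddings, and flow matching integrated as an ODE from noise to speech latents — can be sketched as follows. This is a minimal illustration only: all function names, dimensions, the heuristic duration rule, and the toy vector field are assumptions for exposition, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: a small latent dimension stands in for the
# "low-dimensional latent space" the abstract mentions.
LATENT_DIM, TEXT_DIM, N_CHARS = 8, 16, 12

def duration_predictor(char_emb):
    # Utterance-level: one total frame count for the whole input,
    # not per-character durations (toy heuristic here).
    return 4 * char_emb.shape[0]

def cross_attention(queries, keys, values):
    # Scaled dot-product cross-attention: latent frames attend to
    # character embeddings, giving implicit text-speech alignment.
    scores = queries @ keys.T / np.sqrt(keys.shape[1])
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ values

def vector_field(x, t, text_ctx):
    # Stand-in for the learned flow-matching network: a toy field
    # pulling noisy latents toward the text context.
    return text_ctx[:, :LATENT_DIM] - x

def synthesize_latents(char_emb, n_steps=8):
    T = duration_predictor(char_emb)
    x = rng.standard_normal((T, LATENT_DIM))   # start from Gaussian noise
    q_proj = rng.standard_normal((LATENT_DIM, TEXT_DIM))
    for i in range(n_steps):                   # Euler ODE integration
        t = i / n_steps
        ctx = cross_attention(x @ q_proj, char_emb, char_emb)
        x = x + (1.0 / n_steps) * vector_field(x, t, ctx)
    return x  # would be decoded to audio by the speech autoencoder

char_emb = rng.standard_normal((N_CHARS, TEXT_DIM))
latents = synthesize_latents(char_emb)
print(latents.shape)  # → (48, 8)
```

Note how no G2P module or external aligner appears: the characters are consumed directly, and alignment emerges from the cross-attention weights inside the integration loop.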