Neural codec language models have achieved state-of-the-art performance in text-to-speech (TTS) synthesis, leveraging scalable architectures like autoregressive transformers and large-scale speech datasets. By framing voice cloning as a prompt-continuation task, these models excel at cloning voices from short audio samples. However, this approach is limited in its ability to handle numerous or lengthy speech excerpts, since the concatenation of source and target speech must fit within the maximum context length, which is fixed at training time. In this work, we introduce Lina-Speech, a model that replaces traditional self-attention mechanisms with emerging recurrent architectures such as Gated Linear Attention (GLA). Building on the success of initial-state tuning on RWKV, we extend this technique to voice cloning, enabling the use of multiple speech samples and full utilization of the context window at synthesis time. This approach is fast, easy to deploy, and achieves performance comparable to fine-tuned baselines when the adaptation dataset ranges from 3 to 15 minutes. Notably, Lina-Speech matches or outperforms state-of-the-art baseline models, including some with up to four times as many parameters or trained end-to-end. We release our code and checkpoints. Audio samples are available at https://theodorblackbird.github.io/blog/demo_lina/.
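To make the mechanism concrete: a gated linear attention layer maintains a fixed-size matrix-valued state that is decayed by learned gates and updated with a rank-1 outer product at each step, so memory does not grow with sequence length, and the initial state itself can be tuned (as in initial-state tuning) rather than consuming context with an audio prompt. Below is a minimal NumPy sketch of this recurrence under our own simplifying assumptions (per-channel gates, a single head, no normalization); the function and variable names are illustrative, not the paper's implementation.

```python
import numpy as np

def gla_recurrence(q, k, v, alpha, S0):
    """Minimal sketch of a Gated Linear Attention (GLA) recurrence.

    q, k, alpha: arrays of shape (T, d_k); v: (T, d_v);
    S0: (d_k, d_v) initial state. In initial-state tuning, S0 would be
    the per-speaker parameter learned from reference audio (assumption:
    this sketch only shows where such a state enters the recurrence).
    """
    S = S0.copy()
    outputs = []
    for t in range(q.shape[0]):
        # Gated decay of the running state, then a rank-1 update:
        # S_t = diag(alpha_t) S_{t-1} + k_t^T v_t
        S = alpha[t][:, None] * S + np.outer(k[t], v[t])
        outputs.append(q[t] @ S)  # read-out: o_t = q_t S_t
    return np.stack(outputs), S

rng = np.random.default_rng(0)
T, dk, dv = 8, 4, 6
q, k = rng.normal(size=(T, dk)), rng.normal(size=(T, dk))
v = rng.normal(size=(T, dv))
alpha = rng.uniform(0.9, 1.0, size=(T, dk))  # gates in (0, 1)
S0 = np.zeros((dk, dv))  # zero here; tuned per speaker in initial-state tuning
out, final_state = gla_recurrence(q, k, v, alpha, S0)
print(out.shape)
```

Because the state is a fixed `(d_k, d_v)` matrix, conditioning on many or long reference samples changes only `S0`, not the context length, which is the property the abstract exploits.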