Text-to-Speech (TTS) systems face ongoing challenges in processing complex linguistic features, handling polyphonic expressions, and producing natural-sounding multilingual speech, capabilities that are crucial for future AI applications. In this paper, we present Fish-Speech, a novel framework that implements a serial fast-slow Dual Autoregressive (Dual-AR) architecture to enhance the stability of Grouped Finite Scalar Vector Quantization (GFSQ) in sequence-generation tasks. This architecture improves codebook processing efficiency while maintaining high-fidelity outputs, making it particularly effective for AI interactions and voice cloning. Fish-Speech leverages Large Language Models (LLMs) for linguistic feature extraction, eliminating the need for traditional grapheme-to-phoneme (G2P) conversion and thereby streamlining the synthesis pipeline and enhancing multilingual support. Additionally, we develop FF-GAN, which builds on GFSQ to achieve superior compression ratios and near-100\% codebook utilization. Our approach addresses key limitations of current TTS systems while providing a foundation for more sophisticated, context-aware speech synthesis. Experimental results show that Fish-Speech significantly outperforms baseline models in handling complex linguistic scenarios and voice-cloning tasks, demonstrating its potential to advance TTS technology in AI applications. The implementation is open source at \href{https://github.com/fishaudio/fish-speech}{https://github.com/fishaudio/fish-speech}.