Speech tokenizers are foundational to speech language models, yet existing approaches face two major challenges: (1) balancing the trade-off between encoding semantics for understanding and acoustics for reconstruction, and (2) achieving both low bit rates and low token rates. We propose the Speech Diffusion Tokenizer (SiTok), a diffusion autoencoder that jointly learns semantically rich representations through supervised learning and enables high-fidelity audio reconstruction through diffusion. We scale SiTok to 1.6B parameters and train it on 2 million hours of speech. Experiments show that SiTok outperforms strong baselines on understanding, reconstruction, and generation tasks at an extremely low token rate of 12.5 Hz and a bit rate of 200 bits per second.