We introduce SLED, an alternative approach to speech language modeling that encodes speech waveforms into sequences of continuous latent representations and models them autoregressively using an energy distance objective. The energy distance offers an analytical measure of the distributional gap by contrasting simulated and target samples, enabling efficient training that captures the underlying continuous autoregressive distribution. By bypassing reliance on residual vector quantization, SLED avoids discretization errors and eliminates the need for the complicated hierarchical architectures common in existing speech language models. It simplifies the overall modeling pipeline while preserving the richness of speech information and maintaining inference efficiency. Empirical results demonstrate that SLED achieves strong performance in both zero-shot and streaming speech synthesis, showing its potential for broader applications in general-purpose speech language models.
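To make the "contrasting simulated and target samples" idea concrete, the following is a minimal sketch of the standard empirical (generalized) energy distance between two sample sets; the exact loss and any kernel or weighting choices used in SLED are not specified in the abstract, so this is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def energy_distance(x, y):
    """Empirical energy distance between two sample sets.

    x: (n, d) samples simulated by the model.
    y: (m, d) samples from the target distribution.

    Uses the classical form
        ED(X, Y) = 2 E||X - Y|| - E||X - X'|| - E||Y - Y'||,
    which is non-negative and zero iff the distributions coincide.
    NOTE: illustrative only -- SLED's actual objective may differ.
    """
    def mean_pairwise(a, b):
        # Mean Euclidean distance over all pairs (a_i, b_j).
        diff = a[:, None, :] - b[None, :, :]
        return np.linalg.norm(diff, axis=-1).mean()

    return 2 * mean_pairwise(x, y) - mean_pairwise(x, x) - mean_pairwise(y, y)
```

Because the estimator is a simple average of pairwise distances, it is differentiable with respect to the simulated samples, which is what allows it to serve as a training objective for a continuous autoregressive model.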